Using LLMs to Generate GRX Client Code

Large Language Models (LLMs) such as Anthropic’s Claude, OpenAI’s ChatGPT, or Google’s Gemini can generate working GRX client code when provided with the right context. This section explains how to use LLMs effectively for rapid prototyping with the GRX API.

LLM Context File

We provide a single, self-contained context file that contains the API details an LLM needs to generate GRX client code:

Download grx_api_llm_context.txt

This file includes:

  • Complete Protocol Buffer definitions for all five public APIs

  • Service endpoints and port numbers

  • Setup instructions for Python, Java, and C++

  • Working code examples

  • Key patterns, domain knowledge, and gotchas

How to use it:

  1. Download the context file.

  2. Open your preferred LLM (Claude, ChatGPT, Gemini, etc.).

  3. Paste the file's contents (or attach the file itself) at the beginning of your conversation.

  4. Tell the LLM which language you want to use and whether you are working inside this example repository or in a new project.

  5. Ask it to include exact setup commands, working directories, proto binding generation, and run commands.

Example Prompts

Here are some example prompts you can use with the context file. The LLM’s response should include working code and instructions for setting it up and running it.

Basic data retrieval (Python):

Using the GRX API context provided, write a Python script that:
1. Connects to a GRX receiver at YOUR_GRX_IP
2. Retrieves the GNSS position and prints latitude, longitude, and
   fix type
3. Lists all tracked aircraft with their ICAO address and signal level
4. Gets the radio front-end noise level
Include setup instructions, including the working directory, pip install
commands, proto file placement, gRPC binding generation, and the run command.

Real-time monitoring (Python):

Write a Python script that continuously monitors a GRX receiver at
YOUR_GRX_IP and prints:
- GNSS fix status every 5 seconds
- System health (CPU temperature, voltage) every 10 seconds
- A warning if any frames are being dropped on the Mode S stream
Use threading. Include all setup steps.
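
A script generated from the prompt above would typically pair each polling interval with its own thread. The sketch below shows one plausible structure using only the standard library; the poll_* bodies are placeholders where a real script would call the GRX gRPC stubs (the actual service and method names come from the context file):

```python
import threading

def periodic(interval_s, fn, stop_event):
    """Call fn every interval_s seconds until stop_event is set.
    Event.wait doubles as an interruptible sleep, so shutdown is prompt."""
    while not stop_event.wait(interval_s):
        fn()

def poll_gnss():
    # Placeholder: a real script would query the GNSS fix status here.
    print("GNSS fix status: (placeholder)")

def poll_health():
    # Placeholder: a real script would query CPU temperature and voltage here.
    print("system health: (placeholder)")

def start_monitor():
    """Start both pollers as daemon threads; returns the shared stop event."""
    stop = threading.Event()
    for interval, fn in ((5.0, poll_gnss), (10.0, poll_health)):
        threading.Thread(target=periodic, args=(interval, fn, stop),
                         daemon=True).start()
    return stop
```

Calling start_monitor() launches both pollers; setting the returned event stops them cleanly. Using Event.wait as the delay, rather than time.sleep, is what lets each thread exit within one check rather than finishing a full sleep.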

Mode S decoder (Python):

Write a Python script that connects to a GRX at YOUR_GRX_IP, subscribes
to Mode S DF17 (ADS-B) frames, decodes each frame using the pyModeS
library, and prints:
- ICAO address
- Message type (identification, position, velocity)
- Decoded content (callsign, lat/lon, speed/heading)
Include error handling and setup instructions.
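
The prompt above asks the LLM to use pyModeS, which is the right choice for a real script. Purely as an illustration of the bit layout involved, the fixed DF17 fields can be extracted with the standard library alone; this sketch handles only identification messages (type codes 1–4) and ignores the trailing 24-bit CRC:

```python
# 6-bit ICAO character set: 1-26 = A-Z, 32 = space (shown as '_'), 48-57 = 0-9.
_CHARSET = ("#ABCDEFGHIJKLMNOPQRSTUVWXYZ" + "#" * 5 + "_" + "#" * 15
            + "0123456789" + "#" * 6)

def decode_df17(hexmsg: str) -> dict:
    """Extract the fixed fields of a 112-bit DF17 (ADS-B) frame."""
    data = bytes.fromhex(hexmsg)
    out = {
        "df": data[0] >> 3,        # downlink format: first 5 bits
        "icao": data[1:4].hex(),   # 24-bit ICAO address
        "tc": data[4] >> 3,        # type code: first 5 bits of the ME field
    }
    if 1 <= out["tc"] <= 4:        # aircraft identification message
        me = int.from_bytes(data[4:11], "big")   # 56-bit ME field
        # After TC (5 bits) and category (3 bits), 8 six-bit characters follow.
        out["callsign"] = "".join(
            _CHARSET[(me >> (42 - 6 * i)) & 0x3F] for i in range(8)
        )
    return out
```

For example, decode_df17("8D4840D6202CC371C32CE0576098") yields DF 17, ICAO 4840d6, type code 4, and callsign KLM1023. Position and velocity messages need CPR decoding and multi-frame state, which is exactly why the prompt delegates full decoding to pyModeS.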

Waterfall spectrum (Java):

Write a Java application that connects to a GRX receiver and saves
a waterfall spectrogram as a JPEG file. Use the Spectrumd API to
capture 30 seconds of data. Include the Maven pom.xml and instructions
to build and run.

I/Q sample recording (Python):

Write a Python script that uses the Sample Streaming API to record
5 seconds of raw I/Q samples from the 1090 MHz channel of a GRX
receiver to a binary file. Print the stream properties (center
frequency, sample rate) and report any lost blocks. Include setup
instructions.
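
Two pieces of the script requested above can be sketched without the device. This sketch ASSUMES interleaved little-endian signed 16-bit I/Q pairs and monotonically increasing block sequence numbers; the actual sample format must be taken from the stream properties the API reports:

```python
import struct

def iq_bytes_to_complex(raw: bytes) -> list:
    """Convert interleaved little-endian int16 I/Q bytes into complex samples
    (assumed format; confirm against the reported stream properties)."""
    n = len(raw) // 4                          # 4 bytes per complex sample
    ints = struct.unpack("<%dh" % (2 * n), raw[: 4 * n])
    return [complex(ints[2 * i], ints[2 * i + 1]) for i in range(n)]

def count_lost_blocks(seq_numbers) -> int:
    """Count gaps in received block sequence numbers to report lost blocks."""
    lost = 0
    prev = None
    for seq in seq_numbers:
        if prev is not None and seq > prev + 1:
            lost += seq - prev - 1
        prev = seq
    return lost
```

For instance, a received sequence of 0, 1, 2, 5, 6 implies two lost blocks (3 and 4). For recording, the raw bytes can simply be appended to the output file as they arrive and converted later.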

SDR tuning (Python):

Write a Python script for a GRX with a tunable SDR daughterboard that:
1. Lists all tunable channels
2. Tunes the first tunable channel to 433 MHz
3. Captures a 10-second waterfall chart at the new frequency
4. Saves it as waterfall.jpg
Include setup and run instructions.

Multi-service dashboard (C++):

Write a C++ application using CMake that connects to a GRX receiver
and prints a one-time status dashboard showing:
- Number of tracked aircraft
- GNSS fix type and position
- CPU temperature
- Message rates from GetStatistics()
Include the CMakeLists.txt and build instructions.

Tips for Best Results

  • Always provide the context file. Without it, the LLM will not know the GRX-specific message types, field names, or port numbers.

  • Be specific about the language. Mention Python, Java, or C++ explicitly.

  • Specify the project layout. Say whether you are using this repository’s examples or creating a new project. This affects where generated proto bindings should be written.

  • Ask for setup instructions. Include phrases like “include the working directory for each command”, “include proto binding generation”, and “include the exact run command” so the LLM provides ready-to-run output.

  • Iterate. If the generated code doesn’t work, paste the error message back and ask the LLM to fix it.

  • Request specific APIs. Mention the API by name (e.g., “Receiver API”, “Samplestreamingd”) to help the LLM focus on the right service.

Obtaining Proto Files

The .proto files needed for code generation are available for download from your GRX receiver’s built-in documentation at http://<GRX-IP>/doc/ (see the API Reference section). You can also find them in the Protocol Definitions section of this guide.

When generating bindings manually, keep all six public proto files in one directory, because several service definitions import Common.proto. Then:

  • Python: generate bindings into the same directory as your script, or add the generated output directory to PYTHONPATH.

  • Java/Maven: place the proto files in src/main/proto/.

  • C++/CMake: keep the proto files in the directory referenced by your CMakeLists.txt.
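
As a sketch of what the Python toolchain steps might look like (the ./proto directory layout here is an assumption; adjust the paths to your project):

```shell
# Assumed layout: all six .proto files downloaded into ./proto,
# kept together because several of them import Common.proto.
pip install grpcio grpcio-tools

# Generate the Python message and stub modules next to your script:
python -m grpc_tools.protoc \
    --proto_path=proto \
    --python_out=. \
    --grpc_python_out=. \
    proto/*.proto

# Alternatively, generate into a separate directory and put it on PYTHONPATH:
# python -m grpc_tools.protoc --proto_path=proto --python_out=gen --grpc_python_out=gen proto/*.proto
# export PYTHONPATH="$PWD/gen:$PYTHONPATH"
```

For Java and C++, the equivalent generation is normally driven by the Maven protobuf plugin and protoc rules in CMakeLists.txt respectively, as set up by the build files the LLM generates.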