📋 Contents

1 Overview & Motivation

🎯 Why Port to GNU Radio 4.0?

GNU Radio 4.0 ("GR4") is a ground‑up rewrite of the 3.x series. It trades Boost and decade‑old macros for modern C++23 features, an all‑new lock‑free scheduler, and auto‑vectorised math using `<experimental/simd>`. Porting frequently‑used GR3 blocks means everyone—students, hobbyists, scientists—benefits directly.

🔍 What This Guide Covers

📊 Is Your Block a Good Candidate?

Ask yourself:

| Question | Heuristic |
|---|---|
| Used by many flowgraphs? | Check GitHub search or your own notebooks. |
| Algorithm self‑contained? | Fewer external deps = faster port. |
| Contains hand‑written intrinsics? | Great! We'll swap them for `std::simd`. |
| Does it allocate big buffers? | Might need redesign with `std::span`. |

If you answer yes to the first two, dive in.

2 Prerequisites & Setup

Quick‑start: one command, zero surprises
docker pull --platform linux/amd64 ghcr.io/fair-acc/gr4-build-container:latest

🛠 Toolchain Basics (Bare‑Metal)

| Tool | Min Version | Why |
|---|---|---|
| GCC | ≥13.3 (≥14.2 for full `<experimental/simd>`) | minimum to compile GR4 |
| Clang | 18 (recommended) | faster builds, great diagnostics |
| CMake | 3.25 | presets & fetched content |
| Ninja | any | parallel build engine |
| Python | 3.10 | unit‑test harness & bindings |

Install on Ubuntu 24.04:

```bash
sudo add-apt-repository ppa:ubuntu-toolchain-r/test -y
sudo apt update
sudo apt install gcc-14 g++-14 clang-18 cmake ninja-build git python3-pip -y
export CC=gcc-14 CXX=g++-14   # or clang-18/clang++-18
```

🐳 Docker Route (Recommended for Beginners)

  1. Pull the container (see quick‑start).
  2. Run with the current repo mounted:
```bash
docker run --rm -it \
  --volume="${PWD}:/work/src" \
  --workdir="/work/src" \
  ghcr.io/fair-acc/gr4-build-container:latest bash
```

3. Inside the shell, configure + build:

```bash
rm -rf build && mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=RelWithAssert -DGR_ENABLE_BLOCK_REGISTRY=ON ..
cmake --build . -j$(nproc)
ctest --output-on-failure -j$(nproc)
```

💡 ZRAM Route (If Your Laptop Runs Out of RAM)

Building GR4 can transiently spike to 6 GB+ of RAM with GCC's heavy templates. If your system swaps to disk, compile times crawl. zram swaps to compressed RAM instead of SSD.

```bash
# Enable 8 GiB zram swap (needs sudo)
cd gnuradio4
sudo ./enableZRAM.sh   # provided in repo root

# Build as usual
mkdir -p build && cd build
cmake -DCMAKE_BUILD_TYPE=RelWithAssert ..
cmake --build . -j$(nproc)

# Afterwards, free the device
sudo swapoff /dev/zram0
echo 1 | sudo tee /sys/block/zram0/reset
```

♻️ Rule of thumb: if `free -h` shows more than 80 % memory used during the compile, enable zram.

🗂 Working Inside the Main Repo

```bash
# 1. Fork the repo on GitHub
# 2. Clone your fork
git clone https://github.com/<you>/gnuradio4.git
cd gnuradio4
git remote add upstream https://github.com/fair-acc/gnuradio4.git
git checkout -b port/math/MultiplyConst
# ...hack...
# 3. Commit w/ DCO & push
git add .
git commit -s -m "feat(math): port MultiplyConst block (float & complex)"
git push origin port/math/MultiplyConst
# 4. Open PR → choose 'Create pull request'
```
Tip: Use `git push --force-with-lease` (not `--force`) when you squash‑rebase.

3 Architecture Changes – 3.x vs 4.0

3.1 What You're Really Looking At

| Version | What the snippet actually is | Where it lives |
|---|---|---|
| 3.x | `gr_math.h` – a utility header of inline helpers (fast complex‑multiply, lookup tables, etc.). Not a block. | Part of gr‑core; many math blocks call these helpers. |
| 4.0 | `math.hpp` – a collection of fully‑fledged blocks (`Add`, `Multiply`, `AddConst`, …) written with the new `gr::Block<>` API. | Lives in `gr::blocks::math`; each instantiation registered via `GR_REGISTER_BLOCK`. |
So: GR3's header supports math inside blocks, whereas GR4's header is the math blocks.

3.2 Key API Differences

| Category | GNU Radio 3.x | GNU Radio 4.0 | Why it matters |
|---|---|---|---|
| Language era | C++03/11, Boost, macros | C++23‑ready code (concepts, `std::span`) + `std::simd` from `<experimental/simd>` | Safer code, fewer macros, faster binaries |
| Block base | Inherit `gr::sync_block`, override `work()` | `gr::Block<Derived>` (CRTP) → `processBulk()` | Compile‑time errors instead of run‑time segfaults |
| Port model | Hard‑coded indices | `PortIn<T>`/`PortOut<T>` templates | Type‑safe, introspectable |
| Registration | Custom `make()` + YAML | One‑liner `GR_REGISTER_BLOCK` | Slashes boilerplate |
| Vectorisation | Manual intrinsics | `std::simd` auto‑vector | Free speed on AVX/NEON |
| Scheduler | Thread‑per‑block | Task‑based executor (currently Folly; conceptually similar to TBB) | Better CPU utilisation |
| Live parameter change | Custom setters & mutex | Any reflected member mutable at run‑time | Zero extra C++ for AGC, etc. |

3.3 Concrete Example – Multiply‑by‑Constant

| Aspect | Old 3.x | New 4.0 | Benefit |
|---|---|---|---|
| Source files | 10+ variants (`*_ff`, `*_cc`, …) | 1 template | 10× less code |
| Port declaration | Raw pointer arrays | `PortIn<T> in; PortOut<T> out;` | Compilation catches mismatch |
| Parameter | Private `k_`, setter | Public `value`, reflected | No boilerplate |
| SIMD | Optional helper call | Automatic if compiler supports it | Speed |
| GUI | Separate YAML + Python glue | Code‑gen from reflection | Less maintenance |

4 Step‑by‑Step Porting Workflow

Roadmap Follow each sub‑step and run tests after every compile.

4.1 Pre‑Port Checklist

4.2 Porting at a Glance

```mermaid
flowchart LR
    A[Understand GR3 block] --> B[Copy tests to /tests]
    B --> C[Write GR4 skeleton header]
    C --> D[Port algorithm to processBulk]
    D --> E[Add GR_MAKE_REFLECTABLE]
    E --> F[Register block]
    F --> G[Compile & run tests]
    G --> H[Open PR]
```

4.3 Incremental Commits

Commit every compiling state:

```bash
# good habit
cmake --build build -j$(nproc) && ctest --output-on-failure
```

When tests fail, `git stash` small experiments to keep your main branch green.

5 Block API Migration

5.1 From work() to processBulk() (Beginner Friendly)

Old way (pseudo‑code):

```cpp
std::int32_t work(std::int32_t noutput_items,
                  gr_vector_const_void_star& input_items,
                  gr_vector_void_star& output_items)
{
    const float* in = (const float*)input_items[0];
    float* out = (float*)output_items[0];
    for (std::int32_t i = 0; i < noutput_items; i++)
        out[i] = in[i] * k_;
    return noutput_items;
}
```

Problems: raw casts, no bounds check.

New way:

```cpp
work_return_t processBulk(std::span<const float> in, std::span<float> out) {
    for (std::size_t i = 0; i < in.size(); ++i)
        out[i] = in[i] * value;   // 'value' is reflected param
    return {out.size(), Status::OK};
}
```

Key points for beginners:

  1. std::span is like a safe pointer + length.
  2. The work_return_t tells the scheduler how many items you produced.
  3. No virtual calls—processBulk() is a normal function.

5.2 work_return_t Status Values

Unlike GR 3.x where negative return values indicated errors, GR4 uses explicit status enums for clearer semantics:

| Status | When to Use | Example Scenario |
|---|---|---|
| `Status::OK` | Normal processing completed | Successfully processed all input samples |
| `Status::WAIT` | Need more input data | FIR filter doesn't have enough taps |
| `Status::DONE` | Block finished its work | File source reached EOF |
| `Status::ERROR` | Unrecoverable error | Invalid configuration or I/O failure |
| `Status::DRAIN` | Flushing remaining data | Filter outputting final samples |

Practical Examples:

```cpp
// FIR filter waiting for more input
work_return_t processBulk(std::span<const T> input, std::span<T> output) {
    if (input.size() < _filter.num_taps()) {
        return {0, Status::WAIT};   // Need more samples
    }
    // ... process normally ...
    return {output.size(), Status::OK};
}

// Unrecoverable error case
work_return_t processBulk(std::span<const T> input, std::span<T> output) {
    if (!_is_configured) {
        return {0, Status::ERROR};  // Cannot proceed
    }
    // ... normal processing ...
}
```

⚠️ Exception Guidelines: Exceptions inside processBulk() are discouraged—they prevent compiler optimizations and complicate the scheduler. Use explicit Status::ERROR returns instead.

5.3 processOne vs processBulk: A Practical Example

Here's a real-world decimating filter showing when to use each function type:

```cpp
template<typename T, typename TParent>
class DecimatingFilter : public gr::Block<DecimatingFilter<T, TParent>> {
    FIR<T>        _filter;
    std::uint32_t decimate = 1;

public:
    // Use processOne for constant-rate processing (1:1 ratio)
    [[nodiscard]] T processOne(T input) noexcept
        requires(TParent::ResamplingControl::kIsConst)
    {
        return _filter.processOne(input);
    }

    // Use processBulk for variable-rate processing (N:M ratio)
    [[nodiscard]] work::Status processBulk(std::span<const T> input, std::span<T> output) noexcept
        requires(not TParent::ResamplingControl::kIsConst)
    {
        assert(output.size() >= input.size() / decimate); // optional, usually not needed
        std::size_t out_sample_idx = 0;
        for (std::size_t i = 0; i < input.size(); ++i) {
            T output_sample = _filter.processOne(input[i]);
            if (i % decimate == 0) {
                output[out_sample_idx++] = output_sample;
            }
        }
        return work::Status::OK;
    }
};
```

Key Design Points:

5.4 Declaring Ports

```cpp
GR_DECLARE_PORT(in, PortIn<float>);
GR_DECLARE_PORT(out, PortOut<float>);
```

That's it—no magic numbers like input_items[0].

5.5 Reflection One‑Liner

```cpp
GR_MAKE_REFLECTABLE(MultiplyConst,
    (float, value, "Multiplier", 1.0f, "Constant gain factor"));
```

This auto‑generates the bindings: a CMake script picks up the reflection data and writes the YAML + Python stubs.

6 SIMD Optimisation

`std::simd` is slated for standardisation after C++23; compilers already ship a precursor under `<experimental/simd>`. It gives you auto‑vectorisation without writing AVX/NEON intrinsics. GR4's meta‑helpers make it trivial to support both scalar and vector paths.

6.1 Detecting Whether SIMD Is Available

```cpp
#include <experimental/simd>
#if defined(__cpp_lib_experimental_parallel_simd)   // GCC ≥13 / Clang ≥16
// <experimental/simd> is available
#endif
```

GR4 wraps this behind the convenience concept gr::meta::any_simd<V,T> used in the generated processOne() template.

6.2 A Minimal Example

Below is the SIMD‑aware branch extracted from Analog.hpp (MultiplyConst):

```cpp
template<gr::meta::t_or_simd<T> V>
[[nodiscard]] constexpr V processOne(const V& a) const noexcept {
    if constexpr (gr::meta::any_simd<V, T>) {
        return a * value;   // element-wise vector multiply (AVX/NEON)
    } else {
        return a * value;   // plain scalar – same line, different type
    }
}
```

Note that the single line of math appears twice: once inside the SIMD branch, once outside. Most real blocks need no extra code—the compiler emits the vector loop for you.
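The dispatch mechanism itself is plain `if constexpr` on the argument type, and it can be demonstrated without `<experimental/simd>` by standing in a toy 4‑wide batch type for the SIMD case. Everything below is illustrative, not GR4 API:

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <type_traits>

// Toy 4-wide "SIMD" batch standing in for a real simd type (illustrative only).
struct Batch4 {
    std::array<float, 4> lanes{};
};

Batch4 operator*(const Batch4& a, float s) {
    Batch4 r;
    for (std::size_t i = 0; i < 4; ++i) r.lanes[i] = a.lanes[i] * s;
    return r;
}

// One template, two code paths — the same shape as the GR4 processOne above.
template<typename V>
V process_one(const V& a, float value) {
    if constexpr (std::is_same_v<V, Batch4>) {
        return a * value;   // "vector" path: element-wise multiply
    } else {
        return a * value;   // scalar path: same expression, different type
    }
}
```

The same call site serves both scalar floats and whole batches; only the instantiated type changes.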

6.3 Fallback Strategy

If your compiler is older than GCC 13/Clang 16, `__cpp_lib_experimental_parallel_simd` is undefined and the scalar branch is chosen. Performance degrades gracefully; functionality remains identical.

6.4 Benchmarking Tips

| Tip | Command |
|---|---|
| Build with `-O3 -march=native` | `cmake -DCMAKE_CXX_FLAGS="-O3 -march=native" ..` |
| Time one block | `gr_benchmark -b math::MultiplyConst -n 10M` |
| Check assembly | `objdump -dS libgnuradio-math.so \| grep -E "(ymm\|zmm\|vq)"` |

Rule of thumb: if you see `vfmadd` or `vmulps` in the disassembly, SIMD is working.

7 Reflection & Registration System

7.1 Why Reflection?

GR4 can introspect a block at run‑time—ports, parameters, doc‑strings—thanks to a tiny header‑only reflection system. This powers future GUI builders and Python auto‑bindings.

7.2 Dissecting an Example

```cpp
struct MultiplyConst : gr::Block<MultiplyConst> {
    PortIn<float>  in;
    PortOut<float> out;
    float value = 1.0f;

    GR_MAKE_REFLECTABLE(MultiplyConst, in, out, value);
};
```

GR_MAKE_REFLECTABLE:

  1. Registers each public member (in, out, value).
  2. Emits metadata (type, default, doc string).
  3. Auto‑generates setters/getters used by Python and YAML.

7.3 Registration One‑Liner

```cpp
GR_REGISTER_BLOCK("gr::blocks::math::MultiplyConst", MultiplyConst,
                  ([T], std::multiplies<[T]>),   // template params & functor
                  [ float, double, std::complex<float>, std::complex<double> ])
```

Parameters:

  1. Fully‑qualified name – becomes YAML path and Python import path.
  2. C++ symbol – the class or alias to instantiate.
  3. Template pack – how to splice the functor/type into the template.
  4. Type list – every concrete data type you want exposed.

7.4 Hot‑Reloading Parameters

Because value is reflected, you can change it in a flowgraph at run‑time:

```python
blk = gr.blocks.math.multiply_const_f()   # Python auto-binding
blk.value = 0.5                           # halves the gain while the graph runs
```

No extra C++ needed!

8 Testing & Validation

A good port is bit‑exact and has unit tests covering corner cases. GR4 ships a thin Boost.UT harness in gnuradio‑4.0/testing.

8.1 Minimal Test Skeleton

```cpp
#include <boost/ut.hpp>
#include <gnuradio-4.0/testing/TagMonitors.hpp>

using namespace boost::ut;
using gr::blocks::math::MultiplyConst;

"MultiplyConst scalar"_test = [] {
    constexpr float k = 2.0f;
    MultiplyConst<float> blk({{"value", k}});
    expect(eq(blk.processOne(3.0f), 6.0f));
};
```

8.2 Graph‑Level Tests (qa_analog.cpp)

qa_analog.cpp wires sources → DUT → sink inside an in‑memory graph and validates the full scheduler path.

```cpp
Graph g;
auto& src = g.emplaceBlock<TagSource<float>>(property_map{{"values", {1, 2, 3}}});
auto& mul = g.emplaceBlock<MultiplyConst<float>>(property_map{{"value", 2.0f}});
auto& snk = g.emplaceBlock<TagSink<float, ProcessFunction::USE_PROCESS_ONE>>();

g.connect(src, "out", mul, "in");
g.connect<"out">(mul).to<"in">(snk);

expect(eq(scheduler::Simple{std::move(g)}.runAndWait().has_value(), true));
```

8.3 CI Tips

Add a quick workflow to .github/workflows/ci.yml:

```yaml
- name: Build & test (Clang 18)
  run: |
    cmake -B build -S . -DCMAKE_CXX_COMPILER=clang++-18 -GNinja
    ninja -C build -j$(nproc)
    ctest --test-dir build -j$(nproc) --output-on-failure
```

Enable multiple jobs for GCC/Clang, and add ASAN if possible.

9 Complete Porting Examples

This section walks through three complete ports from scratch, showing the full transformation from GR3 to GR4 code.

9.1 Simple Math Block: MultiplyConst

GR3 Original:

```cpp
// From gr-blocks/lib/multiply_const_ff_impl.cc
class multiply_const_ff_impl : public multiply_const_ff {
    float d_k;

public:
    multiply_const_ff_impl(float k) : d_k(k) {}

    void set_k(float k) { d_k = k; }
    float k() const { return d_k; }

    int work(int noutput_items,
             gr_vector_const_void_star& input_items,
             gr_vector_void_star& output_items) override
    {
        const float* in = (const float*)input_items[0];
        float* out = (float*)output_items[0];
        for (int i = 0; i < noutput_items; i++) {
            out[i] = in[i] * d_k;
        }
        return noutput_items;
    }
};
```

GR4 Port:

```cpp
// From gr4/blocks/math/Analog.hpp
template<typename T>
struct MultiplyConst : public gr::Block<MultiplyConst<T>> {
    PortIn<T>  in;
    PortOut<T> out;
    T value = T{1};   // reflected parameter

    // SIMD-aware processing
    template<gr::meta::t_or_simd<T> V>
    [[nodiscard]] constexpr V processOne(const V& a) const noexcept {
        if constexpr (gr::meta::any_simd<V, T>) {
            return a * value;   // vectorized multiply
        } else {
            return a * value;   // scalar multiply
        }
    }

    GR_MAKE_REFLECTABLE(MultiplyConst, in, out, value);
};

// Registration for all numeric types
GR_REGISTER_BLOCK("gr::blocks::math::MultiplyConst", MultiplyConst,
                  [T], std::multiplies<T>,
                  [float, double, std::complex<float>, std::complex<double>]);
```

Key Changes:

9.2 Stateful Block: Integrate

GR3 Original:

```cpp
// From gr-blocks/lib/integrate_ff_impl.cc
class integrate_ff_impl : public integrate_ff {
    std::uint32_t d_decim;
    std::uint32_t d_count;
    float         d_sum;

public:
    integrate_ff_impl(std::uint32_t decim) : d_decim(decim), d_count(0), d_sum(0) {}

    std::int32_t work(std::int32_t noutput_items,
                      gr_vector_const_void_star& input_items,
                      gr_vector_void_star& output_items) override
    {
        const float* in = (const float*)input_items[0];
        float* out = (float*)output_items[0];
        std::int32_t j = 0;
        for (std::int32_t i = 0; i < noutput_items * d_decim; i++) {
            d_sum += in[i];
            d_count++;
            if (d_count == d_decim) {
                out[j++] = d_sum;
                d_sum = 0;
                d_count = 0;
            }
        }
        return j;
    }
};
```

GR4 Port:

```cpp
// From gr4/blocks/math/Analog.hpp
template<typename T>
struct Integrate : public gr::Block<Integrate<T>> {
    PortIn<T>  in;
    PortOut<T> out;
    std::uint32_t decim = 1;

private:
    std::uint32_t d_count = 0;
    T             d_sum   = T{0};

public:
    work_return_t processBulk(std::span<const T> input, std::span<T> output) {
        std::size_t j = 0;
        for (std::size_t i = 0; i < input.size() && j < output.size(); ++i) {
            d_sum += input[i];
            d_count++;
            if (d_count == decim) {
                output[j++] = d_sum;
                d_sum   = T{0};
                d_count = 0;
            }
        }
        return {j, Status::OK};
    }

    GR_MAKE_REFLECTABLE(Integrate, in, out, decim);
};
```

Key Changes:

9.3 Advanced Processing: Argmax

GR3 Original:

```cpp
// From gr-blocks/lib/argmax_fs_impl.cc
class argmax_fs_impl : public argmax_fs {
    size_t d_vlen;

public:
    argmax_fs_impl(size_t vlen) : d_vlen(vlen) {}

    int work(int noutput_items,
             gr_vector_const_void_star& input_items,
             gr_vector_void_star& output_items) override
    {
        const float* in = (const float*)input_items[0];
        short* out = (short*)output_items[0];
        for (int i = 0; i < noutput_items; i++) {
            float max_val = in[i * d_vlen];
            size_t max_idx = 0;
            for (size_t j = 1; j < d_vlen; j++) {
                if (in[i * d_vlen + j] > max_val) {
                    max_val = in[i * d_vlen + j];
                    max_idx = j;
                }
            }
            out[i] = (short)max_idx;
        }
        return noutput_items;
    }
};
```

GR4 Port:

```cpp
// From gr4/blocks/math/Analog.hpp
template<typename T>
struct Argmax : public gr::Block<Argmax<T>> {
    PortIn<T>           in;
    PortOut<gr::Size_t> out;
    std::size_t vlen = 1;

    work_return_t processBulk(std::span<const T> input, std::span<gr::Size_t> output) {
        const std::size_t n_vectors = input.size() / vlen;
        const std::size_t n_output  = std::min(n_vectors, output.size());
        for (std::size_t i = 0; i < n_output; ++i) {
            const auto vector_start = input.subspan(i * vlen, vlen);
            const auto max_it       = std::max_element(vector_start.begin(), vector_start.end());
            output[i] = static_cast<gr::Size_t>(std::distance(vector_start.begin(), max_it));
        }
        return {n_output, Status::OK};
    }

    GR_MAKE_REFLECTABLE(Argmax, in, out, vlen);
};
```

Key Changes:

9.4 Common Porting Patterns

| Pattern | GR3 Anti-Pattern | GR4 Best Practice |
|---|---|---|
| Type variants | Separate `_ff`, `_cc`, `_ii` files | Single template with type registration |
| Parameters | Private + getter/setter | Public + reflection |
| Buffers | Raw pointers + manual indexing | `std::span` + bounds checking |
| State | Mix public/private inconsistently | Private state, reflected parameters |
| Algorithms | Hand-rolled loops | STL algorithms when possible |

10 Best Practices & Conventions

This section covers coding standards, naming conventions, and architectural patterns that make GR4 blocks maintainable and performant.

10.1 Naming Conventions

| Element | Convention | Example | Rationale |
|---|---|---|---|
| Block names | PascalCase | `MultiplyConst`, `FftFilter` | Matches C++ class naming |
| Port names | lowerCamel, descriptive | `in`, `out`, `taps` | Clear intent in Python |
| Parameters | snake_case | `sample_rate`, `cutoff_freq` | Consistent with GNU Radio tradition |
| Private members | `d_` prefix | `d_history`, `d_taps` | Distinguishes from parameters |
| Template params | Single uppercase letter | `T`, `U`, `V` | Standard C++ convention |

10.2 Performance Best Practices

Golden Rule: Write readable code first, then optimize the hot paths.

Memory Management

Algorithm Optimization

Compiler Hints

```cpp
// Good: Help the compiler optimize
[[nodiscard]] constexpr auto processOne(const T& input) const noexcept {
    // constexpr allows compile-time evaluation
    // noexcept enables aggressive optimization
    return input * gain;
}

// Bad: Runtime overhead
auto processOne(const T& input) {
    if (enable_processing) {   // branch in hot path
        return input * gain;
    }
    return input;
}
```

10.3 Code Organization Patterns

Single Responsibility Principle

Each block should do one thing well:

```cpp
// Good: Clear, focused responsibility
struct LowPassFilter : public gr::Block<LowPassFilter<T>> {
    PortIn<T>  in;
    PortOut<T> out;
    float cutoff_freq = 1000.0f;
    // ... filter implementation
};

// Bad: Mixed responsibilities
struct AudioProcessor : public gr::Block<AudioProcessor<T>> {
    // Does filtering, AGC, and compression - too much!
};
```

Template Parameter Guidelines

10.4 Error Handling

Input Validation

```cpp
// Good: Validate in constructor or setter
struct Decimator : public gr::Block<Decimator<T>> {
    int decim = 1;

    void validateParameters() {
        if (decim <= 0) {
            throw std::invalid_argument("Decimation must be positive");
        }
    }

    // Called automatically by reflection system
    void setDecimation(int dec) {
        decim = dec;
        validateParameters();
    }
};
```

Graceful Degradation

```cpp
// Good: Fallback when SIMD unavailable
template<gr::meta::t_or_simd<T> V>
[[nodiscard]] constexpr V processOne(const V& input) const noexcept {
    if constexpr (gr::meta::any_simd<V, T>) {
        return performSIMDOperation(input);
    } else {
        return performScalarOperation(input);   // Always works
    }
}
```

10.5 Testing Strategy

Unit Test Coverage

Integration Tests

```cpp
// Good: Test in realistic flowgraph
"MultiplyConst in flowgraph"_test = [] {
    Graph g;
    auto& src = g.emplaceBlock<TagSource<float>>();
    auto& mul = g.emplaceBlock<MultiplyConst<float>>({{"value", 2.0f}});
    auto& snk = g.emplaceBlock<TagSink<float>>();
    g.connect(src, "out", mul, "in");
    g.connect(mul, "out", snk, "in");
    scheduler::Simple{std::move(g)}.runAndWait();
    // Verify results...
};
```

10.6 Documentation Standards

Block Documentation

```cpp
/// @brief Multiply input signal by a constant factor
///
/// This block multiplies each input sample by a constant value.
/// Supports SIMD acceleration when available.
///
/// @tparam T Input/output sample type (float, double, complex<float>, etc.)
template<typename T>
struct MultiplyConst : public gr::Block<MultiplyConst<T>> {
    PortIn<T>  in;   ///< Input signal
    PortOut<T> out;  ///< Output signal (input * value)
    T value = T{1};  ///< @brief Multiplication factor @unit linear

    GR_MAKE_REFLECTABLE(MultiplyConst, in, out, value);
};
```

Parameter Documentation

10.7 Common Anti-Patterns to Avoid

| ❌ Anti-Pattern | ✅ Better Approach | Why |
|---|---|---|
| Global state / singletons | Dependency injection via constructor | Testability, thread safety |
| String-based configuration | Strongly typed parameters | Compile-time validation |
| Deep inheritance hierarchies | Composition over inheritance | Flexibility, maintainability |
| Premature optimization | Profile-guided optimization | Readable code first |
| Magic numbers in code | Named constants or parameters | Self-documenting code |

11 Troubleshooting Common Issues

This section covers the most common problems encountered during porting, with step-by-step solutions.

11.1 Compilation Errors

❌ "No matching function for call to 'gr::Block'"

```text
error: no matching function for call to 'gr::Block<MyBlock>::Block()'
note: candidate expects 1 argument, 0 provided
```

✅ Solution: GR4 blocks need a property map constructor:

```cpp
// Bad: Missing constructor
struct MyBlock : public gr::Block<MyBlock> {
    // ...
};

// Good: Add property map constructor
struct MyBlock : public gr::Block<MyBlock> {
    explicit MyBlock(const property_map& params = {}) {
        // Initialize from params if needed
    }
    // ...
};
```

❌ "Cannot convert 'const float*' to 'std::span<float>'"

error: cannot convert 'const float*' to 'std::span<float>' in assignment

✅ Solution: Use std::span<const T> for input, std::span<T> for output:

```cpp
// Bad: Wrong const-ness
work_return_t processBulk(std::span<float> input, std::span<float> output);

// Good: Input should be const
work_return_t processBulk(std::span<const float> input, std::span<float> output);
```

❌ "GR_MAKE_REFLECTABLE not found"

error: 'GR_MAKE_REFLECTABLE' was not declared in this scope

✅ Solution: Include the reflection header:

```cpp
#include <gnuradio-4.0/Block.hpp>
#include <gnuradio-4.0/reflection.hpp>   // Add this line
```

11.2 Runtime Errors

❌ "Block not found in registry"

RuntimeError: Block 'gr::blocks::math::MyBlock' not found in registry

✅ Solution: Ensure your block is registered and linked:

```cpp
// 1. Add registration at end of header
GR_REGISTER_BLOCK("gr::blocks::math::MyBlock", MyBlock, [T], [float, double]);

// 2. Make sure it's compiled into the library
//    (check that CMakeLists.txt includes your header)

// 3. Verify it's linked
#include <gnuradio-4.0/math/MyBlock.hpp>   // Force instantiation
```

❌ "Scheduler hangs or crashes"

Program hangs indefinitely or crashes with segmentation fault

✅ Solution: Check your work return and buffer handling:

```cpp
// Bad: Wrong return value
work_return_t processBulk(std::span<const T> input, std::span<T> output) {
    // ... process data ...
    return {input.size(), Status::OK};   // WRONG: must report output items produced
}

// Good: Return actual items produced
work_return_t processBulk(std::span<const T> input, std::span<T> output) {
    std::size_t items_produced = std::min(input.size(), output.size());
    // ... process data ...
    return {items_produced, Status::OK};
}
```

❌ "Port connection failed"

RuntimeError: Cannot connect incompatible port types

✅ Solution: Check port type compatibility:

```cpp
// Bad: Type mismatch
PortIn<float>   in;
PortOut<double> out;   // Different types!

// Good: Consistent typing
PortIn<T>  in;
PortOut<T> out;
```

11.3 Performance Issues

❌ "Block is slower than GR3 version"

✅ Debugging checklist:

  1. Check compiler flags:
     ```bash
     cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-O3 -march=native" ..
     ```
  2. Verify SIMD is working:
     ```bash
     objdump -dS libgnuradio-math.so | grep -E "(ymm|zmm|vfmadd)"
     ```
  3. Profile hot paths:
     ```bash
     perf record -g ./your_flowgraph
     perf report
     ```
  4. Check memory allocation:
     ```bash
     valgrind --tool=massif ./your_flowgraph
     ```

❌ "High CPU usage in scheduler"

✅ Solution: Optimize your processOne/processBulk implementation:

```cpp
// Bad: Inefficient inner loop
for (std::size_t i = 0; i < input.size(); ++i) {
    output[i] = std::sin(input[i]);   // Expensive call per sample
}

// Good: Clear intent, easier for the compiler to vectorize
std::transform(input.begin(), input.end(), output.begin(),
               [](const auto& x) { return std::sin(x); });
```

11.4 SIMD-Related Issues

❌ "SIMD code compiles but crashes"

Segmentation fault in SIMD code path

✅ Solution: Check alignment and bounds:

```cpp
// Bad: Assumes aligned data
template<gr::meta::t_or_simd<T> V>
V processOne(const V& input) {
    // May crash if input not aligned
    return input * 2.0f;
}

// Good: Let compiler handle alignment
template<gr::meta::t_or_simd<T> V>
V processOne(const V& input) {
    if constexpr (gr::meta::any_simd<V, T>) {
        return input * static_cast<V>(2.0f);   // Explicit cast
    } else {
        return input * 2.0f;
    }
}
```

❌ "SIMD not being used despite compiler support"

# No SIMD instructions in assembly output

✅ Solution: Check your template constraints:

```cpp
// Bad: Missing SIMD template
void processOne(const T& input) {
    // Only scalar version
}

// Good: SIMD-aware template
template<gr::meta::t_or_simd<T> V>
[[nodiscard]] constexpr V processOne(const V& input) const noexcept {
    if constexpr (gr::meta::any_simd<V, T>) {
        return input * value;   // SIMD path
    } else {
        return input * value;   // Scalar path
    }
}
```

11.5 Testing and Validation Issues

❌ "Unit tests fail intermittently"

✅ Solution: Check for floating-point precision issues:

```cpp
// Bad: Exact comparison
expect(eq(result, expected));

// Good: Tolerance-based comparison
expect(approx(result, expected, 1e-6f));
```
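If you want the same tolerance logic outside the test framework, a relative/absolute comparison takes only a few lines. This `approx_eq` is a hypothetical helper, not part of GR4 or Boost.UT:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// True when |a - b| is within 'tol' absolutely, or within 'tol'
// relative to the larger magnitude (covers both tiny and huge values).
bool approx_eq(float a, float b, float tol = 1e-6f) {
    const float diff  = std::fabs(a - b);
    const float scale = std::max(std::fabs(a), std::fabs(b));
    return diff <= tol || diff <= tol * scale;
}
```

The relative term matters for large magnitudes, where a fixed absolute tolerance would reject results that differ only by floating-point rounding.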

❌ "Tests pass but flowgraph produces wrong output"

✅ Solution: Add integration tests with realistic data:

```cpp
// Test with actual signal processing chain
"Integration test with realistic signal"_test = [] {
    Graph g;
    auto& src = g.emplaceBlock<SignalSource<float>>({
        {"frequency",   1000.0f},
        {"sample_rate", 48000.0f}
    });
    auto& dut = g.emplaceBlock<MyBlock<float>>();
    auto& snk = g.emplaceBlock<VectorSink<float>>();
    // ... connect and run ...

    // Verify output characteristics
    auto output = snk.getData();
    expect(output.size() > 0);
    // Check frequency domain, RMS, etc.
};
```

11.6 Build System Issues

❌ "CMake can't find GR4 components"

CMake Error: Could not find a configuration file for package "gnuradio"

✅ Solution: Set the correct CMake paths:

```bash
# Option 1: Set CMAKE_PREFIX_PATH
export CMAKE_PREFIX_PATH=/usr/local/lib/cmake/gnuradio:$CMAKE_PREFIX_PATH

# Option 3: Install GR4 to standard location
cmake --install build --prefix /usr/local
```

```cmake
# Option 2: Use find_package with PATHS (in CMakeLists.txt)
find_package(gnuradio REQUIRED PATHS /usr/local/lib/cmake/gnuradio)
```

❌ "Linking errors with undefined symbols"

undefined reference to `gr::blocks::math::MyBlock<float>::MyBlock()'

✅ Solution: Ensure proper template instantiation:

```cpp
// In your .cpp file (if you have one)
template class MyBlock<float>;
template class MyBlock<double>;
template class MyBlock<std::complex<float>>;
template class MyBlock<std::complex<double>>;

// Or use an explicit instantiation declaration in the header
extern template class MyBlock<float>;
```

11.7 Debugging Techniques

Enable Debug Logging

```bash
# Set environment variable
export GR_LOG_LEVEL=DEBUG
```

```cpp
// Or in code
gr::log::set_level(gr::log::Level::DEBUG);
```

Use GDB for Crashes

```bash
# Compile with debug symbols
cmake -DCMAKE_BUILD_TYPE=Debug ..

# Run with GDB
gdb ./your_program
(gdb) run
(gdb) bt   # when it crashes
```

AddressSanitizer for Memory Issues

```bash
# Enable ASAN
cmake -DCMAKE_CXX_FLAGS="-fsanitize=address -g" ..
# Run your program - it will catch memory errors
```

11.8 Common Gotchas

| ❌ Common Mistake | ✅ Correct Approach | Why It Matters |
|---|---|---|
| Using `std::vector` in `processOne` | Use fixed-size arrays or class members | Allocation in hot path kills performance |
| Forgetting `const` on input spans | `std::span<const T>` for inputs | Prevents accidental modification |
| Returning wrong item count | Return actual items produced | Scheduler needs accurate counts |
| Missing `noexcept` on `processOne` | Mark `processOne` as `noexcept` | Enables compiler optimizations |
| Not testing edge cases | Test with zero, NaN, infinity | Real-world data is messy |

12 Resources & References

This section provides essential links, documentation, and community resources for GNU Radio 4.0 development.

12.1 Official Documentation

| Resource | URL | Description |
|---|---|---|
| GNU Radio 4.0 Repository | github.com/fair-acc/gnuradio4 | Main development repository |
| API Documentation | gnuradio.github.io/gr4-docs/ | Automatically generated API docs |
| GNU Radio Website | gnuradio.org | Official project website |
| GR4 Design Documents | github.com/fair-acc/gnuradio4/docs | Architecture and design rationale |

12.2 Development Resources

Build Environment

Code Examples

12.3 Community and Support

| Platform | Link | Purpose |
|---|---|---|
| Mailing List | discuss-gnuradio@gnu.org | General discussion and support |
| Element Channel | chat.gnuradio.org | Official communication channel (Matrix/Element) |
| Stack Overflow | stackoverflow.com/questions/tagged/gnuradio | Programming questions and answers |
| GitHub Discussions | github.com/fair-acc/gnuradio4/discussions | Feature requests and development discussion |

12.4 Learning Resources

C++ and Modern C++ Resources

SIMD and Performance

12.5 Development Tools

Compilers and Build Systems

| Tool | Minimum Version | Installation | Notes |
|---|---|---|---|
| GCC | 13.3 (14.2 for full simd) | `sudo apt install gcc-14` | Best C++23 support |
| Clang | 18+ | `sudo apt install clang-18` | Better diagnostics |
| CMake | 3.25+ | `pip install cmake` | Presets support |
| Ninja | Any | `sudo apt install ninja-build` | Fast parallel builds |

Development Environment

12.6 Testing and Quality Assurance

Testing Frameworks

Code Quality Tools

12.7 Related Projects and Dependencies

Core Dependencies

Optional Dependencies

12.8 Contributing Guidelines

Code Contribution Process

  1. Fork the repository on GitHub
  2. Create feature branch: git checkout -b feature/my-new-block
  3. Follow coding standards: Use clang-format configuration
  4. Add tests: Unit tests and integration tests
  5. Sign commits: git commit -s (DCO required)
  6. Open pull request with clear description

Code Review Guidelines

12.9 Research Papers and Publications

Note: Some publications may still be in preparation. Check the repository for latest papers.

12.10 Quick Reference

Essential Commands

```bash
# Clone and build
git clone https://github.com/fair-acc/gnuradio4.git
cd gnuradio4
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
cmake --build . -j$(nproc)

# Run tests
ctest --output-on-failure

# Install
sudo cmake --install .

# Create new OOT module (if needed)
gr-template/bootstrap_oot.sh myblocks
```

Common Environment Variables

```bash
# Logging
export GR_LOG_LEVEL=DEBUG

# Build optimization
export CMAKE_BUILD_TYPE=Release
export CMAKE_CXX_FLAGS="-O3 -march=native"

# Python path (if needed)
export PYTHONPATH=/usr/local/lib/python3.x/site-packages:$PYTHONPATH
```