Tuesday, October 8, 2024

Symbol suffixes compilers use to confuse developers

Ever taken a peek at the symbol table of an object or executable? Or, if you're feeling particularly adventurous, have you snooped around the Linux kernel symbol table (kallsyms, for those on a first-name basis)? If so, you've probably been baffled by the bizarre names the compiler uses to mangle your precious symbols. Well, you’re not alone. I’ve spent some quality time collecting bits and pieces of information from various corners of the internet, carefully cutting and pasting (with great skill, I might add) to create this handy page where everything is nicely grouped. If you're anything like me, you’ll appreciate having all this info in one convenient place. Enjoy the madness!

  • .constprop.<n>: Constant propagation. Indicates that the function has been optimized using constant propagation, where constant values were propagated through the code. The <n> is usually a version number and can increment if the function is optimized multiple times.
  • .isra.<n>: Interprocedural Scalar Replacement of Aggregates (ISRA). This optimization involves breaking down aggregates (like structs or arrays) passed to functions into individual scalar values. It helps improve register usage and reduces memory accesses. The <n> indicates the version or stage of the transformation.
  • .clone.<n>: Function cloning. When the compiler creates a cloned version of a function to optimize it for specific use cases (e.g., with certain constant arguments, different calling conventions, or to assist in function inlining), it adds the .clone.<n> suffix. This is useful for tailoring the function to certain conditions, such as a specific set of input parameters.
  • .part.<n>: Function partitioning. The .part.<n> suffix appears when the compiler splits a large function into smaller parts. This often happens because the function is too complex for certain optimizations or for inlining purposes. The <n> refers to the specific part number.
  • .cold: Cold code. This suffix is added to functions or parts of functions that are considered "cold" by the compiler. Cold code refers to parts of the program that are rarely executed, such as error handling code or unlikely branches in conditionals. The compiler may optimize these functions for size rather than speed, or move them to separate sections of memory to improve cache performance for "hot" code (code that is frequently executed).
  • .hot: Hot code. Similar to .cold, this suffix indicates "hot" code, which is frequently executed. The compiler might apply aggressive optimizations focused on improving execution speed, such as loop unrolling, function inlining, or improved branch prediction.
  • .likely/.unlikely: Likely or unlikely branches. These suffixes indicate that the compiler has predicted whether a particular branch of code is likely or unlikely to be executed, usually based on profiling data or heuristics. The likely branch is optimized for speed, while the unlikely branch might be optimized for size or minimized in terms of performance impact.
  • .lto_priv.<hash>: Link-Time Optimization (LTO) private function. This suffix appears during link-time optimization (LTO), where functions are optimized across translation units (source files). The .lto_priv. suffix is added to private (non-exported) functions that have been internalized and optimized during the LTO phase. The <hash> is typically a unique identifier.
  • .omp_fn.<n>: OpenMP function. Functions generated as part of OpenMP parallelization are often given this suffix. It indicates that the function was created or modified to handle a parallel region of code as specified by OpenMP directives. The <n> is the version or index of the OpenMP function.
  • .split.<n>: Split functions. This suffix appears when a large function is split into smaller pieces, either for optimization reasons or due to certain compiler strategies (like function outlining). The <n> indicates the part or section number of the split function.
  • .inline: Inlined function. Functions marked with this suffix have been aggressively inlined by the compiler. Sometimes, a specialized inlined version of the function is created, while the original remains intact for non-inlined calls.
  • .to/.from: Conversion functions. These suffixes are used when functions are involved in some sort of conversion process, such as casting or transforming data structures from one form to another. .to typically refers to converting to a certain form, and .from refers to converting from a form.
  • .gcda: Profiling data generation (related to GCov). This suffix is associated with functions that produce profiling data used by GCov (GNU's code coverage tool). These functions track execution counts and other statistics to generate code coverage information.
  • .llvm.<hash>: Local linkage promoted to external linkage. With ThinLTO, you might see mangled names carrying this suffix. This happens because functions inlined across units need their local references made global, causing name changes.

Constant Propagation: Overview

Constant Propagation is an important optimization technique used by compilers to improve the performance of generated code. It involves analyzing the code to identify constant values that are known at compile-time and propagating these constants throughout the code. By substituting variables with their constant values, the compiler can simplify expressions and potentially remove unnecessary calculations, improving both the runtime performance and memory usage of the program.

How Constant Propagation Works:

  1. Identify Constants: During the compilation process, the compiler looks for variables that are assigned constant values. For example:

    int x = 5; 
    int y = 2 * x;
    Here, x is a constant because it is assigned a known value 5.
  2. Propagation: Once the compiler identifies a constant, it replaces the variable with its constant value wherever it appears in subsequent code. Continuing the above example:

    int y = 2 * 5;
  3. Simplification: After propagation, the compiler can further simplify expressions that involve constants:

    int y = 10;    
  4. Dead Code Elimination: Sometimes, constant propagation leads to opportunities for other optimizations, such as dead code elimination. For instance, after propagating constants, conditional branches that always evaluate to true or false can be simplified, allowing the compiler to remove unnecessary branches:

    if (5 < 10) { 
        // This block is always executed, so the condition can be removed. 
    }

Benefits of Constant Propagation:

  • Improved Performance: Constant propagation can eliminate runtime calculations, reducing the overall number of operations in the code. This leads to faster execution times.
  • Reduced Code Size: Simplifying expressions and removing redundant code can reduce the size of the compiled binary.
  • Better Memory Usage: By eliminating unnecessary variables and operations, constant propagation can reduce memory consumption.

Example of Constant Propagation:

Consider the following simple C program:

void example() {
    int a = 10;
    int b = a + 5;
    int c = b * 2;
    printf("%d\n", c);
}

Without constant propagation, this program might be compiled as-is, performing the operations b = a + 5 and c = b * 2 at runtime. However, with constant propagation, the compiler could optimize the program as follows:

void example() {
    int c = 30;
    printf("%d\n", c);
}

Constant Propagation vs. Constant Folding:

  • Constant Propagation focuses on replacing variables that hold constant values with those constants wherever possible in the code.
  • Constant Folding is another optimization technique that involves evaluating constant expressions at compile-time rather than runtime. For example:

    int x = 5 + 3;

    Here, constant folding would replace 5 + 3 with 8 at compile-time, eliminating the need for the addition operation at runtime.

Both techniques often work together, with constant propagation creating opportunities for constant folding and vice versa.

Types of Constant Propagation:

  1. Intra-Procedural Constant Propagation: This type of propagation occurs within a single function or block of code. The compiler tracks constants within the boundaries of the function or block, but it does not propagate values across different functions.
  2. Inter-Procedural Constant Propagation: This is a more advanced form of propagation where the compiler tracks and propagates constants across function boundaries. It requires more complex analysis but can result in better optimization, especially in large programs with function calls.

What .constprop.0 Means

  • Suffix Meaning: The .constprop.0 suffix is added by the compiler (usually by GCC or Clang) to signify that the function has been subjected to constant propagation optimization. The number (0 in .constprop.0) is just a version indicator and can be incremented if the function undergoes further stages of optimization.
  • Constant Propagation at the Function Level: When a compiler identifies that certain arguments to a function are constants, it can create a specialized version of the function where those constants are hardcoded. This allows the function to be optimized more aggressively. The suffix is attached to the optimized function to distinguish it from the original unoptimized version. For example, consider the following function:

    int add(int x, int y) {
        return x + y;
    } 

    If, during optimization, the compiler finds that this function is frequently called with constant values, say add(3, 5), it might create a specialized version of the function where the constants are propagated:

    int add.constprop.0() {
        return 8;  // 3 + 5 has been precomputed
    }

    In the compiled code, this new function might be named something like add.constprop.0 to reflect that it has been optimized based on constant propagation.

When Does Constant Propagation Trigger This?

The compiler performs constant propagation across function boundaries when it can determine that certain function parameters are constant in all or some of the calls to that function. This optimization is often triggered in conjunction with inlining, constant folding, and function specialization. Here’s how it works:

  1. Function Inlining: When the compiler decides to inline a function (replace the call to the function with the actual function code), it can propagate constant arguments into the inlined function. This can lead to opportunities for further simplification. If the function isn’t fully inlined for all calls, the compiler might create a specialized version with constant propagation applied for those constant cases.
  2. Function Specialization: If a function is called multiple times with certain arguments that are constant, the compiler might generate a specialized version of the function for those constant values. The .constprop.0 function is such a specialized version where constant propagation and potentially other optimizations (like dead code elimination or loop unrolling) have been applied.
  3. Rewriting Calls: After creating the specialized version of the function, the compiler rewrites calls to the original function with constant arguments to point to the optimized .constprop.0 version. This way, the optimized version is used where possible, but the original version remains available for cases where the arguments aren’t constant.

Benefits of .constprop.0 Functions

The creation of these specialized functions with constant propagation offers several benefits:

  • Performance Gains: The compiler can optimize away redundant computations and simplify the function, leading to faster execution. For example, expressions that depend on constants might be precomputed, conditional branches might be eliminated, and loops might be unrolled.
  • Reduced Code Size: In some cases, specialized functions with constant propagation can actually reduce the code size, as the compiler might remove code paths that are no longer needed (for example, dead branches or unnecessary variable assignments).
  • Better Cache Usage: Specialized versions of functions can be smaller and more cache-friendly since they focus only on the specific case where certain inputs are constant.

Example in Practice

Consider this C code:

int compute(int a, int b) {
    return a * 2 + b;
}

int main() {
    return compute(4, 5);
}

Without optimization, the compute function would be called at runtime with the arguments 4 and 5. However, during constant propagation, the compiler detects that 4 and 5 are constants and creates a specialized version of compute:

int compute.constprop.0() {
    return 13;  // Precomputed: 4 * 2 + 5
}

The call in main() would be replaced by a direct call to compute.constprop.0(), and no runtime multiplication or addition would be required.

Why Does the Original Function Stay?

The original, non-specialized version of the function typically stays in the binary if there are calls to it with non-constant arguments or if it cannot be fully optimized in all cases. The .constprop.0 function is just an optimized variant for cases where constants are known, so the compiler keeps both versions to handle different calling scenarios.

Possible Reasons for .inline Suffix Existence:

  1. Partial Inlining:
    • What Happens: Sometimes, the compiler may choose to inline a function only in certain places (e.g., hot paths where performance is critical) while retaining the original non-inlined version for other calls. This can happen when the function is small enough to be inlined in performance-critical paths but also used in non-critical paths or in situations where inlining might increase code size too much.
    • Result: In this case, an inlined version may be created, but the original function with an .inline suffix might still be retained for non-inlined calls. This allows the compiler to balance performance and code size.
  2. Inlining Across Translation Units (LTO):
    • What Happens: During Link-Time Optimization (LTO), functions might be inlined across different translation units (source files). However, the function might still need to be retained in its original form for other purposes (such as if it’s part of a shared library or called from another compilation unit that was not optimized in the same way).
    • Result: A version of the function with the .inline suffix could be preserved as an internal symbol, ensuring that the compiler can still generate callable code if needed, while simultaneously allowing aggressive inlining across units.
  3. Multiple Optimization Levels:
    • What Happens: The compiler might generate different versions of a function to optimize for specific use cases. For instance, it could create an inlined version for certain contexts and a standalone version for others, especially if different parts of the code are compiled with different optimization flags or under different constraints (e.g., space vs. speed optimizations).
    • Result: The .inline suffix would then be used to distinguish the inlined version from the original, non-inlined function, even though the function is still present as a callable entity.
  4. Debugging and Profiling:
    • What Happens: Compilers sometimes retain inlined function symbols in the binary even though the code has been inlined, for the purpose of debugging and profiling. Tools like gdb or performance profilers may need to refer to the original function for accurate stack traces, debugging information, or code coverage data.
    • Result: The compiler might keep a symbol with the .inline suffix so that debugging information remains available, even if the function body no longer exists in its original form.
  5. Function Attributes:
    • What Happens: Certain function attributes or calling conventions may require that a function symbol still exists in the binary, even if the function has been inlined elsewhere. For instance, a function might be declared inline but also weak (meaning it can be overridden) or have other attributes that necessitate keeping a symbol for linking purposes.
    • Result: The compiler may generate both an inlined version and retain a separate version of the function marked with .inline, to fulfill these attributes or constraints.

Scenario 1: The Function Is Declared inline

When a function is explicitly declared as inline in the source code:

  • Expectation: The programmer indicates that they would prefer the function to be inlined to avoid the overhead of a function call. This, however, is a hint, not a guarantee. The compiler can still choose not to inline the function, especially if inlining it would increase code size excessively or if the function is too complex.
  • Linkage and Visibility: Typically, inline functions are defined in headers or in multiple translation units because they should be available to multiple parts of the program. If you declare a function as inline but it has external linkage, the function is visible across multiple translation units, and the linker might still need to ensure that only one definition is used. As a result, the function may still need a symbol in the binary.
  • Compilers can generate a symbol for such inline functions, especially if they are not inlined in all cases. The symbol might be suffixed with .inline if the compiler creates a specialized version after attempting partial inlining.
  • Why retain a symbol?: Even though the function is marked inline, the compiler might not inline it everywhere. It might still create a regular function for some call sites while inlining others. The symbol could remain to provide an externally accessible version in case the inlining isn’t performed universally.
  • In Public Libraries or Interfaces: Despite being marked inline, such functions might still need an externally visible symbol so that code in other binaries or translation units can link against them.

Interprocedural Scalar Replacement of Aggregates (ISRA)

Interprocedural Scalar Replacement of Aggregates (ISRA) is a compiler optimization technique aimed at improving performance by breaking down large data structures (such as arrays, structs, or classes, collectively called aggregates) into their individual scalar components (like integers or floating-point values). This allows the compiler to perform more efficient optimizations on those individual parts rather than working with the entire structure as a whole. Interprocedural means that this optimization can take place across function boundaries, not just within a single function.

Let’s explore ISRA in detail:

Key Concepts in ISRA

  1. Aggregate Data Structures:
    • Aggregate types refer to complex data structures such as structs, arrays, or classes, which group together multiple individual variables into a single entity.
    • For example, in C, a struct might look like this:

      struct Point {
          int x;
          int y;
      };
    • The Point structure holds two integers, x and y, as part of one entity. Passing and manipulating this entire structure at once can be inefficient, especially when only some of its fields are used in a function.
  2. Scalar Replacement:
    • Scalar replacement is the process of breaking down an aggregate into its individual scalar components, such as integers, floats, or pointers. This allows the compiler to work with these smaller, more manageable parts instead of the entire structure.
    • For example, the compiler could split struct Point into two scalar variables, int x and int y, allowing it to perform optimizations on x and y independently.

How ISRA Works

In the context of interprocedural optimization, ISRA looks at the data being passed between functions (i.e., across function boundaries) and determines whether the entire aggregate needs to be passed, or if the individual fields of the aggregate can be passed as independent scalars. Consider this simple example:

struct Point {
    int x;
    int y;
};

int computeDistance(struct Point p) {
    return p.x * p.x + p.y * p.y;
}

Without ISRA, the computeDistance function would take a struct Point argument by value, which means that both x and y are passed as part of the struct Point object. This may involve unnecessary memory loads, stores, and passing the entire structure on the stack.

What Happens During ISRA

ISRA optimizes this process by performing the following steps:

  1. Function Argument Decomposition: Instead of passing the entire struct Point as a single argument to computeDistance, ISRA breaks it down into its components. This means that instead of passing the structure p, the compiler will generate a version of the function that takes two int arguments, x and y:

    int computeDistance(int x, int y) {
        return x * x + y * y;
    }

  2. Across Function Boundaries: The key part of ISRA is that it works interprocedurally, meaning it doesn’t just happen within one function but across function calls. If a function calls computeDistance, the compiler can transform both the calling function and computeDistance so that they pass and work on the individual scalar values (x and y), instead of the entire struct Point. For example:

    void process() {
        struct Point p = {3, 4};
        int d = computeDistance(p);
    }

    ISRA would convert this into:

    void process() {
        int x = 3;
        int y = 4;
        int d = computeDistance(x, y);
    }

  3. Improved Register Utilization: By breaking down aggregates into their scalar components, the compiler can store and manipulate those values directly in CPU registers, which are much faster than accessing memory. In the example above, the x and y values can be kept in registers instead of being stored and loaded from memory, reducing the overhead of memory access.

  4. Dead Code Elimination: If only part of the structure is used, ISRA can also help eliminate unused fields. For instance, if a function only needs p.x but not p.y, the compiler can avoid passing or loading p.y entirely. This further reduces unnecessary computation and memory access.

Example of ISRA Optimization

Before ISRA:

struct Point {
    int x;
    int y;
};

int computeDistance(struct Point p) {
    return p.x * p.x + p.y * p.y;
}

void process() {
    struct Point p = {3, 4};
    int d = computeDistance(p);
}

After ISRA:

int computeDistance(int x, int y) {
    return x * x + y * y;
}

void process() {
    int x = 3;
    int y = 4;
    int d = computeDistance(x, y);
}

Benefits of ISRA

  1. Reduced Memory Traffic:
    • Since scalar values (like integers and floats) can often be passed in registers, ISRA reduces the need to read from or write to memory when working with aggregate data. This leads to faster execution because memory access is generally slower than register access.
  2. Smaller Code Size:
    • By eliminating the need to pass entire aggregates (especially if they are large), the generated code becomes smaller and more efficient, as the overhead of copying entire data structures is avoided.
  3. Better Cache Usage:
    • ISRA reduces memory accesses, which improves cache performance. By avoiding unnecessary loads and stores of the entire structure, it minimizes cache pollution, which can result in better overall performance.
  4. Improved Optimizations:
    • Once aggregates are replaced by scalars, the compiler can apply additional optimizations, such as constant propagation, dead code elimination, and register allocation, to individual fields, which can result in more efficient code.

Challenges and Limitations of ISRA

  • Large Structures: For very large structures, ISRA may not always be beneficial because breaking them down into many scalar values can lead to high register pressure. This is especially true on architectures with limited registers, where using too many registers for scalar values can degrade performance.
  • ABI Constraints: Some Application Binary Interfaces (ABI) dictate how functions should pass arguments (whether in registers or on the stack). ISRA optimizations must respect these rules, which may limit the extent to which aggregate structures can be scalarized.
  • Complex Structures: ISRA is easier to apply to simple aggregates (like structs with only a few fields), but it can be more complex or impractical for deeply nested or very large structures, especially if pointers are involved.

Monday, September 30, 2024

Injecting Code on the Fly: Overcoming Challenges to Produce Binary Blobs with Embedded Data

Sometimes you run a long-lasting process on a remote machine… For example, to compress a large file… When suddenly you have an emergency: your wife is itching to shop. At that point, you typically have a couple of options:

  1. Stop the job and restart it at a more convenient time.
  2. Keep your computer connected and go out shopping.

Surely, if you had known beforehand, you could have started the job in a screen or tmux session, but as usual things didn’t go that way, and now you have to decide what to do.

If you have enough time, you can use your trusty gdb to sort this problem out. You can attach to the program, close stderr and stdout, and then create new files to replace them. You can use sigaction to disable SIGHUP, but that isn’t something you can manage when shopping is calling… You simply don’t have the time.

To address this problem, I was trying to code a simple tool to automate the gdb process.

While I was able to produce a PoC using C and assembly, when trying to achieve the same in pure C I ran into a challenge that I’d like to discuss briefly and gather suggestions on, if any.

The issue is that the tool needs to inject code into the running program to replace file descriptors and disable signals. However, the code that needs to be injected might require some data.

The obvious solution would be to write a small binary in assembly that can be executed in that context, but I wanted to write it in C. The problem is: can I embed data into a function in the .text section? The assembly equivalent would be something trivial like:

    jmp code
data:
    .byte [...]
code:
    ; body of the function

Doing something similar in C that is both functional and portable is far from trivial. Here’s my current solution, which still has a few issues, and I’d like to collect suggestions:


void injected_function() {
    volatile int a = 0;
    if (a) {
str:
        asm volatile (
            ".byte 0x48, 0x65, 0x6c, 0x6c, 0x6f, 0x20, 0x66, 0x72, 0x6f, 0x6d, 0x20,"
            "0x69, 0x6e, 0x6a, 0x65, 0x63, 0x74, 0x65, 0x64, 0x20, 0x66, 0x75, 0x6e,"
            "0x63, 0x21, 0x0a, 0x00, 0x00"
        );
    }
str_end:
    write(1, (void *) &&str, (&&str_end - &&str));
}

This function is supposed to be injected into the address space of a running program and write: “Hello from injected func!” However, there are a few quirks (maybe more, but I haven’t stumbled into them yet):

  • In x86_64, when -fcf-protection=full is enabled, the asm volatile statements are considered valid jump targets, resulting in an endbr64 being inserted. Solutions include skipping 2 bytes to avoid printing the opcodes of the endbr64 or disabling CFI using -fcf-protection=none. I don’t like either solution, but I couldn’t find another workaround.
  • When compiling this for aarch64, if the message length is not a multiple of four, the resulting label of the code after the string becomes misaligned. My solution was to add a few \x00 bytes to ensure proper alignment, but I’m not satisfied with this approach.

I’m looking for a solution that is architecture-independent, but I haven’t been able to find one. Does anyone have any suggestions?

Monday, July 22, 2024

Using BTF to Build Out-of-Tree Kernel Modules with Private Struct Definitions

Introduction

OoT kernel modules often face challenges when they need access to private header struct definitions that are not available in public headers. Traditional methods to access these private headers can lead to complications and maintenance challenges. This blog post presents a PoC that demonstrates a method to write OoT kernel modules using BTF to leverage private header struct definitions. This approach aims to simplify the build process and improve maintainability.

What is BTF and Why is it in the Linux Kernel?

BPF is a technology used for network packet filtering, tracing, and monitoring within the Linux kernel. It allows users to run sandboxed programs in the kernel space, enabling powerful debugging and performance analysis capabilities.

Producing BPF machine code is straightforward with a compiler that targets BPF, but writing a BPF program is more complicated due to the need to access kernel data during execution. For example, if you want to check if the IP of a given packet is your target, you need to access the structure representing the packet in your BPF program. You must navigate to the correct field by moving from the structure's address by a specific offset and interpret it correctly. This is where BTF comes into play.

BTF, or BPF Type Format, is a slim and compact way to represent the structures used in the kernel, accounting for structure randomization. It provides rich type information for BPF programs, essential for accessing and manipulating kernel data structures accurately. BTF enhances the BPF ecosystem by enabling programs to understand and work with kernel data without needing explicit header files.

To support BPF program development, an ecosystem has emerged, with libbpf being the key library that facilitates this. BPF programs need to be loaded into the kernel, and there is a BPF syscall for this operation. libbpf allows creating a loader program in native assembly that not only loads the program into the kernel but also links it (similar to the compiler's link process) to adapt it to the specific kernel, using BTF.

Historically, Clang was the first C compiler to support the BPF target. GCC also supports the BPF target, but Clang remains the more commonly used compiler for this task.

BTF focuses solely on describing data structures, which is why it is much more compact than other debugging formats like DWARF. The BTF section included in a production kernel is around 10-20MB, while DWARF info would be around 250-500MB.

Using BTF to Ease OoT Module Build and Maintenance

The PoC demonstrates how to build an OoT kernel module that requires private struct definitions by utilizing BTF. Here’s a step-by-step overview of the process:

  • Search for Structure to Define: Identify the private structures and unions needed for the OoT module from the Linux headers.
  • Collect All Structures and Unions: Gather all relevant structures and unions from the Linux headers.
  • Extract vmlinux from Bootable Image: Extract the vmlinux file from a bootable kernel image, which contains the BTF information.
  • Extract Structures from BTF: Use BTF to extract the required structures from the vmlinux file.
  • Filter BTF Extracted Structures: Filter out the structures that are already declared in public headers to avoid duplication.
  • Produce Header File: Generate a header file containing the necessary structures and unions.
  • Build Kernel Module: Use a customized Makefile and scripts to build the kernel module with the generated header file.

The PoC includes:

  • A customized Makefile that runs scripts to prepare the environment.
  • Module source code that marks structures with //BTF_INCLUDE to indicate they need to be imported.
  • Scripts to ensure consistency with existing structure declarations in public headers.
  • Scripts to handle dependencies and recursively extract related structures without redeclaring existing ones.

Conclusion

This PoC showcases a functional solution for using BTF to build OoT kernel modules with private struct definitions. It demonstrates how BTF can be used to retrieve structure information about non-public definitions. While this PoC is not intended to promote the use of non-public structures in OoT modules, it acknowledges that sometimes this is unavoidable. Using BTF for this purpose can significantly increase the maintainability of the OoT kernel module across different kernel versions.

Thursday, June 20, 2024

Unidentified Kernel Symbols: Syscall Macro Expansion

When navigating kernel symbols, it is not uncommon to encounter symbols that do not appear to be declared in the source code. This is often the case with symbols related to syscalls. We know that symbols are created during preprocessing (see my previous blog posts), but syscall declarations seem to be more complex. Let's look at an example:

#include "kernel.h"

SYSCALL_DEFINE4(test, unsigned long, first, unsigned long, second,
		unsigned long, third, unsigned long, fourth)
{
	printk("hello");
}

The nice function above, after being preprocessed, spawns a few other functions:

struct pt_regs;
static inline int is_syscall_trace_event(struct trace_event_call *tp_event) { return 0; }
asmlinkage long __arm64_sys_test(const struct pt_regs *regs);
ALLOW_ERROR_INJECTION(__arm64_sys_test, ERRNO);
static long __se_sys_test(__MAP(4,__SC_LONG,unsigned long, first, unsigned long, second, unsigned long, third, unsigned long, fourth));
static inline long __do_sys_test(__MAP(4,__SC_DECL,unsigned long, first, unsigned long, second, unsigned long, third, unsigned long, fourth));
asmlinkage long __arm64_sys_test(const struct pt_regs *regs)
{
	return __se_sys_test(__MAP(4,__SC_ARGS,,regs->regs[0],,regs->regs[1],,regs->regs[2],,regs->regs[3],,regs->regs[4],,regs->regs[5]));
}
static long __se_sys_test(__MAP(4,__SC_LONG,unsigned long, first, unsigned long, second, unsigned long, third, unsigned long, fourth))
{
	long ret = __do_sys_test(__MAP(4,__SC_CAST,unsigned long, first, unsigned long, second, unsigned long, third, unsigned long, fourth));
	__MAP(4,__SC_TEST,unsigned long, first, unsigned long, second, unsigned long, third, unsigned long, fourth);
	__PROTECT(4, ret,__MAP(4,__SC_ARGS,unsigned long, first, unsigned long, second, unsigned long, third, unsigned long, fourth));
	return ret;
}
static inline long __do_sys_test(__MAP(4,__SC_DECL,unsigned long, first, unsigned long, second, unsigned long, third, unsigned long, fourth))
{
	printk("hello");
}

This example is for the aarch64 architecture, but other architectures undergo the same processing. The main function called when a syscall is invoked is __arm64_sys_test, which in turn calls __se_sys_test, and then __do_sys_test. Note that the user-written code ends up in this last function. As we know, compilers perform complex optimizations when building user code and do not always honor the inline specifier. This is why, when looking at symbols (for example, in kallsyms), you may or may not see __do_sys_* functions. The practical consequence is the following:

If in a kernel splat backtrace you happen to see:
__do_sys_set_mempolicy_home_node+0xdc/0x1e4 __arm64_sys_set_mempolicy_home_node+0x20/0x2c
and in another build of the same kernel, you only see:
__arm64_sys_set_mempolicy_home_node+0x1d0/0x360
The error might have actually occurred at the same line of the source code.

Thursday, May 23, 2024

Investigate Obscure Kernel Symbols

Introduction

In the world of Linux kernel development, one often encounters intriguing anomalies that spark curiosity and investigation. My journey into exploring such peculiarities began with a previous deep dive into duplicate symbols within the Linux kernel. That exploration revealed fascinating insights into how certain symbol names appear multiple times at different addresses. It was fun to discover that among the many different addresses sharing the same name there were also actual duplicates of the same function (name and body), even though the majority of those same-named symbols were actually different objects. Building on that foundation, my current investigation delves into another set of mysterious symbols: those that appear to be aliases for given addresses in the kernel (multiple names for the same address), but whose origins are not immediately obvious. Their presence had significant consequences in my new effort. I'm currently adding a new feature to ks-nav, a nifty tool that generates diagrams from the kernel binary image. The goal is to provide kernel analysts with valuable insights into the kernel code, because who doesn't love a good kernel investigation? The tool already produces call tree diagrams and visualizes subsystem interactions triggered by specific functions. My latest endeavor? To add functionality that reveals how global variables are used and shared among functions. The topic of this blog post springs from analyzing the output of this tool. Here's an image produced by investigating the global symbols shared starting from the function hugetlb_vma_lock_alloc.

The Problem of Macro Expansion and Symbol Aliasing

Unlike the previous investigation, where symbols were straightforward duplicates, the issue at hand involves a more complex phenomenon stemming from macro expansion. Macro expansion in the kernel can result in multiple symbols being generated with the same name, even though each of them is actually a different variable in memory. The same phenomenon can also originate from compiler transformations of the code, such as inlining. Whenever this happens, the compiler must rename these same-named symbols so it can tell them apart. In practical terms, this just means that the compiler appends a number to the identifier to produce a new, unique name. A simple example can clarify this:
$ cat h.c
#include <stdio.h>

int pippo(int i){
        static int paperino;
        if (i>=0) paperino=i;
        return paperino;
}
int pluto(int i){
        static int paperino;
        if (i>=0) paperino=i;
        return paperino;
}

int main(){
        printf("paperino= %d\n", pippo(55) );
        printf("paperino= %d\n", pippo(-1) );
        printf("paperino= %d\n", pluto(99) );
        printf("paperino= %d\n", pluto(-1) );
}
$ gcc -g h.c -o h
$ ./h
paperino= 55
paperino= 55
paperino= 99
paperino= 99
$ nm -n h
                 w __cxa_finalize@@GLIBC_2.2.5
                 w __gmon_start__
                 w _ITM_deregisterTMCloneTable
                 w _ITM_registerTMCloneTable
                 U __libc_start_main@@GLIBC_2.2.5
                 U printf@@GLIBC_2.2.5
0000000000001000 t _init
0000000000001060 T _start
0000000000001090 t deregister_tm_clones
00000000000010c0 t register_tm_clones
0000000000001100 t __do_global_dtors_aux
0000000000001140 t frame_dummy
0000000000001149 T pippo
000000000000116b T pluto
000000000000118d T main
0000000000001210 T __libc_csu_init
0000000000001280 T __libc_csu_fini
0000000000001288 T _fini
0000000000002000 R _IO_stdin_used
0000000000002014 r __GNU_EH_FRAME_HDR
00000000000021ac r __FRAME_END__
0000000000003db8 d __frame_dummy_init_array_entry
0000000000003db8 d __init_array_start
0000000000003dc0 d __do_global_dtors_aux_fini_array_entry
0000000000003dc0 d __init_array_end
0000000000003dc8 d _DYNAMIC
0000000000003fb8 d _GLOBAL_OFFSET_TABLE_
0000000000004000 D __data_start
0000000000004000 W data_start
0000000000004008 D __dso_handle
0000000000004010 B __bss_start
0000000000004010 b completed.8061
0000000000004010 D _edata
0000000000004010 D __TMC_END__
0000000000004014 b paperino.2316
0000000000004018 b paperino.2320
0000000000004020 B _end
$

This example shows how the conflict generated by two global variables sharing the same name, paperino, forced the compiler to differentiate them by appending a number. It is a lesser-known fact that static local variables defined in functions are actually global variables: in the function namespace they do not generate any conflict, but in the compilation unit namespace they do, and this is why the compiler mangles their names like that in the binary.

Back to the problem identified by the new ks-nav feature: in the diagram there are two global data symbols that are evidently mangled by the compiler, __key.11 and __already_done.1. Let's start with the simpler one, just to get familiar with the phenomenon: the __already_done family of symbols. The analysis showed it comes from pr_warn_once. This macro ensures that the warning message is printed only once, and each warning instance is tracked separately using a dedicated variable. To illustrate how this works, let's track down how the pr_warn_once macro is expanded.

step 1

  #define pr_warn_once(fmt, ...)                                  \
        printk_once(KERN_WARNING pr_fmt(fmt), ##__VA_ARGS__)
  

step 2

  #define printk_once(fmt, ...)                                   \
        DO_ONCE_LITE(printk, fmt, ##__VA_ARGS__)
  

step 3

  #define DO_ONCE_LITE(func, ...)                                         \
        DO_ONCE_LITE_IF(true, func, ##__VA_ARGS__)
  

step 4

  #define DO_ONCE_LITE_IF(condition, func, ...)                           \
        ({                                                              \
                bool __ret_do_once = !!(condition);                     \
                                                                        \
                if (__ONCE_LITE_IF(__ret_do_once))                      \
                        func(__VA_ARGS__);                              \
                                                                        \
                unlikely(__ret_do_once);                                \
        })
  

step 5

  #define __ONCE_LITE_IF(condition)                                       \
        ({                                                              \
                static bool __section(".data.once") __already_done;     \
                bool __ret_cond = !!(condition);                        \
                bool __ret_once = false;                                \
                                                                        \
                if (unlikely(__ret_cond && !__already_done)) {          \
                        __already_done = true;                          \
                        __ret_once = true;                              \
                }                                                       \
                unlikely(__ret_once);                                   \
        })
  

The last expansion step finally provides evidence of where the symbol __already_done.1 comes from. It is easy to see that if more than one pr_warn_once is present in the same compilation unit, the compiler ends up with several __already_done instances, each referring to a different memory area, and hence it is forced to rename them. This is how the __already_done.[0-9]+ symbol family is generated.

But if the compiler is so careful with names and addresses, how are the aliases I mentioned at the beginning even possible?

The Curious Case of __key Symbols

The __key family of symbols presents a different kind of anomaly. These symbols are closely tied to the spin_lock_init function and exhibit unique behavior compared to the __already_done family. The crux of the issue lies in how the compiler handles structures with no members in C. In the Linux kernel, when the lockdep feature is disabled, the lock_class_key structure becomes an empty struct (when lockdep is enabled, it has real members). This means that when the compiler allocates such a variable in the data or BSS sections, it effectively allocates a zero-sized object. As a result, the next object allocated immediately afterward ends up sharing the same address as the zero-sized object. This is the cause of these alias-like symbols: they are not meant to be aliases, they just happen to be such.

The __key symbols thus become aliases purely because of lock_class_key's zero-sized nature when lockdep is disabled. This behavior is both unintended and inconsistent: enabling lockdep gives the __key symbols a non-zero size, thereby preventing them from aliasing with other symbols.

Here is an example of zero-sized __key objects, compared with the same objects when lockdep is enabled:

as it appears when lockdep is disabled

$ cat System.map| grep  ffffffff83534360
ffffffff83534360 b __key.11
ffffffff83534360 b __key.12
ffffffff83534360 b static_call_initialized
$ readelf -Wa vmlinux |grep __key.1[12]
 11513: ffffffff83534360     0 OBJECT  LOCAL  DEFAULT   35 __key.12
 11514: ffffffff83534360     0 OBJECT  LOCAL  DEFAULT   35 __key.11
 19420: ffffffff83541710     0 OBJECT  LOCAL  DEFAULT   35 __key.12
 19421: ffffffff83541710     0 OBJECT  LOCAL  DEFAULT   35 __key.11
 45259: ffffffff835690b8     0 OBJECT  LOCAL  DEFAULT   35 __key.11
 47597: ffffffff83569b38     0 OBJECT  LOCAL  DEFAULT   35 __key.12
 47598: ffffffff83569b38     0 OBJECT  LOCAL  DEFAULT   35 __key.11
 51424: ffffffff8356dac0     0 OBJECT  LOCAL  DEFAULT   35 __key.12
  

readelf shows zero-sized objects, and the kernel's System.map shows the collision between symbols

as it appears when lockdep is enabled

$ readelf -Wa vmlinux |grep __key.1[12]
  6080: ffffffff837ae610    16 OBJECT  LOCAL  DEFAULT   35 __key.12
  6081: ffffffff837ae600    16 OBJECT  LOCAL  DEFAULT   35 __key.11
  8402: ffffffff842624d0    16 OBJECT  LOCAL  DEFAULT   35 __key.11
  8693: ffffffff842626b0    16 OBJECT  LOCAL  DEFAULT   35 __key.11
  8703: ffffffff842626c0    16 OBJECT  LOCAL  DEFAULT   35 __key.12
  8975: ffffffff84262790    16 OBJECT  LOCAL  DEFAULT   35 __key.12
  8976: ffffffff84262780    16 OBJECT  LOCAL  DEFAULT   35 __key.11
 10437: ffffffff84265030    16 OBJECT  LOCAL  DEFAULT   35 __key.11
 12666: ffffffff8426ba60    16 OBJECT  LOCAL  DEFAULT   35 __key.12
 12916: ffffffff8426bc20    16 OBJECT  LOCAL  DEFAULT   35 __key.12
 20464: ffffffff8427b900    16 OBJECT  LOCAL  DEFAULT   35 __key.11
 21593: ffffffff8427bb50    16 OBJECT  LOCAL  DEFAULT   35 __key.12
 21594: ffffffff8427bb40    16 OBJECT  LOCAL  DEFAULT   35 __key.11
 23931: ffffffff8427d240    16 OBJECT  LOCAL  DEFAULT   35 __key.12
 23933: ffffffff8427d230    16 OBJECT  LOCAL  DEFAULT   35 __key.11
 27527: ffffffff8428cf50    16 OBJECT  LOCAL  DEFAULT   35 __key.11
 27902: ffffffff8428d050    16 OBJECT  LOCAL  DEFAULT   35 __key.12
 27904: ffffffff8428d040    16 OBJECT  LOCAL  DEFAULT   35 __key.11
 28675: ffffffff8428e1b0    16 OBJECT  LOCAL  DEFAULT   35 __key.11
 32713: ffffffff842a0b10    16 OBJECT  LOCAL  DEFAULT   35 __key.12
 32714: ffffffff842a0b00    16 OBJECT  LOCAL  DEFAULT   35 __key.11
 33307: ffffffff842a2d10    16 OBJECT  LOCAL  DEFAULT   35 __key.11
 42165: ffffffff842adb60    16 OBJECT  LOCAL  DEFAULT   35 __key.12
 42167: ffffffff842adb50    16 OBJECT  LOCAL  DEFAULT   35 __key.11
 44247: ffffffff842ae950    16 OBJECT  LOCAL  DEFAULT   35 __key.11
 44865: ffffffff842aee00    16 OBJECT  LOCAL  DEFAULT   35 __key.12
 44887: ffffffff842aedf0    16 OBJECT  LOCAL  DEFAULT   35 __key.11
 45016: ffffffff842aeed0    16 OBJECT  LOCAL  DEFAULT   35 __key.12
 45017: ffffffff842aeec0    16 OBJECT  LOCAL  DEFAULT   35 __key.11
 48389: ffffffff842b0760    16 OBJECT  LOCAL  DEFAULT   35 __key.12
 48390: ffffffff842b0750    16 OBJECT  LOCAL  DEFAULT   35 __key.11
 49274: ffffffff842b1500    16 OBJECT  LOCAL  DEFAULT   35 __key.11
 51779: ffffffff842b2820    16 OBJECT  LOCAL  DEFAULT   35 __key.12
 51780: ffffffff842b2810    16 OBJECT  LOCAL  DEFAULT   35 __key.11
 52060: ffffffff842b2cb0    16 OBJECT  LOCAL  DEFAULT   35 __key.12
 52061: ffffffff842b2ca0    16 OBJECT  LOCAL  DEFAULT   35 __key.11
 55853: ffffffff842b95c0    16 OBJECT  LOCAL  DEFAULT   35 __key.12
 62007: ffffffff842cf910    16 OBJECT  LOCAL  DEFAULT   35 __key.12
 62009: ffffffff842cf900    16 OBJECT  LOCAL  DEFAULT   35 __key.11
 63425: ffffffff842d6580    16 OBJECT  LOCAL  DEFAULT   35 __key.12
 63426: ffffffff842d6570    16 OBJECT  LOCAL  DEFAULT   35 __key.11
 64498: ffffffff842d7230    16 OBJECT  LOCAL  DEFAULT   35 __key.12
 64499: ffffffff842d7220    16 OBJECT  LOCAL  DEFAULT   35 __key.11
 66813: ffffffff842d8710    16 OBJECT  LOCAL  DEFAULT   35 __key.12
 66814: ffffffff842d8700    16 OBJECT  LOCAL  DEFAULT   35 __key.11
 69350: ffffffff842d88c0    16 OBJECT  LOCAL  DEFAULT   35 __key.12
 69351: ffffffff842d88b0    16 OBJECT  LOCAL  DEFAULT   35 __key.11

$ cat System.map| grep  static_call_initialized
ffffffff8426ba80 b static_call_initialized
$ cat System.map| grep  ffffffff8426ba80
ffffffff8426ba80 b static_call_initialized
  

as a consequence of the lockdep structures no longer being zero-sized, the address conflict disappeared

Conclusion

The phenomena described above highlight how these lesser-known mechanisms induced a bug in the current implementation of the new ks-nav feature. It turns out ks-nav now needs a mechanism to detect zero-sized objects and exclude them from evaluation. There's still work to do, but at least now I know what to blame for the hiccup. Time to teach ks-nav a new trick!

Saturday, May 4, 2024

Navigating the Syzkaller Experience: A Bug Chasing Adventure

Eisenbug hunting

Assigned the task of chasing down a bug, an eisen one, I found myself delving into the intricate world of Syzkaller, which had been used to report it. Prompted by a report from a quality engineer tester, and armed with only a kernel splat and a tarblob containing the documentation generated by Syzkaller that was supposed to reproduce the bug, I embarked on a journey of discovery.

Syzkaller, for the uninitiated, is a powerful tool designed for system call fuzzing, aimed at discovering new bugs in the Linux kernel. It utilizes a domain-specific language to describe syscalls that would deserve a shout of its own, but I'm not yet good enough to describe it in detail.
Back to the original task: unwrapping the provided tarblob revealed a sparse landscape, with only a file named corpus.db standing out from the others. Unclear on its significance, I initially assumed it to be a list of syscalls triggering the bug, only to learn that it is a set of minimal syscall inputs maximizing code coverage. Syzkaller is driven by code coverage to direct the fuzzing, and the file holds the current state of the coverage it has found.

Undeterred, I resolved to set up a Syzkaller instance to unravel its mysteries and anticipate the bug-hunting process. Building Syzkaller from source was the first step, a process that required a fair amount of time.

In order to have the system ready to start the test you need:

  • syzkaller binaries for the target architecture (host and test machine)
  • qemu for the test machine architecture
  • Kernel image prepared for the test machine architecture
  • userspace system image for the test machine architecture

Building syzkaller is quite a straightforward task:

git clone https://github.com/google/syzkaller.git
cd syzkaller
make HOSTOS=linux HOSTARCH=amd64 TARGETOS=linux TARGETARCH=arm64 -j$(nproc)

For the test, an ssh identity for syzkaller is also needed:

ssh-keygen -f ./id-rsa

and provide a configuration:

{
	"name": "QEMU-aarch64",
	"target": "linux/arm64",
	"http": ":56700",
	"workdir": "/home/alessandro/go/src/syzkaller/2test/corpus",
	"kernel_obj": "/home/alessandro/src/linux-6.8.9",
	"syzkaller": "/home/alessandro/go/src/syzkaller/",
	"enable_syscalls": ["seccomp", "geteuid", "getresuid", "getegid", "getgid", "getgroups", "getresgid"],
	"sshkey": "/home/alessandro/go/src/syzkaller/2test/id_rsa",
	"image": "/home/alessandro/src/buildroot-2024.02.1/output/images/rootfs.ext2",
	"procs": 8,
	"type": "qemu",
	"vm": {
		"count": 1,
		"qemu": "/usr/local/bin/qemu-system-aarch64",
		"kernel": "/home/alessandro/src/linux-6.8.9/arch/arm64/boot/Image",
		"cpu": 2,
		"mem": 2048
	}
}

With Syzkaller primed for action, I turned my attention to preparing a suitable testing environment. Opting for a qemu instance as the target and a kernel syscall as the quarry, I embarked on a test aimed at gaining insight into Syzkaller. In other words, I sought to observe Syzkaller's behavior when encountering a bug, without investing my entire life in the process of searching for a new bug. I chose the seccomp syscall due to its relatively low frequency in common workloads.

Seccomp, a mechanism for filtering system calls, served as the perfect candidate for my bug-hunting expedition. Armed with kernel code modifications, I prepared the groundwork for testing.

To expedite the bug-finding process, I intentionally created one. The following patch generates a crash in the seccomp syscall for 16 PIDs every 256.

diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index aca7b437882e..a0da35780eb8 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -2071,6 +2071,7 @@ static long do_seccomp(unsigned int op, unsigned int flags,
 SYSCALL_DEFINE3(seccomp, unsigned int, op, unsigned int, flags,
 		void __user *, uargs)
 {
+	if ((current->pid & 0xff) < 0x10) BUG();
 	return do_seccomp(op, flags, uargs);
 }

Next came the task of creating the userspace, for which I turned to Buildroot, a nice tool for generating custom Linux userspaces. Using it, I opted to generate cpio and ext2 images to complement the kernel image.

After generating the userspace with Buildroot, my next objective was to create a kernel image that included the cpio as the initramfs. Since the scenario didn't require an elaborate userspace, my strategy was to merge the userspace directly into the kernel image. However, it seemed I had counted my chickens before they hatched, as the 'image' argument in the configuration is mandatory. This meant that embedding the initramfs into the kernel did not make the difference I had hoped for. Now, I'm considering proposing a patch for syzkaller to make the 'image' argument optional. For the record, if you're looking to embed the initramfs into the kernel, CONFIG_INITRAMFS_SOURCE is the kernel config you'll need. Testing the image, however, presented a new challenge: incorporating the id_rsa.pub key to facilitate Syzkaller's access to the Linux image.

In tackling this obstacle, I explored two approaches: creating a new package or employing a post-build hook. Opting for the latter, I utilized the BR2_ROOTFS_POST_BUILD_SCRIPT symbol to integrate the required SSH key into the root filesystem.

The next test I made presented a new hurdle: my setup made syzkaller panic. Debugging revealed that Syzkaller expects debugfs to be mounted at its customary location; if it is not there, it simply crashes. I then updated the post-build script to ensure debugfs was properly mounted.

Note for syzkaller users who want to use buildroot to create the userspace: watch for debugfs to be properly mounted!

Now that I had a working system at last, I delved into experimenting with Syzkaller, an impressive piece of software. However, upon examining the results, it became evident that the "bug reproduction" feature fell short in reproducing the bug I had intentionally inserted into the system. It seems that Syzkaller only considers the bug's dependency on syscall inputs, neglecting the kernel's internal state. The rather straightforward bug I introduced, where the bug's behavior depends on the PID value, renders the Syzkaller bug reproduction feature ineffective.

This is what you get when hitting the "reproducing" link:

Syzkaller hit 'kernel BUG in sys_seccomp' bug. The bug is not reproducible.

Sunday, March 3, 2024

How noexecstack became a Stack of Confusion

intro

Sometimes, what we perceive as a constant in our programming environments can undergo unexpected shifts, challenging our assumptions. In my journey to understand the mechanics behind stack overflow exploits, I encountered such a shift when grappling with the intricacies of the stack. Initially, as I delved into these techniques using machines devoid of MMUs, namely, plain m68k and x86 real mode, I paid little heed to memory flags. In those days, hackers could seamlessly inject binary payloads onto the stack, redirect program flow to the designated stack address housing their payloads, and execute their exploits with ease.

However, after setting aside these experiments for a time and revisiting them on early Linux machines, I encountered a surprising obstacle around 2005: the once-reliable technique suddenly ceased to function. Upon investigation, I came to the realization that assuming the executability of the stack was no longer tenable. Henceforth, I found myself grappling with the repercussions of this change, as the default behavior of compilers had shifted to render the stack non-executable. Or so I believed, until a recent inquiry from a client prompted me to revisit this assumption, revealing a truth starkly different from my prior expectations.

chapter 1 - What it seems like

So, what do we have here? Since 2005, something peculiar has emerged. When compiling a simple, trivial program using the C compiler, we observe the following:

$ echo -e "#include <stdio.h>\nint main(){printf(\"hello\\\n\");}"| gcc -x c -o hello - ;readelf -l hello

Elf file type is EXEC (Executable file)
Entry point 0x4004a0
There are 9 program headers, starting at offset 64

Program Headers:
  Type           Offset             VirtAddr           PhysAddr
                 FileSiz            MemSiz              Flags  Align
  PHDR           0x0000000000000040 0x0000000000400040 0x0000000000400040
                 0x00000000000001f8 0x00000000000001f8  R      0x8
  INTERP         0x0000000000000238 0x0000000000400238 0x0000000000400238
                 0x000000000000001c 0x000000000000001c  R      0x1
      [Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]
  LOAD           0x0000000000000000 0x0000000000400000 0x0000000000400000
                 0x0000000000000768 0x0000000000000768  R E    0x200000
  LOAD           0x0000000000000e00 0x0000000000600e00 0x0000000000600e00
                 0x0000000000000224 0x0000000000000228  RW     0x200000
  DYNAMIC        0x0000000000000e10 0x0000000000600e10 0x0000000000600e10
                 0x00000000000001d0 0x00000000000001d0  RW     0x8
  NOTE           0x0000000000000254 0x0000000000400254 0x0000000000400254
                 0x0000000000000044 0x0000000000000044  R      0x4
  GNU_EH_FRAME   0x0000000000000640 0x0000000000400640 0x0000000000400640
                 0x000000000000003c 0x000000000000003c  R      0x4
  GNU_STACK      0x0000000000000000 0x0000000000000000 0x0000000000000000
                 0x0000000000000000 0x0000000000000000  RW     0x10
  GNU_RELRO      0x0000000000000e00 0x0000000000600e00 0x0000000000600e00
                 0x0000000000000200 0x0000000000000200  R      0x1

 Section to Segment mapping:
  Segment Sections...
   00
   01     .interp
   02     .interp .note.ABI-tag .note.gnu.build-id .gnu.hash .dynsym .dynstr .gnu.version .gnu.version_r .rela.dyn .rela.plt .init .plt .text .fini .rodata .eh_frame_hdr .eh_frame
   03     .init_array .fini_array .dynamic .got .got.plt .data .bss
   04     .dynamic
   05     .note.ABI-tag .note.gnu.build-id
   06     .eh_frame_hdr
   07
   08     .init_array .fini_array .dynamic .got

Not much needs to be said; the stack lacks the executable flag: RW in the GNU_STACK program header. Any attempt to execute code from this space inevitably results in a graceful crash, marked by the familiar segmentation fault (SIGSEGV).

Conversely, if our intention is to create an executable stack, we must explicitly instruct the compiler to do so.

$ echo -e "#include <stdio.h>\nint main(){printf(\"hello\\\n\");}"| gcc -x c -z execstack -o hello - ;readelf -l hello

Elf file type is EXEC (Executable file)
Entry point 0x4004a0
There are 9 program headers, starting at offset 64

Program Headers:
  Type           Offset             VirtAddr           PhysAddr
                 FileSiz            MemSiz              Flags  Align
  PHDR           0x0000000000000040 0x0000000000400040 0x0000000000400040
                 0x00000000000001f8 0x00000000000001f8  R      0x8
  INTERP         0x0000000000000238 0x0000000000400238 0x0000000000400238
                 0x000000000000001c 0x000000000000001c  R      0x1
      [Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]
  LOAD           0x0000000000000000 0x0000000000400000 0x0000000000400000
                 0x0000000000000768 0x0000000000000768  R E    0x200000
  LOAD           0x0000000000000e00 0x0000000000600e00 0x0000000000600e00
                 0x0000000000000224 0x0000000000000228  RW     0x200000
  DYNAMIC        0x0000000000000e10 0x0000000000600e10 0x0000000000600e10
                 0x00000000000001d0 0x00000000000001d0  RW     0x8
  NOTE           0x0000000000000254 0x0000000000400254 0x0000000000400254
                 0x0000000000000044 0x0000000000000044  R      0x4
  GNU_EH_FRAME   0x0000000000000640 0x0000000000400640 0x0000000000400640
                 0x000000000000003c 0x000000000000003c  R      0x4
  GNU_STACK      0x0000000000000000 0x0000000000000000 0x0000000000000000
                 0x0000000000000000 0x0000000000000000  RWE    0x10
  GNU_RELRO      0x0000000000000e00 0x0000000000600e00 0x0000000000600e00
                 0x0000000000000200 0x0000000000000200  R      0x1

 Section to Segment mapping:
  Segment Sections...
   00
   01     .interp
   02     .interp .note.ABI-tag .note.gnu.build-id .gnu.hash .dynsym .dynstr .gnu.version .gnu.version_r .rela.dyn .rela.plt .init .plt .text .fini .rodata .eh_frame_hdr .eh_frame
   03     .init_array .fini_array .dynamic .got .got.plt .data .bss
   04     .dynamic
   05     .note.ABI-tag .note.gnu.build-id
   06     .eh_frame_hdr
   07
   08     .init_array .fini_array .dynamic .got

Upon inspection, RWE in the GNU_STACK program header confirms the presence of an executable stack.

In summary, the probability of encountering a new binary with an executable stack in contemporary settings is close to zero. Such instances may occur only if someone utilizes an outdated compiler or requires an executable stack for specific reasons. So, why would anyone desire an executable stack?

Perhaps solely to revisit the methods employed in old-fashioned stack overflow exploits!

Chapter 2 - Things are never as easy as they seem

Recently, a customer posed what initially appeared to be a trivial question: “What flag should I use to ensure that the stack remains non-executable?” I brushed it off as a simple matter, assuming that no action was needed since it was the default behavior.

However, the response from a knowledgeable individual surprised me: simply use -z noexecstack. This prompted me to question why such a flag even existed. After all, if the default behavior is to have a non-executable stack, what purpose does this flag serve?

Reflecting on past encounters with this flag, I had rationalized its existence by speculating, “Perhaps it’s necessary for exotic architectures where the default is to have an executable stack”.

However, I soon realized that the reality is far more complex than it initially seemed.

Let’s begin by clarifying: compilers do not manipulate stack flags; this task falls under the responsibility of the linker. The final executable is created by linking together all the object files generated by the compiler.

During the creation of ELF sections, the linker scans the input files for a specific section named .note.GNU-stack. This section conveys whether an executable stack is required or not.

According to the linker’s manual page, if an input file lacks a .note.GNU-stack section, then the default behavior is architecture-specific.

As I couldn’t find where this default behavior is specified, let’s conduct a couple of tests. You can find a collection of tests I’ve prepared in this repository.

Consider the gcc/asm_function executable file, which is a simple C executable that includes a basic function from an assembly file. Below is the relevant portion of the Makefile used to build it:

gcc/asm_function.o: src/asm_function.S
	gcc -g -c -o gcc/asm_function.o src/asm_function.S
gcc/test_asm.o: src/test_asm.c
	gcc -g -c -o gcc/test_asm.o src/test_asm.c
gcc/test_asm: gcc/test_asm.o gcc/asm_function.o
	gcc -g gcc/test_asm.o gcc/asm_function.o -o gcc/asm_function

Upon examining the generated object file, you’ll notice the absence of the .note.GNU-stack section. However, upon inspecting the resultant executable, you’ll observe that the stack is indeed marked as executable.

$ readelf -S gcc/asm_function.o
There are 15 section headers, starting at offset 0x3b8:

Section Headers:
  [Nr] Name              Type             Address           Offset
       Size              EntSize          Flags  Link  Info  Align
  [ 0]                   NULL             0000000000000000  00000000
       0000000000000000  0000000000000000           0     0     0
  [ 1] .text             PROGBITS         0000000000000000  00000040
       0000000000000006  0000000000000000  AX       0     0     1
  [ 2] .data             PROGBITS         0000000000000000  00000046
       0000000000000000  0000000000000000  WA       0     0     1
  [ 3] .bss              NOBITS           0000000000000000  00000046
       0000000000000000  0000000000000000  WA       0     0     1
  [ 4] .debug_line       PROGBITS         0000000000000000  00000046
       0000000000000045  0000000000000000           0     0     1
  [ 5] .rela.debug_line  RELA             0000000000000000  00000248
       0000000000000018  0000000000000018   I      12     4     8
  [ 6] .debug_info       PROGBITS         0000000000000000  0000008b
       000000000000002e  0000000000000000           0     0     1
  [ 7] .rela.debug_info  RELA             0000000000000000  00000260
       00000000000000a8  0000000000000018   I      12     6     8
  [ 8] .debug_abbrev     PROGBITS         0000000000000000  000000b9
       0000000000000014  0000000000000000           0     0     1
  [ 9] .debug_aranges    PROGBITS         0000000000000000  000000d0
       0000000000000030  0000000000000000           0     0    16
  [10] .rela.debug_arang RELA             0000000000000000  00000308
       0000000000000030  0000000000000018   I      12     9     8
  [11] .debug_str        PROGBITS         0000000000000000  00000100
       0000000000000045  0000000000000001  MS       0     0     1
  [12] .symtab           SYMTAB           0000000000000000  00000148
       00000000000000f0  0000000000000018          13     9     8
  [13] .strtab           STRTAB           0000000000000000  00000238
       0000000000000009  0000000000000000           0     0     1
  [14] .shstrtab         STRTAB           0000000000000000  00000338
       000000000000007b  0000000000000000           0     0     1
Key to Flags:
  W (write), A (alloc), X (execute), M (merge), S (strings), I (info),
  L (link order), O (extra OS processing required), G (group), T (TLS),
  C (compressed), x (unknown), o (OS specific), E (exclude), l (large),
  p (processor specific)

$ readelf -l gcc/asm_function

Elf file type is DYN (Shared object file)
Entry point 0x1040
There are 11 program headers, starting at offset 64

Program Headers:
  Type           Offset             VirtAddr           PhysAddr
                 FileSiz            MemSiz              Flags  Align
  PHDR           0x0000000000000040 0x0000000000000040 0x0000000000000040
                 0x0000000000000268 0x0000000000000268  R      0x8
  INTERP         0x00000000000002a8 0x00000000000002a8 0x00000000000002a8
                 0x000000000000001c 0x000000000000001c  R      0x1
      [Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]
  LOAD           0x0000000000000000 0x0000000000000000 0x0000000000000000
                 0x0000000000000530 0x0000000000000530  R      0x1000
  LOAD           0x0000000000001000 0x0000000000001000 0x0000000000001000
                 0x00000000000001d5 0x00000000000001d5  R E    0x1000
  LOAD           0x0000000000002000 0x0000000000002000 0x0000000000002000
                 0x0000000000000130 0x0000000000000130  R      0x1000
  LOAD           0x0000000000002df0 0x0000000000003df0 0x0000000000003df0
                 0x0000000000000220 0x0000000000000228  RW     0x1000
  DYNAMIC        0x0000000000002e00 0x0000000000003e00 0x0000000000003e00
                 0x00000000000001c0 0x00000000000001c0  RW     0x8
  NOTE           0x00000000000002c4 0x00000000000002c4 0x00000000000002c4
                 0x0000000000000044 0x0000000000000044  R      0x4
  GNU_EH_FRAME   0x0000000000002004 0x0000000000002004 0x0000000000002004
                 0x000000000000003c 0x000000000000003c  R      0x4
  GNU_STACK      0x0000000000000000 0x0000000000000000 0x0000000000000000
                 0x0000000000000000 0x0000000000000000  RWE    0x10
  GNU_RELRO      0x0000000000002df0 0x0000000000003df0 0x0000000000003df0
                 0x0000000000000210 0x0000000000000210  R      0x1

 Section to Segment mapping:
  Segment Sections...
   00
   01     .interp
   02     .interp .note.gnu.build-id .note.ABI-tag .gnu.hash .dynsym .dynstr .gnu.version .gnu.version_r .rela.dyn
   03     .init .plt .plt.got .text .fini
   04     .rodata .eh_frame_hdr .eh_frame
   05     .init_array .fini_array .dynamic .got .data .bss
   06     .dynamic
   07     .note.gnu.build-id .note.ABI-tag
   08     .eh_frame_hdr
   09
   10     .init_array .fini_array .dynamic .got

This suggests that the default for the x86_64 architecture is an executable stack. Doing the same for aarch64 produces the following:

$ readelf -S gcc/asm_function.aarch64.o
There are 15 section headers, starting at offset 0x400:

Section Headers:
  [Nr] Name              Type             Address           Offset
       Size              EntSize          Flags  Link  Info  Align
  [ 0]                   NULL             0000000000000000  00000000
       0000000000000000  0000000000000000           0     0     0
  [ 1] .text             PROGBITS         0000000000000000  00000040
       0000000000000010  0000000000000000  AX       0     0     8
  [ 2] .data             PROGBITS         0000000000000000  00000050
       0000000000000000  0000000000000000  WA       0     0     1
  [ 3] .bss              NOBITS           0000000000000000  00000050
       0000000000000000  0000000000000000  WA       0     0     1
  [ 4] .debug_line       PROGBITS         0000000000000000  00000050
       000000000000004c  0000000000000000           0     0     1
  [ 5] .rela.debug_line  RELA             0000000000000000  00000290
       0000000000000018  0000000000000018   I      12     4     8
  [ 6] .debug_info       PROGBITS         0000000000000000  0000009c
       000000000000002e  0000000000000000           0     0     1
  [ 7] .rela.debug_info  RELA             0000000000000000  000002a8
       00000000000000a8  0000000000000018   I      12     6     8
  [ 8] .debug_abbrev     PROGBITS         0000000000000000  000000ca
       0000000000000014  0000000000000000           0     0     1
  [ 9] .debug_aranges    PROGBITS         0000000000000000  000000e0
       0000000000000030  0000000000000000           0     0    16
  [10] .rela.debug_arang RELA             0000000000000000  00000350
       0000000000000030  0000000000000018   I      12     9     8
  [11] .debug_str        PROGBITS         0000000000000000  00000110
       000000000000004d  0000000000000001  MS       0     0     1
  [12] .symtab           SYMTAB           0000000000000000  00000160
       0000000000000120  0000000000000018          13    11     8
  [13] .strtab           STRTAB           0000000000000000  00000280
       000000000000000f  0000000000000000           0     0     1
  [14] .shstrtab         STRTAB           0000000000000000  00000380
       000000000000007b  0000000000000000           0     0     1
Key to Flags:
  W (write), A (alloc), X (execute), M (merge), S (strings), I (info),
  L (link order), O (extra OS processing required), G (group), T (TLS),
  C (compressed), x (unknown), o (OS specific), E (exclude),
  p (processor specific)

$ readelf -l gcc/asm_function.aarch64

Elf file type is DYN (Shared object file)
Entry point 0x610
There are 9 program headers, starting at offset 64

Program Headers:
  Type           Offset             VirtAddr           PhysAddr
                 FileSiz            MemSiz              Flags  Align
  PHDR           0x0000000000000040 0x0000000000000040 0x0000000000000040
                 0x00000000000001f8 0x00000000000001f8  R      0x8
  INTERP         0x0000000000000238 0x0000000000000238 0x0000000000000238
                 0x000000000000001b 0x000000000000001b  R      0x1
      [Requesting program interpreter: /lib/ld-linux-aarch64.so.1]
  LOAD           0x0000000000000000 0x0000000000000000 0x0000000000000000
                 0x000000000000090c 0x000000000000090c  R E    0x10000
  LOAD           0x0000000000000d88 0x0000000000010d88 0x0000000000010d88
                 0x0000000000000288 0x0000000000000290  RW     0x10000
  DYNAMIC        0x0000000000000d98 0x0000000000010d98 0x0000000000010d98
                 0x00000000000001f0 0x00000000000001f0  RW     0x8
  NOTE           0x0000000000000254 0x0000000000000254 0x0000000000000254
                 0x0000000000000044 0x0000000000000044  R      0x4
  GNU_EH_FRAME   0x00000000000007e0 0x00000000000007e0 0x00000000000007e0
                 0x0000000000000044 0x0000000000000044  R      0x4
  GNU_STACK      0x0000000000000000 0x0000000000000000 0x0000000000000000
                 0x0000000000000000 0x0000000000000000  RW     0x10
  GNU_RELRO      0x0000000000000d88 0x0000000000010d88 0x0000000000010d88
                 0x0000000000000278 0x0000000000000278  R      0x1

 Section to Segment mapping:
  Segment Sections...
   00
   01     .interp
   02     .interp .note.gnu.build-id .note.ABI-tag .gnu.hash .dynsym .dynstr .gnu.version .gnu.version_r .rela.dyn .rela.plt .init .plt .text .fini .rodata .eh_frame_hdr .eh_frame
   03     .init_array .fini_array .dynamic .got .data .bss
   04     .dynamic
   05     .note.gnu.build-id .note.ABI-tag
   06     .eh_frame_hdr
   07
   08     .init_array .fini_array .dynamic .got

Here the GNU_STACK segment is the opposite, marked RW rather than RWE, suggesting that the default for this architecture is a non-executable stack.

chapter 3 - is that all?

Returning to the original topic: based on the observations in the previous chapter, we can deduce that the two architectures, x86_64 and aarch64, have different defaults regarding executable stacks. But is that the whole story?

Apparently not. There are cases where the compiler needs to generate code at run time and execute it, often using the stack for this purpose. In such cases, the resulting executable file will indeed have an executable stack.

There might be other cases out there, but after a thorough search, I couldn’t find anything except the GNU C extension “nested functions.” It’s possible that not many people are aware of this feature - I certainly wasn’t until recently. Nested functions can be written in C, but only with GCC; clang does not support them.

Nested functions are functions defined within the body of another function. These inner functions have access to the variables and parameters of the enclosing function and can only be invoked within its scope. GCC allows them, but for them to work when their address is taken and they are called indirectly from another function, the stack needs to be executable.

Let’s consider an example:

int nested_carrier(int a, int b, int n)
{
	int loc_var = n;

	int multiply2(int z)
	{
		return z + z + loc_var;
	}

	return sum_func(multiply2, a, b);
}

In this function, multiply2 is passed to the external function sum_func, which will call it. Now, let’s examine the assembly implementation of nested_carrier.

┌ 151: dbg.nested_carrier (int64_t arg1, int64_t arg2, int64_t arg3, int64_t arg_10h);
│ ; arg int64_t arg1 @ rdi
│ ; arg int64_t arg2 @ rsi
│ ; arg int64_t arg3 @ rdx
│ ; arg int64_t arg_10h @ rbp+0x10
│ ; var int z @ rbp-0x4
│ ; var int64_t canary @ rbp-0x8
│ ; var int64_t var_10h @ rbp-0x10
│ ; var int loc_var @ rbp-0x30
│ ; var int a @ rbp-0x34
│ ; var int b @ rbp-0x38
│ ; var int n @ rbp-0x3c
│ 0x00001187   f30f1efa      endbr64                      ; nested_local.c:5 int nested_carrier (int a, int b, int n) {
│                                                         ; int nested_carrier(int a,int b,int n);
│ 0x0000118b   55            push rbp
│ 0x0000118c   4889e5        mov rbp, rsp
│ 0x0000118f   4883ec40      sub rsp, 0x40
│ 0x00001193   897dcc        mov dword [a], edi           ; arg1
│ 0x00001196   8975c8        mov dword [b], esi           ; arg2
│ 0x00001199   8955c4        mov dword [n], edx           ; arg3
│ 0x0000119c   64488b0425..  mov rax, qword fs:[0x28]
│ 0x000011a5   488945f8      mov qword [canary], rax      ; Just bought my self a new canary
│ 0x000011a9   31c0          xor eax, eax
│ 0x000011ab   488d4510      lea rax, [arg_10h]
│ 0x000011af   488945f0      mov qword [var_10h], rax
│ 0x000011b3   488d45d0      lea rax, [loc_var]
│ 0x000011b7   4883c004      add rax, 4
│ 0x000011bb   488d55d0      lea rdx, [loc_var]
│ 0x000011bf   c700f30f1efa  mov dword [rax], 0xfa1e0ff3  ; Here it is writing the trampoline, note the endbr64 opcode
│ 0x000011c5   66c7400449bb  mov word [rax + 4], 0xbb49   ; it stores in the stack
│ 0x000011cb   488d0d97ff..  lea rcx, [dbg.multiply2]     ; as the multiply2 address
│ 0x000011d2   48894806      mov qword [rax + 6], rcx
│ 0x000011d6   66c7400e49ba  mov word [rax + 0xe], 0xba49 ; another opcode
│ 0x000011dc   48895010      mov qword [rax + 0x10], rdx  ; this is the base address to locate parent local vars
│ 0x000011e0   c7401849ff..  mov dword [rax + 0x18], 0x90e3ff49 ; more opcodes
│ 0x000011e7   8b45c4        mov eax, dword [n]           ; nested_local.c:6 int loc_var = n;
│ 0x000011ea   8945d0        mov dword [loc_var], eax
│ 0x000011ed   488d45d0      lea rax, [loc_var]           ; nested_local.c:8 return sum_func (multiply2, a, b);
│ 0x000011f1   4883c004      add rax, 4
│ 0x000011f5   4889c1        mov rcx, rax                 ; save trampoline address
│ 0x000011f8   8b55c8        mov edx, dword [b]           ; int64_t arg3 = b
│ 0x000011fb   8b45cc        mov eax, dword [a]
│ 0x000011fe   89c6          mov esi, eax                 ; int64_t arg2 = a
│ 0x00001200   4889cf        mov rdi, rcx                 ; int64_t arg1 = trampoline address!
│ 0x00001203   e847000000    call dbg.sum_func
│ 0x00001208   488b75f8      mov rsi, qword [canary]      ; Hey canary, are you there?!
│ 0x0000120c   6448333425..  xor rsi, qword fs:[0x28]     ; are still you!?
│ ┌─< 0x00001215   7405          je 0x121c                ; stack overflow check
│ │   0x00001217   e844feffff    call sym.imp.__stack_chk_fail ; crash if canary is failing
│ └─> 0x0000121c   c9            leave
└     0x0000121d   c3            ret

Examining this code, we notice some “alien code” added by our trusty compiler friend. Let’s set aside the canary stack check for now; our current focus is the trampoline the compiler constructs to make the external call possible. Within the function body we can clearly see the trampoline being built, followed by the point at which its address is used for the external function call.

(gdb) x/10i $pc
=> 0x7fffffffdcc0:	endbr64
   0x7fffffffdcc4:	movabs $0x555555555169,%r11
   0x7fffffffdcce:	movabs $0x7fffffffdcc0,%r10
   0x7fffffffdcd8:	rex.WB jmpq *%r11

Above, captured with our buddy GDB, is the finished trampoline as it appears on the stack at run time. Let’s break it down into its four instructions:

  1. The endbr64 instruction was introduced as part of the Intel Control-flow Enforcement Technology (CET) extension. Don’t confuse it with Cache Allocation Technology (CAT), another CPU feature. Phew, the acronyms are piling up! Anyway, this instruction isn’t really pertinent to our analysis; it’s included because the machine executing this code expects it to be present. endbr64 marks a valid target for an indirect branch, which helps prevent ROP/JOP gadgets from being chained together.
  2. movabs $0x555555555169,%r11: This instruction loads our target function address, multiply2, into register r11.
  3. movabs $0x7fffffffdcc0,%r10: Let’s recall the x86_64 ABI: parameters are passed in the registers rdi, rsi, rdx, rcx, r8, r9, and additional values are passed on the stack in reverse order. This instruction instead uses r10, the register the System V x86_64 ABI sets aside as the static chain pointer, to pass the base address of the parent’s local variables.
  4. rex.WB jmpq *%r11: This is a straightforward indirect jump (not a call) that we know will land at address 0x555555555169, corresponding to the multiply2 function.

Now that we are aware of at least one other scenario where the compiler may necessitate an executable stack, let’s explore how this is reflected in the executable:

$ readelf -l gcc/nested_local

Elf file type is DYN (Shared object file)
Entry point 0x1080
There are 13 program headers, starting at offset 64

Program Headers:
  Type           Offset             VirtAddr           PhysAddr
                 FileSiz            MemSiz              Flags  Align
  PHDR           0x0000000000000040 0x0000000000000040 0x0000000000000040
                 0x00000000000002d8 0x00000000000002d8  R      0x8
  INTERP         0x0000000000000318 0x0000000000000318 0x0000000000000318
                 0x000000000000001c 0x000000000000001c  R      0x1
      [Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]
  LOAD           0x0000000000000000 0x0000000000000000 0x0000000000000000
                 0x0000000000000658 0x0000000000000658  R      0x1000
  LOAD           0x0000000000001000 0x0000000000001000 0x0000000000001000
                 0x0000000000000315 0x0000000000000315  R E    0x1000
  LOAD           0x0000000000002000 0x0000000000002000 0x0000000000002000
                 0x00000000000001e0 0x00000000000001e0  R      0x1000
  LOAD           0x0000000000002db0 0x0000000000003db0 0x0000000000003db0
                 0x0000000000000260 0x0000000000000268  RW     0x1000
  DYNAMIC        0x0000000000002dc0 0x0000000000003dc0 0x0000000000003dc0
                 0x00000000000001f0 0x00000000000001f0  RW     0x8
  NOTE           0x0000000000000338 0x0000000000000338 0x0000000000000338
                 0x0000000000000020 0x0000000000000020  R      0x8
  NOTE           0x0000000000000358 0x0000000000000358 0x0000000000000358
                 0x0000000000000044 0x0000000000000044  R      0x4
  GNU_PROPERTY   0x0000000000000338 0x0000000000000338 0x0000000000000338
                 0x0000000000000020 0x0000000000000020  R      0x8
  GNU_EH_FRAME   0x000000000000201c 0x000000000000201c 0x000000000000201c
                 0x000000000000005c 0x000000000000005c  R      0x4
  GNU_STACK      0x0000000000000000 0x0000000000000000 0x0000000000000000
                 0x0000000000000000 0x0000000000000000  RWE    0x10
  GNU_RELRO      0x0000000000002db0 0x0000000000003db0 0x0000000000003db0
                 0x0000000000000250 0x0000000000000250  R      0x1

 Section to Segment mapping:
  Segment Sections...
   00
   01     .interp
   02     .interp .note.gnu.property .note.gnu.build-id .note.ABI-tag .gnu.hash .dynsym .dynstr .gnu.version .gnu.version_r .rela.dyn .rela.plt
   03     .init .plt .plt.got .plt.sec .text .fini
   04     .rodata .eh_frame_hdr .eh_frame
   05     .init_array .fini_array .dynamic .got .data .bss
   06     .dynamic
   07     .note.gnu.property
   08     .note.gnu.build-id .note.ABI-tag
   09     .note.gnu.property
   10     .eh_frame_hdr
   11
   12     .init_array .fini_array .dynamic .got

$ readelf -l gcc/nested_local.ne

Elf file type is DYN (Shared object file)
Entry point 0x1080
There are 13 program headers, starting at offset 64

Program Headers:
  Type           Offset             VirtAddr           PhysAddr
                 FileSiz            MemSiz              Flags  Align
  PHDR           0x0000000000000040 0x0000000000000040 0x0000000000000040
                 0x00000000000002d8 0x00000000000002d8  R      0x8
  INTERP         0x0000000000000318 0x0000000000000318 0x0000000000000318
                 0x000000000000001c 0x000000000000001c  R      0x1
      [Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]
  LOAD           0x0000000000000000 0x0000000000000000 0x0000000000000000
                 0x0000000000000658 0x0000000000000658  R      0x1000
  LOAD           0x0000000000001000 0x0000000000001000 0x0000000000001000
                 0x0000000000000315 0x0000000000000315  R E    0x1000
  LOAD           0x0000000000002000 0x0000000000002000 0x0000000000002000
                 0x00000000000001e0 0x00000000000001e0  R      0x1000
  LOAD           0x0000000000002db0 0x0000000000003db0 0x0000000000003db0
                 0x0000000000000260 0x0000000000000268  RW     0x1000
  DYNAMIC        0x0000000000002dc0 0x0000000000003dc0 0x0000000000003dc0
                 0x00000000000001f0 0x00000000000001f0  RW     0x8
  NOTE           0x0000000000000338 0x0000000000000338 0x0000000000000338
                 0x0000000000000020 0x0000000000000020  R      0x8
  NOTE           0x0000000000000358 0x0000000000000358 0x0000000000000358
                 0x0000000000000044 0x0000000000000044  R      0x4
  GNU_PROPERTY   0x0000000000000338 0x0000000000000338 0x0000000000000338
                 0x0000000000000020 0x0000000000000020  R      0x8
  GNU_EH_FRAME   0x000000000000201c 0x000000000000201c 0x000000000000201c
                 0x000000000000005c 0x000000000000005c  R      0x4
  GNU_STACK      0x0000000000000000 0x0000000000000000 0x0000000000000000
                 0x0000000000000000 0x0000000000000000  RW     0x10
  GNU_RELRO      0x0000000000002db0 0x0000000000003db0 0x0000000000003db0
                 0x0000000000000250 0x0000000000000250  R      0x1

 Section to Segment mapping:
  Segment Sections...
   00
   01     .interp
   02     .interp .note.gnu.property .note.gnu.build-id .note.ABI-tag .gnu.hash .dynsym .dynstr .gnu.version .gnu.version_r .rela.dyn .rela.plt
   03     .init .plt .plt.got .plt.sec .text .fini
   04     .rodata .eh_frame_hdr .eh_frame
   05     .init_array .fini_array .dynamic .got .data .bss
   06     .dynamic
   07     .note.gnu.property
   08     .note.gnu.build-id .note.ABI-tag
   09     .note.gnu.property
   10     .eh_frame_hdr
   11
   12     .init_array .fini_array .dynamic .got

Above, you can observe the ELF program header tables of two executable files, both generated from the same source file, src/nested_local.c, in the repository. They differ because in one instance I added -z noexecstack to enforce a non-executable stack. This is what happens when they are executed:

$ ./gcc/nested_local; echo
Fancy calculation (34)
alessandro@r5:~/tmp/stack/nested_prt$ ./gcc/nested_local.ne; echo
Segmentation fault (core dumped)

Since the trampoline lives on the stack, the second binary crashes when executed because it tries to run code from the stack. Here’s the proof that this is the cause of the crash:

$ gdb ./gcc/nested_local.ne
GNU gdb (Ubuntu 9.2-0ubuntu1~20.04.1) 9.2
Copyright (C) 2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./gcc/nested_local.ne...
(gdb) r
Starting program: /home/alessandro/tmp/stack/nested_prt/gcc/nested_local.ne

Program received signal SIGSEGV, Segmentation fault.
0x00007fffffffdc34 in ?? ()
(gdb) x/10i $pc
=> 0x7fffffffdc34:	endbr64
   0x7fffffffdc38:	movabs $0x555555555169,%r11
   0x7fffffffdc42:	movabs $0x7fffffffdc30,%r10
   0x7fffffffdc4c:	rex.WB jmpq *%r11
   0x7fffffffdc4f:	nop
   0x7fffffffdc50:	jo 0x7fffffffdc2e
   0x7fffffffdc52:	(bad)
   0x7fffffffdc53:	(bad)
   0x7fffffffdc54:	(bad)
   0x7fffffffdc55:	jg 0x7fffffffdc57
(gdb)

Finally, to further complicate matters, GCC employs different conventions across architectures. Please do not expect this to be straightforward, because it certainly isn’t!

In x86_64, executable ELF files always contain an entry in the program header GNU_STACK, which reflects the actual permissions over the stack. When the linker combines objects to create the executable, it looks at .note.GNU-stack and its contents to set the stack accordingly. If .note.GNU-stack is missing, the stack defaults to executable.

Similarly, in aarch64, executable ELF files always include an entry in the program header GNU_STACK, with flags reflecting the stack’s permissions. The linker examines .note.GNU-stack during the executable creation process to determine the stack’s permissions. If .note.GNU-stack is absent, the stack defaults to non-executable.

On PPC64, executable ELF files only include an entry in the program header GNU_STACK if it needs to be executable; otherwise, it defaults to non-executable.

Conversely, in MIPS32, executable ELF files only have an entry in the program header GNU_STACK if it needs to be non-executable; otherwise, it defaults to executable.

As a final note for this extensive and perhaps tedious discussion on executable stacks, allow me to share what I discovered while verifying this information on a MIPS system.

Look at how some MIPS SoCs do not enforce the stack permissions

epilogue

If you want to make sure your stack is not executable, add -z noexecstack to your linker flags (or -Wl,-z,noexecstack when linking through the compiler driver).