Welcome to libdebug! This powerful Python library can be used to debug your binary executables programmatically, providing a robust, user-friendly interface. Debugging multithreaded applications can be a nightmare, but libdebug has you covered. Hijack and manage signals and syscalls with a simple API.
Supported Systems
libdebug currently supports Linux under the x86_64, x86 and ARM64 architectures. Other operating systems and architectures are not supported at this time.
Dependencies

To install libdebug, you first need to have some dependencies that will not be automatically resolved. These dependencies are libraries, utilities and development headers which are required by libdebug to compile its internals during installation.
Ubuntu / Debian:

```shell
sudo apt install -y python3 python3-dev g++ libdwarf-dev libelf-dev libiberty-dev linux-headers-generic libc6-dbg
```

Arch Linux:

```shell
sudo pacman -S python libelf libdwarf gcc make debuginfod
```

Fedora:

```shell
sudo dnf install -y python3 python3-devel kernel-devel g++ binutils-devel libdwarf-devel
```

Is your distro missing?
If you are using a Linux distribution that is not included in this section, you can search for equivalent packages for your distro. Chances are the naming convention of your system's repository will only change a prefix or suffix.
Installation

Installing libdebug once you have the dependencies is as simple as running the following command:
```shell
python3 -m pip install libdebug
```

If you want to test your installation when installing from source, we provide a suite of tests that you can run:
Testing your installation

```shell
git clone https://github.com/libdebug/libdebug
cd libdebug/test
python run_suite.py
```

For more advanced users, please refer to the Building libdebug from source page for more information on the build process.
Your First Script

Now that you have libdebug installed, you can start using it in your scripts. Here is a simple example of how to use libdebug to debug an executable:
libdebug's Hello World!

```python
from libdebug import debugger

d = debugger("./test")  # create a debugger for the test executable

# Start debugging from the entry point
d.run()

my_breakpoint = d.breakpoint("function")  # break on <function> in the binary

# Continue the execution until the breakpoint is hit
d.cont()

# Print RAX
print(f"RAX is {hex(d.regs.rax)}")
```

Using pwntools alongside libdebug
The current version of libdebug is incompatible with pwntools.
While having both installed in your Python environment is not a problem, starting a process with pwntools in a libdebug script will cause unexpected behaviors as a result of some race conditions.
Examples of some known issues include:
- ptrace not intercepting SIGTRAP signals when the process is run with pwntools. This behavior is described in Issue #48.
- Running the process with shell=True will cause the debugger to attach to the shell process instead. This behavior is described in Issue #57.

The documentation for versions of libdebug older than 0.7.0 has to be accessed manually at http://docs.libdebug.org/archive/VERSION, where VERSION is the version number you are looking for.
Need to cite libdebug as software used in your work? This is the way to cite us:
```
@software{libdebug_2024,
  title = {libdebug: {Build} {Your} {Own} {Debugger}},
  copyright = {MIT Licence},
  url = {https://libdebug.org},
  publisher = {libdebug.org},
  author = {Digregorio, Gabriele and Bertolini, Roberto Alessandro and Panebianco, Francesco and Polino, Mario},
  year = {2024},
  doi = {10.5281/zenodo.13151549},
}
```

We also have a poster on libdebug. If you use libdebug in your research, you can cite the associated short paper:
```
@inproceedings{10.1145/3658644.3691391,
  author = {Digregorio, Gabriele and Bertolini, Roberto Alessandro and Panebianco, Francesco and Polino, Mario},
  title = {Poster: libdebug, Build Your Own Debugger for a Better (Hello) World},
  year = {2024},
  isbn = {9798400706363},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3658644.3691391},
  doi = {10.1145/3658644.3691391},
  booktitle = {Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security},
  pages = {4976--4978},
  numpages = {3},
  keywords = {debugging, reverse engineering, software security},
  location = {Salt Lake City, UT, USA},
  series = {CCS '24}
}
```

Default VS ASAP Mode

For most commands that can be issued in libdebug, it is necessary that the traced process stops running. When the traced process stops as a result of a stopping event, libdebug can inspect its state and intervene in its control flow. When one of these commands is used in the script while the process is still running, libdebug will wait for the process to stop before executing the command.
In the following example, the content of the RAX register is printed after the program hits the breakpoint or stops for any other reason:
```python
from libdebug import debugger

d = debugger("program")
d.run()

d.breakpoint("func", file="binary")

d.cont()

print(f"RAX: {hex(d.regs.rax)}")
```

Script execution
Please note that, after resuming execution of the tracee process, the script will continue to run. This means that the script will not wait for the process to stop before continuing with the rest of the script. If the next command is a libdebug command that requires the process to be stopped, the script will then wait for a stopping event before executing that command.
The following example presents a similar scenario, but shows how you can inspect the state of the process by arbitrarily stopping it in the default mode.
```python
d = debugger("program")

d.run()

d.breakpoint("func", file="binary")

d.cont()

print(f"RAX: {hex(d.regs.rax)}")  # waits for the breakpoint, then reads RAX

d.cont()
d.interrupt()  # forcibly stops the process

print(f"RAX: {hex(d.regs.rax)}")  # reads RAX at the interruption point

d.cont()

[...]
```

If you want the command to be executed As Soon As Possible (ASAP) instead of waiting for a stopping event, you can specify it when creating the Debugger object. In this mode, the debugger will stop the process and issue the command as it runs your script, without waiting. The following script has the same behavior as the previous one, using the corresponding option:
```python
d = debugger("program", auto_interrupt_on_command=True)

d.run()

d.breakpoint("func", file="binary")

d.cont()
d.wait()

print(f"RAX: {hex(d.regs.rax)}")  # read at the breakpoint, thanks to wait()

d.cont()

print(f"RAX: {hex(d.regs.rax)}")  # ASAP mode: the process is stopped to read RAX

d.cont()

[...]
```

For the sake of this example, the wait() method is used to wait for the stopping event (in this case, a breakpoint). This enforces the synchronization of the execution to the stopping point that we want to reach. Read more about the wait() method in the section dedicated to control flow commands.
Pwning with libdebug
Respectable pwners in the field find that the ASAP polling mode is particularly useful when writing exploits.
Control Flow Commands

Control flow commands allow you to step through the code, stop execution and resume it at your pleasure.
Stepping

A basic feature of any debugger is the ability to step through the code. libdebug provides several methods to step, some of which will be familiar to users of other debuggers.
Single Step

The step() command executes the instruction at the instruction pointer and stops the process. When possible, it uses the hardware single-step feature of the CPU for better performance.
Function Signature
```python
d.step()
```

Next

The next() command executes the current instruction at the instruction pointer and stops the process. If the instruction is a function call, it will execute the whole function and stop at the instruction following the call. In other debuggers, this command is known as "step over".
Please note that the next() command resumes the execution of the program if the instruction is a function call. This means that the debugger can encounter stopping events in the middle of the function, causing the command to return before the function finishes.
Function Signature
d.next()\n Damn heuristics!
The next() command uses heuristics to determine if the instruction is a function call and to find the stopping point. This means that the command may not work as expected in some cases (e.g. functions called with a jump, non-returning calls).
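To make the "step over" behavior concrete, here is a toy model (not libdebug's implementation) that walks a fake instruction trace, tracking call depth, and returns where a next()-style command would stop:

```python
# Toy model of a "step over": walk a list of mnemonics and stop at the
# instruction following a call, tracking nesting depth.
# Illustration only, not libdebug's internals.

def step_over(trace, start):
    """Return the index where a 'next'-style command would stop."""
    if trace[start] != "call":
        return start + 1  # plain instruction: a single step is enough
    depth, i = 0, start
    while i < len(trace):
        if trace[i] == "call":
            depth += 1
        elif trace[i] == "ret":
            depth -= 1
            if depth == 0:
                return i + 1  # instruction following the original call
        i += 1
    return i

trace = ["mov", "call", "push", "call", "ret", "pop", "ret", "add"]
print(step_over(trace, 0))  # 1: plain instruction
print(step_over(trace, 1))  # 7: steps over the nested calls
```

Note how the model only recognizes explicit call/ret pairs: a function entered with a jump or one that never returns defeats it, which mirrors the heuristic limitations described above.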
Step Until

The step_until() command executes single steps until a specific address is reached. Optionally, you can also limit the steps to a maximum count (default value is -1, meaning no limit).
Function Signature
```python
d.step_until(position, max_steps=-1, file='hybrid')
```

The file parameter can be used to select the addressing mode relative to a backing file. Refer to the memory access section for more information on addressing modes.
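The semantics of step_until() can be sketched with a toy loop over a fake sequence of program-counter values (an illustration only; the real command single-steps the CPU):

```python
# Toy sketch of step_until: single-step until `position` is reached or
# `max_steps` is exhausted (-1 means no limit).

def step_until(pcs, position, max_steps=-1):
    steps = 0
    for pc in pcs:
        if max_steps != -1 and steps >= max_steps:
            return None  # gave up before reaching the target
        steps += 1
        if pc == position:
            return steps  # number of single steps taken
    return None

pcs = [0x1000, 0x1004, 0x1008, 0x100c]
print(step_until(pcs, 0x1008))               # 3: reached in three steps
print(step_until(pcs, 0x1008, max_steps=2))  # None: the limit is hit first
```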
Continuing

The cont() command continues the execution.
Function Signature
```python
d.cont()
```

For example, in the following script, libdebug will not wait for the process to stop before checking d.dead. To change this behavior, you can use the wait() command right after the cont().
```python
from libdebug import debugger

d = debugger("program_that_dies_tragically")

d.run()

d.cont()

if d.dead:
    print("The program is dead!")
```

The wait() Method

The wait() command is likely the most important in libdebug. Loved by most and hated by many, it instructs the debugger to wait for a stopping event before continuing with the execution of the script.
Example
In the following script, libdebug will wait for the process to stop before printing \"provola\".
```python
from libdebug import debugger

d = debugger("program_that_dies_tragically")

d.run()

d.cont()
d.wait()

print("provola")
```

Interrupt

You can manually issue a stopping signal to the program using the interrupt() command. Clearly, this command is issued as soon as it is executed within the script.
Function Signature
```python
d.interrupt()
```

Finish

The finish() command continues execution until the current function returns or a breakpoint is hit. In other debuggers, this command is known as "step out".
Function Signature
d.finish(heuristic='backtrace')\n Damn heuristics!
The finish() command uses heuristics to determine the end of a function. While libdebug allows you to choose the heuristic, it is possible that none of the available options work in some specific cases (e.g., tail-calls, non-returning calls).
The finish() command allows you to choose the heuristic to use. If you don't specify any, the \"backtrace\" heuristic will be used. The following heuristics are available:
- backtrace: uses the return address on the function stack frame to determine the end of the function. This is the default heuristic, but it may fail in case of a broken stack, rare execution flows, and obscure compiler optimizations.
- step-mode: uses repeated single steps to execute instructions until a ret instruction is reached. Nested calls are handled when the calling convention is respected. This heuristic is slower and may fail in case of rare execution flows and obscure compiler optimizations.

Detach and GDB Migration

In libdebug, you can detach from the debugged process and continue execution with the detach() method.
Function Signature
```python
d.detach()
```

Detaching from a running process
Remember that detaching from a process is meant to be used when the process is stopped. If the process is running, the command will wait for a stopping event. To forcibly stop the process, you can use the interrupt() method before migrating.
If at any time during your script you want to take a more traditional approach to debugging, you can seamlessly switch to GDB. This will temporarily detach libdebug from the program and give you control over the program using GDB. Quitting GDB or using the goback command will return control to libdebug.
Function Signature
```python
d.gdb(
    migrate_breakpoints: bool = True,
    open_in_new_process: bool = True,
    blocking: bool = True,
) -> GdbResumeEvent
```

| Parameter | Description |
| --- | --- |
| migrate_breakpoints | If set to True, libdebug will migrate the breakpoints to GDB. |
| open_in_new_process | If set to True, libdebug will open GDB in a new process. |
| blocking | If set to True, libdebug will wait for the user to terminate the GDB session to continue the script. |

Setting blocking to False is useful when you want to continue using the pipe interaction and other parts of your script as you take control of the debugging process.
When blocking is set to False, the gdb() method will return a GdbResumeEvent object. This object can be used to wait for the GDB session to finish before continuing the script.
Example of using non-blocking GDB migration
```python
from libdebug import debugger
d = debugger("program")
pipe = d.run()

# Reach interesting point in the program
[...]

gdb_event = d.gdb(blocking=False)

pipe.sendline(b"dump interpret")

with open("dump.bin", "r") as f:
    pipe.send(f.read())

gdb_event.join()  # wait for the GDB session to finish
```

Please consider a few requirements when opening GDB in a new process. For this mode to work, libdebug needs to know which terminal emulator you are using. If not set, libdebug will try to detect this automatically. In some cases, detection may fail. You can manually set the terminal command in libcontext. If, instead of opening GDB in a new terminal window, you want to use the current terminal, you can simply set the open_in_new_process parameter to False.
Example of setting the terminal with tmux
```python
from libdebug import libcontext

libcontext.terminal = ['tmux', 'splitw', '-h']
```

Migrating from a running process
Remember that GDB Migration is meant to be used when the process is stopped. If the process is running, the command will wait for a stopping event. To forcibly stop the process, you can use the interrupt() method before migrating.
If you are finished working with a Debugger object and wish to deallocate it, you can terminate it using the terminate() command.
Function Signature
```python
d.terminate()
```

What happens to the running process?
When you terminate a Debugger object, the process is forcibly killed. If you wish to detach from the process and let it continue executing before terminating the debugger, you should use the detach() command first.
The default behavior in libdebug is to kill the debugged process when the script exits. This is done to prevent the process from running indefinitely if the debugging script terminates or you forget to kill it manually. When creating a Debugger object, you can set the kill_on_exit attribute to False to prevent this behavior:
```python
from libdebug import debugger

d = debugger("test", kill_on_exit=False)
```

You can also change this attribute in an existing Debugger object at runtime:
```python
d.kill_on_exit = False
```

Behavior when attaching to a process
When debugging is initiated by attaching to an existing process, the kill_on_exit policy is enforced in the same way as when starting a new process.
You can kill the process any time the process is stopped using the kill() method:
Function Signature
```python
d.kill()
```

The method sends a SIGKILL signal to the process, which terminates it immediately. If the process is already dead, libdebug will throw an exception. When multiple threads are running, the kill() method will kill all threads under the parent process.
Process Stop
The kill() method will not stop a running process, unless libdebug is operating in ASAP Mode. Just like other commands, in the default mode, the kill() method will wait for the process to stop before executing.
You can check if the process is dead using the dead property:
```python
if not d.dead:
    print("The process is not dead")
else:
    print("The process is dead")
```

The running property
The Debugger object also exposes the running property. This is not the opposite of dead. The running property is True when the process is not stopped and False otherwise. If execution was stopped by a stopping event, the running property will be equal to False. However, in this case the process can still be alive.
Has your process passed away unexpectedly? We are sorry to hear that. If your process is indeed defunct, you can access the exit code and signal using exit_code and exit_signal. When there is no valid exit code or signal, these properties will return None.
```python
if d.dead:
    print(f"The process exited with code {d.exit_code}")

if d.dead:
    print(f"The process exited with signal {d.exit_signal}")
```

Zombie Processes and Threads

When a process dies, it becomes a zombie process. This means that the process has terminated, but its parent process has not yet read its exit status. In libdebug, you can check if the process is a zombie using the zombie property of the Debugger object. This is particularly relevant in multi-threaded applications. To read more about this, check the dedicated section on zombie processes.
Example Code
```python
if d.zombie:
    print("The process is a zombie")
```

libdebug 101

Welcome to libdebug! When writing a script to debug a program, the first step is to create a Debugger object. This object will be your main interface for debugging commands.
```python
from libdebug import debugger

debugger = debugger(argv=["./program", "arg1", "arg2"])
```

argv can either be a string (the name/path of the executable) or an array corresponding to the argument vector of the execution.

Am I already debugging?
Creating a Debugger object will not start the execution automatically. You can reuse the same debugger to iteratively run multiple instances of the program. This is particularly useful for smart bruteforcing or fuzzing scripts.
Re-initializing the debugger on each run is not required and can be expensive.
To run the executable, refer to Running an Executable
Environment

Just as you would expect, you can also pass environment variables to the program using the env parameter. Here, the variables are passed as a string-string dictionary.
```python
from libdebug import debugger

debugger = debugger("test", env={"LD_PRELOAD": "musl_libc.so"})
```

Address Space Layout Randomization (ASLR)

Modern operating system kernels implement mitigations against predictable addresses in binary exploitation scenarios. One such feature is ASLR, which randomizes the base address of mapped virtual memory pages (e.g., binary, libraries, stack). When debugging, this feature can become a nuisance for the user.
By default, libdebug keeps ASLR enabled. The debugger aslr parameter can be used to change this behavior.
```python
from libdebug import debugger

debugger = debugger("test", aslr=False)
```

Binary Entry Point

When a child process is spawned on the Linux kernel through the ptrace system call, it is possible to trace it as soon as the loader has set up your executable. Debugging these first instructions inside the loader library is generally uninteresting.
For this reason, the default behavior for libdebug is to continue until the binary entry point is reached. When you need to start debugging from the very beginning, you can simply disable this behavior in the following way:
The binary entry point is the _start / __rt_entry symbol in your binary executable. This function is the initial stub that calls the main() function in your executable, through a call to the standard library of your system (e.g., __libc_start_main, __rt_lib_init).

```python
from libdebug import debugger

debugger = debugger("test", continue_to_binary_entrypoint=False)
```

What the hell are you debugging?
Please note that this feature assumes the binary is well-formed. If the ELF header is corrupt, the binary entrypoint will not be resolved correctly. As such, setting this parameter to False is a good practice when you don't want libdebug to rely on this information.
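For intuition on why a well-formed header matters: the entry point of a 64-bit ELF binary lives in the e_entry field at offset 0x18 of the ELF header. The sketch below (our own illustration, not libdebug code) builds a minimal fake header and reads that field:

```python
# Hedged sketch: reading the entry point from an ELF64 header.
# We construct a fake 64-byte header rather than opening a real binary.
import struct

def elf64_entry(header: bytes) -> int:
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    # e_entry: 8-byte little-endian field at offset 0x18 in an ELF64 header
    return struct.unpack_from("<Q", header, 0x18)[0]

fake = bytearray(64)
fake[:4] = b"\x7fELF"
struct.pack_into("<Q", fake, 0x18, 0x401000)

print(hex(elf64_entry(bytes(fake))))  # 0x401000
```

If this field is garbage (a corrupt header), any consumer that trusts it will continue to a bogus address, which is exactly why disabling the feature can be safer.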
The Debugger object accepts many more parameters.
Function Signature
```python
debugger(
    argv=[],
    aslr=True,
    env=None,
    escape_antidebug=False,
    continue_to_binary_entrypoint=True,
    auto_interrupt_on_command=False,
    fast_memory=False,
    kill_on_exit=True,
    follow_children=True
) -> Debugger
```

| Parameter | Type | Description |
| --- | --- | --- |
| argv | str \| list[str] | Path to the binary or argv list |
| aslr | bool | Whether to enable ASLR. Defaults to True. |
| env | dict[str, str] | The environment variables to use. Defaults to the same environment of the parent process. |
| escape_antidebug | bool | Whether to automatically attempt to patch antidebugger detectors based on ptrace. |
| continue_to_binary_entrypoint | bool | Whether to automatically continue to the binary entrypoint. |
| auto_interrupt_on_command | bool | Whether to run libdebug in ASAP Mode. |
| fast_memory | bool | Whether to use a faster memory reading method. Defaults to False. |
| kill_on_exit | bool | Whether to kill the debugged process when the debugger exits. Defaults to True. |
| follow_children | bool | Whether to automatically monitor child processes. Defaults to True. |

Return value: Debugger, the debugger object.

Memory Access

In libdebug, memory access is performed via the memory attribute of the Debugger object or the Thread Context. When reading from memory, a bytes-like object is returned. The following methods are available:
Access a single byte of memory by providing the address as an integer.

```python
d.memory[0x1000]
```

Access a range of bytes by providing the start and end addresses as integers.

```python
d.memory[0x1000:0x1010]
```

Access a range of bytes by providing the base address and length as integers.

```python
d.memory[0x1000, 0x10]
```

Access memory using a symbol name.

```python
d.memory["function", 0x8]
```

When specifying a symbol, you can also provide an offset. Contrary to what happens in GDB, the offset is always interpreted as hexadecimal.

```python
d.memory["function+a8"]
```

Access a range of bytes using a symbol name.

```python
d.memory["function":"function+0f"]
```
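To make the hexadecimal-offset rule concrete, here is a tiny illustrative parser; parse_symbol is our own hypothetical helper, not a libdebug API:

```python
# Hypothetical helper showing how a "symbol+offset" string could be split,
# with the offset parsed as hexadecimal (as the documentation specifies).

def parse_symbol(expr: str) -> tuple[str, int]:
    name, _, off = expr.partition("+")
    return name, int(off, 16) if off else 0

print(parse_symbol("function+a8"))  # ('function', 168): a8 is hex, not decimal
print(parse_symbol("function"))     # ('function', 0)
```

Note that "function+10" would resolve to offset 16, not 10, which is the usual source of confusion for GDB users.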
Please note that, unless otherwise specified, symbols are resolved in the debugged binary only. To resolve symbols in shared libraries, you need to indicate it in the third parameter of the function.
d.memory[\"__libc_start_main\", 0x8, \"libc\"]\n Writing to memory works similarly. You can write a bytes-like object to memory using the same addressing methods:
```python
d.memory[d.rsp, 0x10] = b"AAAAAAABC"
d.memory["main_arena", 16, "libc"] = b"12345678"
```
When writing to memory, slices and length are ignored in favor of the length of the specified bytes-like object.
In the following example, only 4 bytes are written:
```python
d.memory["main_arena", 50] = b"\x0a\xeb\x12\xfc"
```

Absolute and Relative Addressing

Just like with symbols, memory addresses can also be accessed relative to a certain file base. libdebug uses "hybrid" addressing by default. This means it first attempts to resolve addresses as absolute. If the address does not correspond to an absolute one, it considers it relative to the base of the binary.
You can use the third parameter of the memory access method to select the file you want to use as base (e.g., libc, ld, binary). If you want to force libdebug to use absolute addressing, you can specify \"absolute\" instead.
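The hybrid resolution logic can be sketched as follows; the map boundaries below are made up for illustration, and this is a toy model rather than libdebug's implementation:

```python
# Toy model of hybrid addressing: try the address as absolute first; if it
# falls inside no known map, treat it as relative to the binary's base.

MAPS = {
    "binary": (0x555555554000, 0x555555560000),  # fake map ranges
    "libc":   (0x7ffff7d80000, 0x7ffff7f80000),
}

def resolve(address: int, file: str = "hybrid") -> int:
    if file == "absolute":
        return address
    if file == "hybrid":
        for lo, hi in MAPS.values():
            if lo <= address < hi:
                return address            # already a valid absolute address
        return MAPS["binary"][0] + address  # fall back to binary-relative
    return MAPS[file][0] + address        # explicit backing file

print(hex(resolve(0x1000)))              # small offset -> binary base + 0x1000
print(hex(resolve(0x7ffff7d80200)))      # valid absolute address, kept as-is
print(hex(resolve(0x1000, file="libc"))) # libc base + 0x1000
```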
Examples of relative and absolute addressing
```python
# Absolute addressing
d.memory[0x7ffff7fcb200, 0x10, "absolute"]

# Hybrid addressing
d.memory[0x1000, 0x10, "hybrid"]

# Relative addressing
d.memory[0x1000, 0x10, "binary"]
d.memory[0x1000, 0x10, "libc"]
```

Searching inside Memory

The memory attribute of the Debugger object also allows you to search for specific values in the memory of the process. You can search for integers, strings, or bytes-like objects.
Function Signature
```python
d.memory.find(
    value: int | bytes | str,
    file: str = "all",
    start: int | None = None,
    end: int | None = None,
) -> list[int]
```
| Argument | Type | Description |
| --- | --- | --- |
| value | int \| bytes \| str | The value to search for. |
| file | str | The backing file to search in (e.g., binary, libc, stack). |
| start | int (optional) | The start address of the search (works with both relative and absolute). |
| end | int (optional) | The end address of the search (works with both relative and absolute). |

Returns:
| Return | Type | Description |
| --- | --- | --- |
| Addresses | list[int] | List of memory addresses where the value was found. |

Usage Example
```python
binsh_string_addr = d.memory.find("/bin/sh", file="libc")

value_address = d.memory.find(0x1234, file="stack", start=d.regs.rsp)
```

Searching Pointers

The memory attribute of the Debugger object also allows you to search for values in a source memory map that are pointers to another memory map. One use case for this would be identifying potential leaks of memory addresses when libdebug is used for exploitation tasks.
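Conceptually, such a pointer search scans the source map word by word and keeps every word whose value lands inside the target map. The pure-Python toy below illustrates the idea on a fake stack buffer (it is not libdebug's implementation):

```python
# Toy pointer scan: walk a source buffer with a given step and report
# (address, value) pairs whose 8-byte little-endian value falls inside
# the target map's range. Maps and contents below are made up.
import struct

def scan_pointers(src: bytes, src_base: int, target: range, step: int = 1):
    hits = []
    for off in range(0, len(src) - 7, step):
        value = struct.unpack_from("<Q", src, off)[0]
        if value in target:
            hits.append((src_base + off, value))
    return hits

heap = range(0x55550000, 0x55560000)                  # fake heap map
stack = struct.pack("<QQ", 0x55551234, 0xdeadbeef)    # fake stack contents

for addr, val in scan_pointers(stack, 0x7ffc0000, heap, step=8):
    print(hex(addr), "->", hex(val))  # 0x7ffc0000 -> 0x55551234
```

A step of 8 checks only aligned words; step=1 (the default in find_pointers) also catches misaligned pointers at the cost of more work.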
Function Signature
```python
def find_pointers(
    where: int | str = "*",
    target: int | str = "*",
    step: int = 1,
) -> list[tuple[int, int]]:
```
| Argument | Type | Description |
| --- | --- | --- |
| where | int \| str | The memory map where we want to search for references. Defaults to "*", which means all memory maps. |
| target | int \| str | The memory map whose pointers we want to find. Defaults to "*", which means all memory maps. |
| step | int | The interval step size while iterating over the memory buffer. Defaults to 1. |

Returns:
| Return | Type | Description |
| --- | --- | --- |
| Pointers | list[tuple[int, int]] | A list of tuples containing the address where the pointer was found and the pointer itself. |

Usage Example
```python
pointers = d.memory.find_pointers("stack", "heap")

for src, dst in pointers:
    print(f"Heap leak to {dst} found at {src}")
```

Fast and Slow Memory Access

libdebug supports two different methods to access memory on Linux, controlled by the fast_memory parameter of the Debugger object. The two methods are:
- fast_memory=False uses the ptrace system call interface, requiring a context switch from user space to kernel space for each architectural word-size read.
- fast_memory=True reduces the access latency by relying on Linux's procfs, which contains a virtual file as an interface to the process memory.

As of version 0.8 (Chutoro Nigiri), fast_memory=True is the default. The following examples show how to change the memory access method when creating the Debugger object or at runtime.
```python
d = debugger("test", fast_memory=False)
```

```python
d.fast_memory = False
```

Register Access

libdebug offers a simple register access interface for supported architectures. Registers are accessible through the regs attribute of the Debugger object or the Thread Context.
Multithreading
In multi-threaded debugging, the regs attribute of the Debugger object will return the registers of the main thread.
The following is an example of how to interact with the RAX register in a debugger object on AMD64:
Reading read_value = d.regs.rax Writing d.regs.rax = read_value + 1 Note that register values are read and written as Python integers. This is true for all registers except the floating-point ones, which are read and written according to their type (e.g., as Python floats). To avoid confusion, we list the available registers and their types below. Related sub-registers are also available for access.
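The relationship between a register and its related sub-registers is plain bit slicing. The snippet below models it in pure Python (it is not libdebug code; the names mirror the x86 aliases for illustration):

```python
# A 64-bit RAX value; the related registers are fixed-width views of it.
RAX = 0x1122334455667788

EAX = RAX & 0xFFFFFFFF   # low 32 bits
AX = RAX & 0xFFFF        # low 16 bits
AL = RAX & 0xFF          # low 8 bits
AH = (RAX >> 8) & 0xFF   # bits 8..15

print(hex(EAX), hex(AX), hex(AL), hex(AH))
```

Writing to a sub-register in libdebug updates the corresponding slice of the full register in the same way.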
AMD64i386AArch64 Register Type Related Description General Purpose RAX Integer EAX, AX, AH, AL Accumulator register RBX Integer EBX, BX, BH, BL Base register RCX Integer ECX, CX, CH, CL Counter register RDX Integer EDX, DX, DH, DL Data register RSI Integer ESI, SI Source index for string operations RDI Integer EDI, DI Destination index for string operations RBP Integer EBP, BP Base pointer (frame pointer) RSP Integer ESP, SP Stack pointer R8 Integer R8D, R8W, R8B General-purpose register R9 Integer R9D, R9W, R9B General-purpose register R10 Integer R10D, R10W, R10B General-purpose register R11 Integer R11D, R11W, R11B General-purpose register R12 Integer R12D, R12W, R12B General-purpose register R13 Integer R13D, R13W, R13B General-purpose register R14 Integer R14D, R14W, R14B General-purpose register R15 Integer R15D, R15W, R15B General-purpose register RIP Integer EIP Instruction pointer Flags EFLAGS Integer Flags register Segment Registers CS Integer Code segment DS Integer Data segment ES Integer Extra segment FS Integer Additional segment GS Integer Additional segment SS Integer Stack segment FS_BASE Integer FS segment base address GS_BASE Integer GS segment base address Vector Registers XMM0 Integer Lower 128 bits of YMM0/ZMM0 XMM1 Integer Lower 128 bits of YMM1/ZMM1 XMM2 Integer Lower 128 bits of YMM2/ZMM2 XMM3 Integer Lower 128 bits of YMM3/ZMM3 XMM4 Integer Lower 128 bits of YMM4/ZMM4 XMM5 Integer Lower 128 bits of YMM5/ZMM5 XMM6 Integer Lower 128 bits of YMM6/ZMM6 XMM7 Integer Lower 128 bits of YMM7/ZMM7 XMM8 Integer Lower 128 bits of YMM8/ZMM8 XMM9 Integer Lower 128 bits of YMM9/ZMM9 XMM10 Integer Lower 128 bits of YMM10/ZMM10 XMM11 Integer Lower 128 bits of YMM11/ZMM11 XMM12 Integer Lower 128 bits of YMM12/ZMM12 XMM13 Integer Lower 128 bits of YMM13/ZMM13 XMM14 Integer Lower 128 bits of YMM14/ZMM14 XMM15 Integer Lower 128 bits of YMM15/ZMM15 YMM0 Integer 256-bit AVX extension of XMM0 YMM1 Integer 256-bit AVX extension of XMM1 YMM2 Integer 256-bit AVX 
extension of XMM2 YMM3 Integer 256-bit AVX extension of XMM3 YMM4 Integer 256-bit AVX extension of XMM4 YMM5 Integer 256-bit AVX extension of XMM5 YMM6 Integer 256-bit AVX extension of XMM6 YMM7 Integer 256-bit AVX extension of XMM7 YMM8 Integer 256-bit AVX extension of XMM8 YMM9 Integer 256-bit AVX extension of XMM9 YMM10 Integer 256-bit AVX extension of XMM10 YMM11 Integer 256-bit AVX extension of XMM11 YMM12 Integer 256-bit AVX extension of XMM12 YMM13 Integer 256-bit AVX extension of XMM13 YMM14 Integer 256-bit AVX extension of XMM14 YMM15 Integer 256-bit AVX extension of XMM15 ZMM0 Integer 512-bit AVX-512 extension of XMM0 ZMM1 Integer 512-bit AVX-512 extension of XMM1 ZMM2 Integer 512-bit AVX-512 extension of XMM2 ZMM3 Integer 512-bit AVX-512 extension of XMM3 ZMM4 Integer 512-bit AVX-512 extension of XMM4 ZMM5 Integer 512-bit AVX-512 extension of XMM5 ZMM6 Integer 512-bit AVX-512 extension of XMM6 ZMM7 Integer 512-bit AVX-512 extension of XMM7 ZMM8 Integer 512-bit AVX-512 extension of XMM8 ZMM9 Integer 512-bit AVX-512 extension of XMM9 ZMM10 Integer 512-bit AVX-512 extension of XMM10 ZMM11 Integer 512-bit AVX-512 extension of XMM11 ZMM12 Integer 512-bit AVX-512 extension of XMM12 ZMM13 Integer 512-bit AVX-512 extension of XMM13 ZMM14 Integer 512-bit AVX-512 extension of XMM14 ZMM15 Integer 512-bit AVX-512 extension of XMM15 Floating Point (Legacy x87) ST(0)-ST(7) Floating Point x87 FPU data registers MM0-MM7 Integer MMX registers Register Type Related Description General Purpose EAX Integer AX, AH, AL Accumulator register EBX Integer BX, BH, BL Base register ECX Integer CX, CH, CL Counter register EDX Integer DX, DH, DL Data register ESI Integer SI Source index for string operations EDI Integer DI Destination index for string operations EBP Integer BP Base pointer (frame pointer) ESP Integer SP Stack pointer EIP Integer IP Instruction pointer Flags EFLAGS Integer Flags register Segment Registers CS Integer Code segment DS Integer Data segment ES Integer 
Extra segment FS Integer Additional segment GS Integer Additional segment SS Integer Stack segment Floating Point Registers ST(0)-ST(7) Floating Point x87 FPU data registers Vector Registers XMM0 Integer Lower 128 bits of YMM0/ZMM0 XMM1 Integer Lower 128 bits of YMM1/ZMM1 XMM2 Integer Lower 128 bits of YMM2/ZMM2 XMM3 Integer Lower 128 bits of YMM3/ZMM3 XMM4 Integer Lower 128 bits of YMM4/ZMM4 XMM5 Integer Lower 128 bits of YMM5/ZMM5 XMM6 Integer Lower 128 bits of YMM6/ZMM6 XMM7 Integer Lower 128 bits of YMM7/ZMM7 YMM0 Integer 256-bit AVX extension of XMM0 YMM1 Integer 256-bit AVX extension of XMM1 YMM2 Integer 256-bit AVX extension of XMM2 YMM3 Integer 256-bit AVX extension of XMM3 YMM4 Integer 256-bit AVX extension of XMM4 YMM5 Integer 256-bit AVX extension of XMM5 YMM6 Integer 256-bit AVX extension of XMM6 YMM7 Integer 256-bit AVX extension of XMM7 Register Type Alias(es) Description General Purpose X0 Integer W0 Function result or argument X1 Integer W1 Function result or argument X2 Integer W2 Function result or argument X3 Integer W3 Function result or argument X4 Integer W4 Function result or argument X5 Integer W5 Function result or argument X6 Integer W6 Function result or argument X7 Integer W7 Function result or argument X8 Integer W8 Indirect result location (also called \"IP0\") X9 Integer W9 Temporary register X10 Integer W10 Temporary register X11 Integer W11 Temporary register X12 Integer W12 Temporary register X13 Integer W13 Temporary register X14 Integer W14 Temporary register X15 Integer W15 Temporary register (also called \"IP1\") X16 Integer W16 Platform Register (often used as scratch) X17 Integer W17 Platform Register (often used as scratch) X18 Integer W18 Platform Register X19 Integer W19 Callee-saved register X20 Integer W20 Callee-saved register X21 Integer W21 Callee-saved register X22 Integer W22 Callee-saved register X23 Integer W23 Callee-saved register X24 Integer W24 Callee-saved register X25 Integer W25 Callee-saved register X26 
Integer W26 Callee-saved register X27 Integer W27 Callee-saved register X28 Integer W28 Callee-saved register X29 Integer W29, FP Frame pointer X30 Integer W30, LR Link register (return address) XZR Integer WZR, ZR Zero register (always reads as zero) SP Integer Stack pointer PC Integer Program counter Flags PSTATE Integer Processor state in exception handling Vector Registers (SIMD/FP) V0 Integer Vector or scalar register V1 Integer Vector or scalar register V2 Integer Vector or scalar register V3 Integer Vector or scalar register V4 Integer Vector or scalar register V5 Integer Vector or scalar register V6 Integer Vector or scalar register V7 Integer Vector or scalar register V8 Integer Vector or scalar register V9 Integer Vector or scalar register V10 Integer Vector or scalar register V11 Integer Vector or scalar register V12 Integer Vector or scalar register V13 Integer Vector or scalar register V14 Integer Vector or scalar register V15 Integer Vector or scalar register V16 Integer Vector or scalar register V17 Integer Vector or scalar register V18 Integer Vector or scalar register V19 Integer Vector or scalar register V20 Integer Vector or scalar register V21 Integer Vector or scalar register V22 Integer Vector or scalar register V23 Integer Vector or scalar register V24 Integer Vector or scalar register V25 Integer Vector or scalar register V26 Integer Vector or scalar register V27 Integer Vector or scalar register V28 Integer Vector or scalar register V29 Integer Vector or scalar register V30 Integer Vector or scalar register V31 Integer Vector or scalar register Q0 Integer Vector or scalar register Q1 Integer Vector or scalar register Q2 Integer Vector or scalar register Q3 Integer Vector or scalar register Q4 Integer Vector or scalar register Q5 Integer Vector or scalar register Q6 Integer Vector or scalar register Q7 Integer Vector or scalar register Q8 Integer Vector or scalar register Q9 Integer Vector or scalar register Q10 Integer Vector or scalar 
register Q11 Integer Vector or scalar register Q12 Integer Vector or scalar register Q13 Integer Vector or scalar register Q14 Integer Vector or scalar register Q15 Integer Vector or scalar register Q16 Integer Vector or scalar register Q17 Integer Vector or scalar register Q18 Integer Vector or scalar register Q19 Integer Vector or scalar register Q20 Integer Vector or scalar register Q21 Integer Vector or scalar register Q22 Integer Vector or scalar register Q23 Integer Vector or scalar register Q24 Integer Vector or scalar register Q25 Integer Vector or scalar register Q26 Integer Vector or scalar register Q27 Integer Vector or scalar register Q28 Integer Vector or scalar register Q29 Integer Vector or scalar register Q30 Integer Vector or scalar register Q31 Integer Vector or scalar register D0 Integer Vector or scalar register D1 Integer Vector or scalar register D2 Integer Vector or scalar register D3 Integer Vector or scalar register D4 Integer Vector or scalar register D5 Integer Vector or scalar register D6 Integer Vector or scalar register D7 Integer Vector or scalar register D8 Integer Vector or scalar register D9 Integer Vector or scalar register D10 Integer Vector or scalar register D11 Integer Vector or scalar register D12 Integer Vector or scalar register D13 Integer Vector or scalar register D14 Integer Vector or scalar register D15 Integer Vector or scalar register D16 Integer Vector or scalar register D17 Integer Vector or scalar register D18 Integer Vector or scalar register D19 Integer Vector or scalar register D20 Integer Vector or scalar register D21 Integer Vector or scalar register D22 Integer Vector or scalar register D23 Integer Vector or scalar register D24 Integer Vector or scalar register D25 Integer Vector or scalar register D26 Integer Vector or scalar register D27 Integer Vector or scalar register D28 Integer Vector or scalar register D29 Integer Vector or scalar register D30 Integer Vector or scalar register D31 Integer Vector or 
scalar register S0 Integer Vector or scalar register S1 Integer Vector or scalar register S2 Integer Vector or scalar register S3 Integer Vector or scalar register S4 Integer Vector or scalar register S5 Integer Vector or scalar register S6 Integer Vector or scalar register S7 Integer Vector or scalar register S8 Integer Vector or scalar register S9 Integer Vector or scalar register S10 Integer Vector or scalar register S11 Integer Vector or scalar register S12 Integer Vector or scalar register S13 Integer Vector or scalar register S14 Integer Vector or scalar register S15 Integer Vector or scalar register S16 Integer Vector or scalar register S17 Integer Vector or scalar register S18 Integer Vector or scalar register S19 Integer Vector or scalar register S20 Integer Vector or scalar register S21 Integer Vector or scalar register S22 Integer Vector or scalar register S23 Integer Vector or scalar register S24 Integer Vector or scalar register S25 Integer Vector or scalar register S26 Integer Vector or scalar register S27 Integer Vector or scalar register S28 Integer Vector or scalar register S29 Integer Vector or scalar register S30 Integer Vector or scalar register S31 Integer Vector or scalar register H0 Integer Vector or scalar register H1 Integer Vector or scalar register H2 Integer Vector or scalar register H3 Integer Vector or scalar register H4 Integer Vector or scalar register H5 Integer Vector or scalar register H6 Integer Vector or scalar register H7 Integer Vector or scalar register H8 Integer Vector or scalar register H9 Integer Vector or scalar register H10 Integer Vector or scalar register H11 Integer Vector or scalar register H12 Integer Vector or scalar register H13 Integer Vector or scalar register H14 Integer Vector or scalar register H15 Integer Vector or scalar register H16 Integer Vector or scalar register H17 Integer Vector or scalar register H18 Integer Vector or scalar register H19 Integer Vector or scalar register H20 Integer Vector or 
scalar register H21 Integer Vector or scalar register H22 Integer Vector or scalar register H23 Integer Vector or scalar register H24 Integer Vector or scalar register H25 Integer Vector or scalar register H26 Integer Vector or scalar register H27 Integer Vector or scalar register H28 Integer Vector or scalar register H29 Integer Vector or scalar register H30 Integer Vector or scalar register H31 Integer Vector or scalar register B0 Integer Vector or scalar register B1 Integer Vector or scalar register B2 Integer Vector or scalar register B3 Integer Vector or scalar register B4 Integer Vector or scalar register B5 Integer Vector or scalar register B6 Integer Vector or scalar register B7 Integer Vector or scalar register B8 Integer Vector or scalar register B9 Integer Vector or scalar register B10 Integer Vector or scalar register B11 Integer Vector or scalar register B12 Integer Vector or scalar register B13 Integer Vector or scalar register B14 Integer Vector or scalar register B15 Integer Vector or scalar register B16 Integer Vector or scalar register B17 Integer Vector or scalar register B18 Integer Vector or scalar register B19 Integer Vector or scalar register B20 Integer Vector or scalar register B21 Integer Vector or scalar register B22 Integer Vector or scalar register B23 Integer Vector or scalar register B24 Integer Vector or scalar register B25 Integer Vector or scalar register B26 Integer Vector or scalar register B27 Integer Vector or scalar register B28 Integer Vector or scalar register B29 Integer Vector or scalar register B30 Integer Vector or scalar register B31 Integer Vector or scalar registerHardware Support
libdebug only exposes registers that are available on your CPU model. For AMD64, the list of available AVX registers is determined by checking the CPU capabilities. If you believe your CPU supports AVX registers but they are not available, we encourage you to open an issue with your hardware details.
","boost":4},{"location":"basics/register_access/#filtering-registers","title":"Filtering Registers","text":"The regs field of the Debugger object or the Thread Context can also be used to filter registers with specific values.
Function Signature
d.regs.filter(value: float) -> list[str]:\n The filtering routine will look for the given value in both integer and floating point registers.
Example of Filtering Registers
d.regs.rax = 0x1337\n\n# Filter the value 0x1337 in the registers\nresults = d.regs.filter(0x1337)\nprint(f\"Found in: {results}\")\n","boost":4},{"location":"basics/running_an_executable/","title":"Running an Executable","text":"You have created your first debugger object, and now you want to run the executable. Calling the run() method will spawn a new child process and prepare it for the execution of your binary.
from libdebug import debugger\n\nd = debugger(\"program\")\nd.run()\n At this point, the process execution is stopped, waiting for your commands. A few things to keep in mind
Breakpoints and other stopping events can only be set after calling d.run(); you cannot set breakpoints before it. When execution is resumed, chances are that your process will need to take input and produce output. To interact with the standard input and output of the process, you can use the PipeManager returned by the run() function.
from libdebug import debugger\n\nd = debugger(\"program\")\npipe = d.run()\n\nd.cont()\nprint(pipe.recvline().decode())\nd.wait()\n All pipe receive-like methods have a timeout parameter that you can set. The default value, timeout_default, can be set globally as a parameter of the PipeManager object. By default, this value is set to 2 seconds.
Changing the global timeout
pipe = d.run()\n\npipe.timeout_default = 10 # (1)!\n You can interact with the process's pipe manager using the following methods:
Method Descriptionrecv Receives at most numb bytes from the target's stdout.Parameters:- numb (int) \u00a0\u00a0\u00a0 [default = 4096]- timeout (int) \u00a0\u00a0\u00a0 [default = timeout_default] recverr Receives at most numb bytes from the target's stderr.Parameters:- numb (int) \u00a0\u00a0\u00a0 [default = 4096]- timeout (int) \u00a0\u00a0\u00a0 [default = timeout_default] recvuntil Receives data from stdout until a specified delimiter is encountered for a certain number of occurrences.Parameters:- delims (bytes)- occurrences (int) \u00a0\u00a0\u00a0 [default = 1]- drop (bool) \u00a0\u00a0\u00a0 [default = False]- timeout (int) \u00a0\u00a0\u00a0 [default = timeout_default]- optional (bool) \u00a0\u00a0\u00a0 [default = False] recverruntil Receives data from stderr until a specified delimiter is encountered for a certain number of occurrences.Parameters:- delims (bytes)- occurrences (int) \u00a0\u00a0\u00a0 [default = 1]- drop (bool) \u00a0\u00a0\u00a0 [default = False]- timeout (int) \u00a0\u00a0\u00a0 [default = timeout_default]- optional (bool) \u00a0\u00a0\u00a0 [default = False] recvline Receives numlines lines from the target's stdout.Parameters:- numlines (int) \u00a0\u00a0\u00a0 [default = 1]- drop (bool) \u00a0\u00a0\u00a0 [default = True]- timeout (int) \u00a0\u00a0\u00a0 [default = timeout_default]- optional (bool) \u00a0\u00a0\u00a0 [default = False] recverrline Receives numlines lines from the target's stderr.Parameters:- numlines (int) \u00a0\u00a0\u00a0 [default = 1]- drop (bool) \u00a0\u00a0\u00a0 [default = True]- timeout (int) \u00a0\u00a0\u00a0 [default = timeout_default]- optional (bool) \u00a0\u00a0\u00a0 [default = False] send Sends data to the target's stdin.Parameters:- data (bytes) sendafter Sends data after receiving a specified number of occurrences of a delimiter from stdout.Parameters:- delims (bytes)- data (bytes)- occurrences (int) \u00a0\u00a0\u00a0 [default = 1]- drop (bool) \u00a0\u00a0\u00a0 [default = False]- timeout (int) 
\u00a0\u00a0\u00a0 [default = timeout_default]- optional (bool) \u00a0\u00a0\u00a0 [default = False] sendline Sends data followed by a newline to the target's stdin.Parameters:- data (bytes) sendlineafter Sends a line of data after receiving a specified number of occurrences of a delimiter from stdout.Parameters:- delims (bytes)- data (bytes)- occurrences (int) \u00a0\u00a0\u00a0 [default = 1]- drop (bool) \u00a0\u00a0\u00a0 [default = False]- timeout (int) \u00a0\u00a0\u00a0 [default = timeout_default]- optional (bool) \u00a0\u00a0\u00a0 [default = False] close Closes the connection to the target. interactive Enters interactive mode, allowing manual send/receive operations with the target. Read more in the dedicated section.Parameters:- prompt (str) \u00a0\u00a0\u00a0 [default = \"$ \"]- auto_quit (bool) \u00a0\u00a0\u00a0 [default = False] When process is stopped
When the process is stopped, the PipeManager will not be able to receive new (unbuffered) data from the target. For this reason, the API includes a parameter called optional.
When set to True, libdebug will not necessarily expect to receive data from the process when it is stopped. When set to False, any recv-like instruction (including sendafter and sendlineafter) will fail with an exception when the process is not running.
Operations on stdin like send and sendline are not affected by this limitation, since the kernel will buffer the data until the process is resumed.
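The reason stdin writes are safe while the tracee is stopped is ordinary pipe buffering: the kernel holds written data (up to the pipe capacity) until the reader consumes it. A minimal illustration with a plain pipe, not libdebug code:

```python
import os

# Create a pipe; the write end plays the role of the tracee's stdin.
r, w = os.pipe()

# Write while "nobody" is reading, like send() to a stopped process.
os.write(w, b"buffered while 'stopped'\n")
os.close(w)

# The reader catches up later and still sees the data intact.
data = os.read(r, 4096)
os.close(r)
print(data)
```

Receive-like operations have no such buffer to fall back on, which is why they need the optional parameter instead.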
The PipeManager contains a method called interactive() that allows you to directly interact with the process's standard I/O. This method will print characters from standard output and error and read your inputs, letting you interact naturally with the process. The interactive() method is blocking, so the execution of the script will wait for the user to terminate the interactive session. To quit an interactive session, you can press Ctrl+C or Ctrl+D.
Function Signature
pipe.interactive(prompt: str = prompt_default, auto_quit: bool = False):\n The prompt parameter sets the line prefix in the terminal (e.g. \"$ \" and \"> \" will produce $ cat flag and > cat flag respectively). By default, it is set to \"$ \". The auto_quit parameter, when set to True, will automatically quit the interactive session when the process is stopped.
If any of the file descriptors of standard input, output, or error are closed, a warning will be printed.
","boost":4},{"location":"basics/running_an_executable/#attaching-to-a-running-process","title":"Attaching to a Running Process","text":"If you want to attach to a running process instead of spawning a child, you can use the attach() method in the Debugger object. This method will attach to the process with the specified PID.
from libdebug import debugger\n\nd = debugger(\"test\")\n\npid = 1234\n\nd.attach(pid)\n The process will stop upon attachment, waiting for your commands.
Ptrace Scope
libdebug uses the ptrace system call to interact with the process. For security reasons, this system call is limited by the kernel according to a ptrace_scope parameter. Different systems have different default values for this parameter. If the ptrace system call is not allowed, the attach() method will raise an exception notifying you of this issue.
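You can inspect the current restriction level yourself. On systems with the Yama LSM it is exposed under /proc/sys/kernel/yama/ptrace_scope (0 = unrestricted, 1 = restricted to descendants, 2 = admin-only, 3 = disabled); the sketch below assumes that path and falls back gracefully when Yama is not enabled:

```python
from pathlib import Path

scope_file = Path("/proc/sys/kernel/yama/ptrace_scope")
if scope_file.exists():
    # Yama is enabled; the file holds a single digit 0-3.
    scope = int(scope_file.read_text())
    print(f"ptrace_scope = {scope}")
else:
    # Yama not present; ptrace is not restricted by this knob.
    scope = None
    print("Yama ptrace_scope not present")
```

A value of 0 (or running the script with the appropriate privileges) is what attach() typically needs to succeed.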
By default, libdebug redirects the standard input, output, and error of the process to pipes. This is how you can interact with these file descriptors using I/O commands. If you want to disable this behavior, you can set the redirect_pipes parameter of the run() method to False.
Usage
d.run(redirect_pipes=False)\n When set to False, the standard input, output, and error of the process will not be redirected to pipes. This means that you will not be able to interact with the process using the PipeManager object, and libdebug will act as a transparent proxy between the executable and its standard I/O.
Currently, libdebug only supports the GNU/Linux Operating System.
","boost":4},{"location":"basics/supported_systems/#architectures","title":"Architectures","text":"Architecture Alias Support x86_64 AMD64 Stable i386 over AMD64 32-bit compatibility mode Alpha i386 IA-32 Alpha ARM 64-bit AArch64 Beta ARM 32-bit ARM32 Not Supported Forcing a specific architecture
If for any reason you need to force libdebug to use a specific architecture (e.g., corrupted ELF), you can do so by setting the arch parameter in the Debugger object. For example, to force the debugger to use the x86_64 architecture, you can use the following code:
from libdebug import debugger\n\nd = debugger(\"program\", ...)\n\nd.arch = \"amd64\"\n","boost":4},{"location":"blog/","title":"Blogposts","text":""},{"location":"blog/2024/10/13/a-new-documentation/","title":"A New Documentation","text":"Hello, World! Thank you for using libdebug. We are proud to roll out our new documentation along with version 0.7.0. This new documentation is powered by MkDocs and Material for MkDocs. We hope you find it more intuitive and easier to navigate.
We have expanded the documentation to cover more topics and provide more examples. We have also tried to highlight some common difficulties that have been reported. In addition, thanks to the mkdocs search plugin, you can more easily find what you are looking for, both in the documentation and in the pages generated from Pydoc.
We hope you enjoy the new documentation. If you find any mistakes or would like to suggest improvements, please let us know by opening an issue on our GitHub repository.
"},{"location":"blog/2024/10/14/see-you-at-acm-ccs-2024/","title":"See you at ACM CCS 2024!","text":"We are excited to announce that we will be presenting a poster on libdebug at the 2024 ACM Conference on Computer and Communications Security (ACM CCS 2024). The conference will be held in Salt Lake City, Utah. The poster session is October 16th at 16:30. We will be presenting the rationale behind libdebug and demonstrating how it can be used in some cool use cases.
If you are attending the conference, please stop by our poster and say hello. We would love to meet you and hear about your ideas. We are also looking forward to hearing about your research and how libdebug can help you in your work. Come by and grab some swag!
Link to the conference: ACM CCS 2024 Link to the poster information: libdebug Poster Link to the proceedings: ACM Digital Library
"},{"location":"blog/2025/03/26/release-08---chutoro-nigiri/","title":"Release 0.8 - Chutoro Nigiri","text":"Hello, debuggers! It's been a while since our last release, but we are excited to announce libdebug version 0.8, codename Chutoro Nigiri . This release brings several new features, improvements, and bug fixes. Here is a summary of the changes:
"},{"location":"blog/2025/03/26/release-08---chutoro-nigiri/#features","title":"Features","text":"fork(), attaching new debuggers to them. This behavior can be customized with the Debugger parameter follow_children.d.memory.find_pointers to identify all pointers in a memory region that reference another region, useful for detecting memory leaks in cybersecurity applications.fast_memory=True): Improves performance of memory access. Can be disabled using the fast_memory parameter in Debugger.d.gdb(open_in_new_process=True): Ensures GDB opens correctly in a newly detected terminal without user-defined commands. zombie attribute in ThreadContext: Allows users to check if a thread is a zombie.SymbolList Slicing: Properly supports slice operations.debuginfod Handling: Enhanced caching logic when a file is not available on debuginfod, improving compatibility with other binaries that use debuginfod on your system.SyscallHandler, SignalCatcher).d.gdb for Edge Cases: Fixed several inconsistencies in execution.step, finish, and next Operations in Callbacks: Now executed correctly.This script was used to showcase the power of libdebug during the Workshop at the CyberChallenge.IT 2024 Finals. An explanation of the script, along with a brief introduction to libdebug, is available in the official stream of the event, starting from timestamp 2:17:00.
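The side channel the script exploits (the breakpoint hit count grows with the length of the correct prefix) can be modeled without the binary. The checker below is a hypothetical stand-in for the target, with its comparison counter playing the role of bp.hit_count; the secret and alphabet are invented for illustration:

```python
SECRET = "flag{hypothetical}"  # stand-in for the real flag

def check(guess: str) -> tuple[bool, int]:
    """Model of the target: compare character by character and count
    successful comparisons (the analogue of the breakpoint hit count)."""
    hits = 0
    for g, s in zip(guess, SECRET):
        if g != s:
            break
        hits += 1
    return guess == SECRET, hits

alphabet = "abcdefghijklmnopqrstuvwxyz_{}"
flag, best_hit_count = "", 0
done = False
while not done:
    for c in alphabet:
        done, hits = check(flag + c)
        if done or hits > best_hit_count:
            # A higher hit count means one more correct character.
            best_hit_count = hits
            flag += c
            break
    else:
        break  # no character improved the hit count

print(flag)  # flag{hypothetical}
```

The real script does exactly this, except the "comparison counter" is a hardware breakpoint's hit_count inside the target process.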
from libdebug import debugger\nfrom string import ascii_letters, digits\n\n# Enable the escape_antidebug option to bypass the ptrace call\nd = debugger(\"main\", escape_antidebug=True)\n\ndef callback(_, __):\n # This will automatically issue a continue when the breakpoint is hit\n pass\n\ndef on_enter_nanosleep(t, _):\n # This sets every argument to NULL to make the syscall fail\n t.syscall_arg0 = 0\n t.syscall_arg1 = 0\n t.syscall_arg2 = 0\n t.syscall_arg3 = 0\n\nalphabet = ascii_letters + digits + \"_{}\"\n\nflag = b\"\"\nbest_hit_count = 0\n\nwhile True:\n for c in alphabet:\n r = d.run()\n\n # Any time we call run() we have to reset the breakpoint and syscall handler\n bp = d.breakpoint(0x13e1, hardware=True, callback=callback, file=\"binary\")\n d.handle_syscall(\"clock_nanosleep\", on_enter=on_enter_nanosleep)\n\n d.cont()\n\n r.sendline(flag + c.encode())\n\n # This makes the debugger wait for the process to terminate\n d.wait()\n\n response = r.recvline()\n\n # `run()` will automatically kill any still-running process, but it's good practice to do it manually\n d.kill()\n\n if b\"Yeah\" in response:\n # The flag is correct\n flag += c.encode()\n print(flag)\n break\n\n if bp.hit_count > best_hit_count:\n # We have found a new flag character\n best_hit_count = bp.hit_count\n flag += c.encode()\n print(flag)\n break\n\n if c == \"}\":\n break\n\nprint(flag)\n","boost":0.8},{"location":"code_examples/example_nlinks/","title":"DEF CON Quals 2023 - nlinks","text":"This is a script that solves the challenge nlinks from DEF CON Quals 2023. Please find the binary executables here.
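A core trick in the class-1 solver below is XOR self-inversion: if the binary masks both the probe character and the expected character with the same key, the key cancels out. The model here is an assumption about what the binary computes; the key, register roles, and values are invented for illustration:

```python
key = 0x5A           # hypothetical per-round mask used by the binary
expected = ord("6")  # hypothetical flag character we want to recover

# What we imagine the binary leaves in the registers after it has
# processed our probe input "a":
rbp = ord("a") ^ key  # probe character masked with the key
r13 = expected ^ key  # expected character masked with the same key

# The recovery from the script: XOR-ing with ord("a") strips the probe
# character and leaves the key, which then cancels out of r13.
offset = ord("a") ^ rbp   # == key
recovered = offset ^ r13  # == expected
print(chr(recovered))  # 6
```

The class-3 solver uses the same idea with subtraction instead of XOR, which is why its recovery is (r13 + offset) % 256.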
def get_passsphrase_from_class_1_binaries(previous_flag):\n flag = b\"\"\n\n d = debugger(\"CTF/1\")\n r = d.run()\n\n bp = d.breakpoint(0x7EF1, hardware=True, file=\"binary\")\n\n d.cont()\n\n r.recvuntil(b\"Passphrase:\\n\")\n\n # We send a fake flag after the valid password\n r.send(previous_flag + b\"a\" * 8)\n\n for _ in range(8):\n # Here we reached the breakpoint\n if not bp.hit_on(d):\n print(\"Here we should have hit the breakpoint\")\n\n offset = ord(\"a\") ^ d.regs.rbp\n d.regs.rbp = d.regs.r13\n\n # We calculate the correct character value and append it to the flag\n flag += (offset ^ d.regs.r13).to_bytes(1, \"little\")\n\n d.cont()\n\n r.recvline()\n\n d.kill()\n\n # Here the value of flag is b\"\\x00\\x006\\x00\\x00\\x00(\\x00\"\n return flag\n\ndef get_passsphrase_from_class_2_binaries(previous_flag):\n bitmap = {}\n lastpos = 0\n flag = b\"\"\n\n d = debugger(\"CTF/2\")\n r = d.run()\n\n bp1 = d.breakpoint(0xD8C1, hardware=True, file=\"binary\")\n bp2 = d.breakpoint(0x1858, hardware=True, file=\"binary\")\n bp3 = d.breakpoint(0xDBA1, hardware=True, file=\"binary\")\n\n d.cont()\n\n r.recvuntil(b\"Passphrase:\\n\")\n r.send(previous_flag + b\"a\" * 8)\n\n while True:\n if d.regs.rip == bp1.address:\n # Prepare for the next element in the bitmap\n lastpos = d.regs.rbp\n d.regs.rbp = d.regs.r13 + 1\n elif d.regs.rip == bp2.address:\n # Update the bitmap\n bitmap[d.regs.r12 & 0xFF] = lastpos & 0xFF\n elif d.regs.rip == bp3.address:\n # Use the bitmap to calculate the expected character\n d.regs.rbp = d.regs.r13\n wanted = d.regs.rbp\n needed = 0\n for i in range(8):\n if wanted & (2**i):\n needed |= bitmap[2**i]\n flag += chr(needed).encode()\n\n if bp3.hit_count == 8:\n # We have found all the characters\n d.cont()\n break\n\n d.cont()\n\n d.kill()\n\n # Here the value of flag is b\"\\x00\\x00\\x00\\x01\\x00\\x00a\\x00\"\n return flag\n\ndef get_passsphrase_from_class_3_binaries():\n flag = b\"\"\n\n d = debugger(\"CTF/0\")\n r = d.run()\n\n bp = 
d.breakpoint(0x91A1, hardware=True, file=\"binary\")\n\n d.cont()\n\n r.send(b\"a\" * 8)\n\n for _ in range(8):\n\n # Here we reached the breakpoint\n if not bp.hit_on(d):\n print(\"Here we should have hit the breakpoint\")\n\n offset = ord(\"a\") - d.regs.rbp\n d.regs.rbp = d.regs.r13\n\n # We calculate the correct character value and append it to the flag\n flag += chr((d.regs.r13 + offset) % 256).encode(\"latin-1\")\n\n d.cont()\n\n r.recvline()\n\n d.kill()\n\n # Here the value of flag is b\"BM8\\xd3\\x02\\x00\\x00\\x00\"\n return flag\n\ndef run_nlinks():\n flag0 = get_passsphrase_from_class_3_binaries()\n flag1 = get_passsphrase_from_class_1_binaries(flag0)\n flag2 = get_passsphrase_from_class_2_binaries(flag1)\n\n print(flag0, flag1, flag2)\n","boost":0.8},{"location":"code_examples/examples_index/","title":"Examples Index","text":"This chapter contains a collection of examples showcasing the power of libdebug in various scenarios. Each example is a script that uses the library to solve a specific challenge or demonstrate a particular feature.
","boost":1},{"location":"code_examples/examples_sudo_kurl/","title":"Execution Hijacking Example - TRX CTF 2025","text":"This code example shows how to hijack the execution flow of the program to retrieve the state of a Sudoku game and solve it with Z3. This is a challenge from the TRX CTF 2025. The full writeup, written by Luca Padalino (padawan), can be found here.
","boost":1},{"location":"code_examples/examples_sudo_kurl/#context-of-the-challenge","title":"Context of the challenge","text":"The attachment is an AMD64 ELF binary that simulates a futuristic scenario where the New Roman Empire faces alien invaders. Upon execution, the program prompts users to deploy legions by specifying row and column indices, along with troop values, within a 25x25 grid. The goal is to determine the correct deployment strategy to secure victory against the alien threat. The constraints for the deployment are actually those of a Sudoku game. The challenge is to solve the Sudoku puzzle to deploy the legions correctly.
The following table summarizes the main functions and their roles within the binary:
Function Description main() Prints the initial welcome message and then calls the game loop by invoking play(). play() Implements the main game loop: it repeatedly validates the board state via isValid(), collects user input using askInput(), and upon receiving the win-check signal (-1), verifies the board via checkWin(). Depending on the result, it either displays a defeat message or computes and prints the flag via getFlag(). isValid(board) Checks the board\u2019s validity (a 25\u00d725 grid) by verifying that each row, column, and 5\u00d75 sub-grid has correct values without duplicates\u2014akin to a Sudoku verification. askInput(board) Prompts the user to input a row, column, and number of troops (values between 1 and 25). It updates the board if the target cell is empty or shows an error if the cell is already occupied. Using -1 for the row index signals that the user wants to check for a win. checkWin(board) Scans the board to ensure that no cell contains a 0 and that the board remains valid. It returns a status indicating whether the win condition has been met. getFlag(board) Processes the board along with an internal vector (named A) by splitting it into segments, performing matrix\u2013vector multiplications (via matrixVectorMultiply()), and converting the resulting numbers into characters to form the flag string. matrixVectorMultiply(matrix, vector) Multiplies a matrix with a vector and returns the result. This operation is used within getFlag() to transform part of the internal vector into a sequence that contributes to the flag.
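The isValid() check described in the table can be sketched in plain Python. This is an illustrative reimplementation of the described constraints, not the binary's actual code; the board is assumed to be a flat list of n*n integers with 0 marking an empty cell:

```python
def is_valid(board, n=25, step=5):
    """Check Sudoku-style constraints on a flat n*n board; 0 means empty."""

    def ok(cells):
        # Filled cells must be in [1, n] and contain no duplicates
        filled = [c for c in cells if c != 0]
        return len(filled) == len(set(filled)) and all(1 <= c <= n for c in filled)

    rows = [board[i * n:(i + 1) * n] for i in range(n)]
    cols = [board[j::n] for j in range(n)]
    # step*step sub-grids, anchored at every (bi, bj) multiple of step
    blocks = [
        [board[(bi + k) * n + (bj + l)] for k in range(step) for l in range(step)]
        for bi in range(0, n, step)
        for bj in range(0, n, step)
    ]
    return all(ok(group) for group in rows + cols + blocks)
```

On a reduced 4x4 board with 2x2 blocks, `is_valid([1, 2, 3, 4, 3, 4, 1, 2, 2, 1, 4, 3, 4, 3, 2, 1], n=4, step=2)` accepts a solved grid, while introducing a duplicate in any row, column, or block makes it reject.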
","boost":1},{"location":"code_examples/examples_sudo_kurl/#the-solution","title":"The solution","text":"The following is the initial state of the Sudoku board retrieved by the script:
initial_board = [\n 0,0,0,21,0,11,0,0,3,24,9,20,23,0,7,22,0,5,18,0,15,2,16,13,0,\n 24,4,0,20,15,0,0,5,0,16,2,25,22,0,17,6,21,0,14,0,8,10,1,19,18,\n 0,0,10,0,5,0,21,19,22,0,3,13,1,16,0,15,4,7,23,24,12,0,14,0,0,\n 0,0,13,6,12,14,4,1,0,0,24,18,19,5,0,0,17,0,0,0,7,22,0,9,21,\n 0,23,19,7,0,0,6,0,0,20,15,4,0,21,0,0,0,0,16,10,24,3,0,17,5,\n 12,15,21,0,0,0,16,6,18,5,7,0,17,3,9,14,0,4,24,22,13,0,0,0,0,\n 14,10,11,2,24,1,25,22,20,0,0,23,6,19,0,13,5,8,12,0,17,0,7,15,9,\n 0,0,0,0,1,24,0,3,15,10,20,8,5,0,25,9,16,19,21,0,2,6,0,12,14,\n 0,0,5,0,3,0,23,14,8,0,0,2,15,0,12,0,7,1,17,6,22,21,4,0,19,\n 13,0,0,4,20,0,0,0,17,0,11,16,0,0,22,0,10,18,15,23,0,25,8,1,3,\n 20,25,7,22,0,23,0,10,1,0,0,0,0,13,4,21,0,6,19,0,3,9,15,8,0,\n 1,24,0,0,0,4,0,20,13,0,8,0,3,0,19,16,2,12,9,5,0,14,10,25,22,\n 0,0,0,0,0,0,0,9,24,0,25,6,0,2,16,4,8,10,0,17,18,7,21,0,1,\n 0,8,0,10,14,16,3,25,6,0,0,7,18,9,11,0,13,0,20,0,19,24,5,0,17,\n 17,3,0,15,9,5,0,0,11,0,0,21,0,0,23,7,0,22,0,0,20,13,12,4,6,\n 15,0,20,11,21,10,0,0,5,22,16,0,0,8,3,24,0,13,2,19,0,0,0,0,0,\n 0,13,8,0,19,17,0,0,0,0,0,12,7,24,6,0,15,23,22,4,14,5,9,0,0,\n 9,1,23,14,4,0,24,0,7,8,19,0,2,0,13,17,3,20,5,0,0,15,0,16,10,\n 10,0,2,12,0,13,18,15,0,0,17,5,0,20,21,8,1,16,0,7,0,19,0,11,0,\n 7,5,17,24,16,20,2,11,19,3,23,0,4,15,1,18,14,0,10,0,0,8,13,21,12,\n 0,20,9,0,7,15,22,17,10,0,12,19,0,0,24,25,0,14,4,8,16,18,2,0,0,\n 19,2,24,8,0,0,20,7,4,0,0,0,9,0,15,5,0,21,11,16,1,0,0,14,25,\n 0,0,25,1,0,8,5,23,14,6,4,17,16,0,2,0,20,0,13,9,10,12,24,7,15,\n 0,0,14,0,0,0,0,0,0,2,6,10,13,0,5,12,0,24,0,0,9,11,0,3,8,\n 6,0,15,0,13,0,0,24,0,9,1,0,8,25,0,10,18,17,0,2,0,4,19,0,23\n]\n The solution script uses libdebug to force the binary to print the state of the board. This state is then parsed and used to create a Z3 model that solves the Sudoku. The solution is then sent back to the binary to solve the game.
from z3 import *\nfrom libdebug import debugger\n\nd = debugger(\"./chall\")\npipe = d.run()\n\n# 0) Hijack the instruction pointer to the displayBoard function\n# Yes...the parentheses are part of the symbol name\nbp = d.breakpoint(\"play()+26\", file=\"binary\", hardware=True)\nwhile not d.dead:\n d.cont()\n d.wait()\n\n if bp.hit_on(d.threads[0]):\n d.step()\n print(\"Hit on play()+0x26\")\n d.regs.rip = d.maps[0].base + 0x2469\n\n# 1) Get information from the board\npipe.recvline(numlines=4)\ninitial_board = pipe.recvline(25).decode().strip().split(\" \")\ninitial_board = [int(x) if x != \".\" else 0 for x in initial_board]\n\nBOARD_SIZE = 25\nBOARD_STEP = 5\n\n# 2) Solve using Z3\ns = Solver()\n\n# 2.1) Create board\nboard = [[Int(f\"board_{i}_{j}\") for i in range(25)] for j in range(25)]\n# 2.2) Add constraints\nfor i in range(BOARD_SIZE):\n for j in range(25):\n # 2.2.1) All the numbers must be between 1 and 25\n s.add(board[i][j] >= 1, board[i][j] <= 25)\n # 2.2.2) If the number is already given, it must be the same \n if initial_board[i*25+j] != 0:\n s.add(board[i][j] == initial_board[i*25+j])\n # 2.2.3) All the numbers in the row must be different\n s.add(Distinct(board[i]))\n # 2.2.4) All the numbers in the column must be different\n s.add(Distinct([board[j][i] for j in range(BOARD_SIZE)]))\n\n# 2.2.5) All the numbers in the 5x5 blocks must be different\nfor i in range(0, BOARD_SIZE, BOARD_STEP):\n for j in range(0, BOARD_SIZE, BOARD_STEP):\n block = [board[i+k][j+l] for k in range(BOARD_STEP) for l in range(BOARD_STEP)]\n s.add(Distinct(block))\n\n# 2.3) Check if the board is solvable\nif s.check() == sat:\n m = s.model()\n\n # 3) Solve the game\n pipe = d.run()\n d.cont()\n pipe.recvuntil(\"deploy.\\n\")\n\n # Send found solution\n for i in range(BOARD_SIZE):\n for j in range(BOARD_SIZE):\n if initial_board[i*25+j] == 0:\n pipe.recvuntil(\": \")\n pipe.sendline(f\"{i+1}\")\n pipe.recvuntil(\": \")\n pipe.sendline(f\"{j+1}\")\n pipe.recvuntil(\": \")\n 
pipe.sendline(str(m[board[i][j]]))\n print(f\"Row {i+1} - Col {j+1}: {m[board[i][j]]}\")\n\n pipe.recvuntil(\": \")\n pipe.sendline(f\"0\")\n\n # Receive final messages and the flag\n print(pipe.recvline().decode())\n print(pipe.recvline().decode())\n print(pipe.recvline().decode())\n print(pipe.recvline().decode())\n print(pipe.recvline().decode())\nelse:\n print(\"No solution found\")\n\nd.terminate()\n","boost":1},{"location":"development/building_libdebug/","title":"Building libdebug from source","text":"Building libdebug from source is a straightforward process. This guide will walk you through the steps required to compile and install libdebug on your system.
","boost":4},{"location":"development/building_libdebug/#resolving-dependencies","title":"Resolving Dependencies","text":"To install libdebug, you first need to have some dependencies that will not be automatically resolved. These dependencies are libraries, utilities and development headers which are required by libdebug to compile its internals during installation.
Ubuntu Arch Linux Fedora Debian openSUSE Alpine Linux sudo apt install -y python3 python3-dev g++ libdwarf-dev libelf-dev libiberty-dev\n sudo pacman -S base-devel python3 elfutils libdwarf binutils\n sudo dnf install -y python3 python3-devel g++ elfutils-devel libdwarf-devel binutils-devel\n sudo apt install -y python3 python3-dev g++ libdwarf-dev libelf-dev libiberty-dev\n sudo zypper install -y gcc-c++ make python3 python3-devel libelf-devel libdwarf-devel binutils-devel\n sudo apk add python3 python3-dev py3-pip linux-headers elfutils-dev libdwarf-dev binutils-dev\n Is your distro missing?
If you are using a Linux distribution that is not included in this section, you can search for equivalent packages for your distro. Chances are the naming convention of your system's repository will only change a prefix or suffix.
","boost":4},{"location":"development/building_libdebug/#building","title":"Building","text":"To build libdebug from source, from the root directory of the repository, simply run the following command:
python3 -m pip install .\n Alternatively, without cloning the repository, you can directly install libdebug from the GitHub repository using the following command:
python3 -m pip install git+https://github.com/libdebug/libdebug.git@<branch_or_commit>\n Replace <branch_or_commit> with the desired branch or commit hash you want to install. If not specified, the default branch will be used. Editable Install
If you want to install libdebug in editable mode, allowing you to modify the source code and have those changes reflected immediately, you can use the following command, exclusively from a local clone of the repository:
python3 -m pip install --no-build-isolation -Ceditable.rebuild=true -ve .\n This will ensure that every time you make changes to the source code, they will be immediately available without needing to reinstall the package, even for the compiled C++ extensions.
","boost":4},{"location":"development/building_libdebug/#build-options","title":"Build Options","text":"There are some configurable build options that can be set during the installation process, to avoid linking against certain libraries or to enable/disable specific features. These options can be set using environment variables before running the installation command.
Option Description Default Value USE_LIBDWARF Include libdwarf, which is used for symbol resolution and debugging information. True USE_LIBELF Include libelf, which is used for reading ELF files. True USE_LIBIBERTY Include libiberty, which is used for demangling C++ symbols. True Changing these options can be done by setting the environment variable before running the installation command. For example, to disable libdwarf, you can run:
CMAKE_ARGS=-DUSE_LIBDWARF=OFF python3 -m pip install .\n","boost":4},{"location":"development/building_libdebug/#testing-your-installation","title":"Testing Your Installation","text":"We provide a comprehensive suite of tests to ensure that your installation is working correctly. Here's how you can run the tests:
cd test\npython3 run_suite.py <suite>\n We have different test suites available. By default, the fast suite runs, which skips a few tests that take a long time to complete. You can specify which test suite to run using the suite option. The available test suites are:
fast Runs all but a few tests to verify full functionality of the library. slow Runs the complete set of tests, including those that may take longer to execute. stress Runs a set of tests designed to detect issues in multithreaded processes. memory Runs a set of tests designed to detect memory leaks in libdebug.","boost":4},{"location":"from_pydoc/generated/architectures/thread_context_helper/","title":"libdebug.architectures.thread_context_helper","text":""},{"location":"from_pydoc/generated/architectures/thread_context_helper/#libdebug.architectures.thread_context_helper.thread_context_class_provider","title":"thread_context_class_provider(architecture)","text":"Returns the class of the thread context to be used by the _InternalDebugger class.
libdebug/architectures/thread_context_helper.py def thread_context_class_provider(\n architecture: str,\n) -> type[ThreadContext]:\n \"\"\"Returns the class of the thread context to be used by the `_InternalDebugger` class.\"\"\"\n match architecture:\n case \"amd64\":\n return Amd64ThreadContext\n case \"aarch64\":\n return Aarch64ThreadContext\n case \"i386\":\n if libcontext.platform == \"amd64\":\n return I386OverAMD64ThreadContext\n else:\n return I386ThreadContext\n case _:\n raise NotImplementedError(f\"Architecture {architecture} not available.\")\n"},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_thread_context/","title":"libdebug.architectures.aarch64.aarch64_thread_context","text":""},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_thread_context/#libdebug.architectures.aarch64.aarch64_thread_context.Aarch64ThreadContext","title":"Aarch64ThreadContext","text":" Bases: ThreadContext
This object represents a thread in the context of the target aarch64 process. It holds information about the thread's state, registers and stack.
Source code inlibdebug/architectures/aarch64/aarch64_thread_context.py class Aarch64ThreadContext(ThreadContext):\n \"\"\"This object represents a thread in the context of the target aarch64 process. It holds information about the thread's state, registers and stack.\"\"\"\n\n def __init__(self: Aarch64ThreadContext, thread_id: int, registers: Aarch64PtraceRegisterHolder) -> None:\n \"\"\"Initialize the thread context with the given thread id.\"\"\"\n super().__init__(thread_id, registers)\n\n # Register the thread properties\n self._register_holder.apply_on_thread(self, Aarch64ThreadContext)\n"},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_thread_context/#libdebug.architectures.aarch64.aarch64_thread_context.Aarch64ThreadContext.__init__","title":"__init__(thread_id, registers)","text":"Initialize the thread context with the given thread id.
Source code inlibdebug/architectures/aarch64/aarch64_thread_context.py def __init__(self: Aarch64ThreadContext, thread_id: int, registers: Aarch64PtraceRegisterHolder) -> None:\n \"\"\"Initialize the thread context with the given thread id.\"\"\"\n super().__init__(thread_id, registers)\n\n # Register the thread properties\n self._register_holder.apply_on_thread(self, Aarch64ThreadContext)\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_thread_context/","title":"libdebug.architectures.amd64.amd64_thread_context","text":""},{"location":"from_pydoc/generated/architectures/amd64/amd64_thread_context/#libdebug.architectures.amd64.amd64_thread_context.Amd64ThreadContext","title":"Amd64ThreadContext","text":" Bases: ThreadContext
This object represents a thread in the context of the target amd64 process. It holds information about the thread's state, registers and stack.
Source code inlibdebug/architectures/amd64/amd64_thread_context.py class Amd64ThreadContext(ThreadContext):\n \"\"\"This object represents a thread in the context of the target amd64 process. It holds information about the thread's state, registers and stack.\"\"\"\n\n def __init__(self: Amd64ThreadContext, thread_id: int, registers: Amd64PtraceRegisterHolder) -> None:\n \"\"\"Initialize the thread context with the given thread id.\"\"\"\n super().__init__(thread_id, registers)\n\n # Register the thread properties\n self._register_holder.apply_on_thread(self, Amd64ThreadContext)\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_thread_context/#libdebug.architectures.amd64.amd64_thread_context.Amd64ThreadContext.__init__","title":"__init__(thread_id, registers)","text":"Initialize the thread context with the given thread id.
Source code inlibdebug/architectures/amd64/amd64_thread_context.py def __init__(self: Amd64ThreadContext, thread_id: int, registers: Amd64PtraceRegisterHolder) -> None:\n \"\"\"Initialize the thread context with the given thread id.\"\"\"\n super().__init__(thread_id, registers)\n\n # Register the thread properties\n self._register_holder.apply_on_thread(self, Amd64ThreadContext)\n"},{"location":"from_pydoc/generated/architectures/amd64/compat/i386_over_amd64_thread_context/","title":"libdebug.architectures.amd64.compat.i386_over_amd64_thread_context","text":""},{"location":"from_pydoc/generated/architectures/amd64/compat/i386_over_amd64_thread_context/#libdebug.architectures.amd64.compat.i386_over_amd64_thread_context.I386OverAMD64ThreadContext","title":"I386OverAMD64ThreadContext","text":" Bases: ThreadContext
This object represents a thread in the context of the target i386 process when running on amd64. It holds information about the thread's state, registers and stack.
Source code inlibdebug/architectures/amd64/compat/i386_over_amd64_thread_context.py class I386OverAMD64ThreadContext(ThreadContext):\n \"\"\"This object represents a thread in the context of the target i386 process when running on amd64. It holds information about the thread's state, registers and stack.\"\"\"\n\n def __init__(\n self: I386OverAMD64ThreadContext,\n thread_id: int,\n registers: I386OverAMD64PtraceRegisterHolder,\n ) -> None:\n \"\"\"Initialize the thread context with the given thread id.\"\"\"\n super().__init__(thread_id, registers)\n\n # Register the thread properties\n self._register_holder.apply_on_thread(self, I386OverAMD64ThreadContext)\n"},{"location":"from_pydoc/generated/architectures/amd64/compat/i386_over_amd64_thread_context/#libdebug.architectures.amd64.compat.i386_over_amd64_thread_context.I386OverAMD64ThreadContext.__init__","title":"__init__(thread_id, registers)","text":"Initialize the thread context with the given thread id.
Source code inlibdebug/architectures/amd64/compat/i386_over_amd64_thread_context.py def __init__(\n self: I386OverAMD64ThreadContext,\n thread_id: int,\n registers: I386OverAMD64PtraceRegisterHolder,\n) -> None:\n \"\"\"Initialize the thread context with the given thread id.\"\"\"\n super().__init__(thread_id, registers)\n\n # Register the thread properties\n self._register_holder.apply_on_thread(self, I386OverAMD64ThreadContext)\n"},{"location":"from_pydoc/generated/architectures/i386/i386_thread_context/","title":"libdebug.architectures.i386.i386_thread_context","text":""},{"location":"from_pydoc/generated/architectures/i386/i386_thread_context/#libdebug.architectures.i386.i386_thread_context.I386ThreadContext","title":"I386ThreadContext","text":" Bases: ThreadContext
This object represents a thread in the context of the target i386 process. It holds information about the thread's state, registers and stack.
Source code inlibdebug/architectures/i386/i386_thread_context.py class I386ThreadContext(ThreadContext):\n \"\"\"This object represents a thread in the context of the target i386 process. It holds information about the thread's state, registers and stack.\"\"\"\n\n def __init__(self: I386ThreadContext, thread_id: int, registers: I386PtraceRegisterHolder) -> None:\n \"\"\"Initialize the thread context with the given thread id.\"\"\"\n super().__init__(thread_id, registers)\n\n # Register the thread properties\n self._register_holder.apply_on_thread(self, I386ThreadContext)\n"},{"location":"from_pydoc/generated/architectures/i386/i386_thread_context/#libdebug.architectures.i386.i386_thread_context.I386ThreadContext.__init__","title":"__init__(thread_id, registers)","text":"Initialize the thread context with the given thread id.
Source code inlibdebug/architectures/i386/i386_thread_context.py def __init__(self: I386ThreadContext, thread_id: int, registers: I386PtraceRegisterHolder) -> None:\n \"\"\"Initialize the thread context with the given thread id.\"\"\"\n super().__init__(thread_id, registers)\n\n # Register the thread properties\n self._register_holder.apply_on_thread(self, I386ThreadContext)\n"},{"location":"from_pydoc/generated/snapshots/diff/","title":"libdebug.snapshots.diff","text":""},{"location":"from_pydoc/generated/snapshots/diff/#libdebug.snapshots.diff.Diff","title":"Diff","text":"This object represents a diff between two snapshots.
Source code inlibdebug/snapshots/diff.py class Diff:\n \"\"\"This object represents a diff between two snapshots.\"\"\"\n\n def __init__(self: Diff, snapshot1: Snapshot, snapshot2: Snapshot) -> None:\n \"\"\"Initialize the Diff object with two snapshots.\n\n Args:\n snapshot1 (Snapshot): The first snapshot.\n snapshot2 (Snapshot): The second snapshot.\n \"\"\"\n if snapshot1.snapshot_id < snapshot2.snapshot_id:\n self.snapshot1 = snapshot1\n self.snapshot2 = snapshot2\n else:\n self.snapshot1 = snapshot2\n self.snapshot2 = snapshot1\n\n # The level of the diff is the lowest level among the two snapshots\n if snapshot1.level == \"base\" or snapshot2.level == \"base\":\n self.level = \"base\"\n elif snapshot1.level == \"writable\" or snapshot2.level == \"writable\":\n self.level = \"writable\"\n else:\n self.level = \"full\"\n\n if self.snapshot1.arch != self.snapshot2.arch:\n raise ValueError(\"Snapshots have different architectures. Automatic diff is not supported.\")\n\n def _save_reg_diffs(self: Snapshot) -> None:\n self.regs = RegisterDiffAccessor(\n self.snapshot1.regs._generic_regs,\n self.snapshot1.regs._special_regs,\n self.snapshot1.regs._vec_fp_regs,\n )\n\n all_regs = dir(self.snapshot1.regs)\n all_regs = [reg for reg in all_regs if isinstance(self.snapshot1.regs.__getattribute__(reg), int | float)]\n\n for reg_name in all_regs:\n old_value = self.snapshot1.regs.__getattribute__(reg_name)\n new_value = self.snapshot2.regs.__getattribute__(reg_name)\n has_changed = old_value != new_value\n\n diff = RegisterDiff(\n old_value=old_value,\n new_value=new_value,\n has_changed=has_changed,\n )\n\n # Create diff object\n self.regs.__setattr__(reg_name, diff)\n\n def _resolve_maps_diff(self: Diff) -> None:\n # Handle memory maps\n all_maps_diffs = []\n handled_map2_indices = []\n\n for map1 in self.snapshot1.maps:\n # Find the corresponding map in the second snapshot\n map2 = None\n\n for map2_index, candidate in enumerate(self.snapshot2.maps):\n if 
map1.is_same_identity(candidate):\n map2 = candidate\n handled_map2_indices.append(map2_index)\n break\n\n if map2 is None:\n diff = MemoryMapDiff(\n old_map_state=map1,\n new_map_state=None,\n has_changed=True,\n )\n else:\n diff = MemoryMapDiff(\n old_map_state=map1,\n new_map_state=map2,\n has_changed=(map1 != map2),\n )\n\n all_maps_diffs.append(diff)\n\n new_pages = [self.snapshot2.maps[i] for i in range(len(self.snapshot2.maps)) if i not in handled_map2_indices]\n\n for new_page in new_pages:\n diff = MemoryMapDiff(\n old_map_state=None,\n new_map_state=new_page,\n has_changed=True,\n )\n\n all_maps_diffs.append(diff)\n\n # Convert the list to a MemoryMapDiffList\n self.maps = MemoryMapDiffList(\n all_maps_diffs,\n self.snapshot1._process_name,\n self.snapshot1._process_full_path,\n )\n\n @property\n def registers(self: Snapshot) -> SnapshotRegisters:\n \"\"\"Alias for regs.\"\"\"\n return self.regs\n\n def pprint_maps(self: Diff) -> None:\n \"\"\"Pretty print the memory maps diff.\"\"\"\n has_prev_changed = False\n\n for diff in self.maps:\n ref = diff.old_map_state if diff.old_map_state is not None else diff.new_map_state\n\n map_state_str = \"\"\n map_state_str += \"Memory Map:\\n\"\n map_state_str += f\" start: {ref.start:#x}\\n\"\n map_state_str += f\" end: {ref.end:#x}\\n\"\n map_state_str += f\" permissions: {ref.permissions}\\n\"\n map_state_str += f\" size: {ref.size:#x}\\n\"\n map_state_str += f\" offset: {ref.offset:#x}\\n\"\n map_state_str += f\" backing_file: {ref.backing_file}\\n\"\n\n # If is added\n if diff.old_map_state is None:\n pprint_diff_line(map_state_str, is_added=True)\n\n has_prev_changed = True\n # If is removed\n elif diff.new_map_state is None:\n pprint_diff_line(map_state_str, is_added=False)\n\n has_prev_changed = True\n elif diff.old_map_state.end != diff.new_map_state.end:\n printed_line = map_state_str\n\n new_map_end = diff.new_map_state.end\n\n start_strike = printed_line.find(\"end:\") + 4\n end_strike = 
printed_line.find(\"\\n\", start_strike)\n\n pprint_inline_diff(printed_line, start_strike, end_strike, f\"{hex(new_map_end)}\")\n\n has_prev_changed = True\n elif diff.old_map_state.permissions != diff.new_map_state.permissions:\n printed_line = map_state_str\n\n new_map_permissions = diff.new_map_state.permissions\n\n start_strike = printed_line.find(\"permissions:\") + 12\n end_strike = printed_line.find(\"\\n\", start_strike)\n\n pprint_inline_diff(printed_line, start_strike, end_strike, new_map_permissions)\n\n has_prev_changed = True\n elif diff.old_map_state.content != diff.new_map_state.content:\n printed_line = map_state_str + \" [content changed]\\n\"\n color_start = printed_line.find(\"[content changed]\")\n\n pprint_diff_substring(printed_line, color_start, color_start + len(\"[content changed]\"))\n\n has_prev_changed = True\n else:\n if has_prev_changed:\n print(\"\\n[...]\\n\")\n\n has_prev_changed = False\n\n def pprint_memory(\n self: Diff,\n start: int,\n end: int,\n file: str = \"hybrid\",\n override_word_size: int = None,\n integer_mode: bool = False,\n ) -> None:\n \"\"\"Pretty print the memory diff.\n\n Args:\n start (int): The start address of the memory diff.\n end (int): The end address of the memory diff.\n file (str, optional): The backing file for relative / absolute addressing. Defaults to \"hybrid\".\n override_word_size (int, optional): The word size to use for the diff in place of the ISA word size. Defaults to None.\n integer_mode (bool, optional): If True, the diff will be printed as hex integers (system endianness applies). 
Defaults to False.\n \"\"\"\n if self.level == \"base\":\n raise ValueError(\"Memory diff is not available at base snapshot level.\")\n\n if start > end:\n tmp = start\n start = end\n end = tmp\n\n word_size = (\n get_platform_gp_register_size(self.snapshot1.arch) if override_word_size is None else override_word_size\n )\n\n # Resolve the address\n if file == \"absolute\":\n address_start = start\n elif file == \"hybrid\":\n try:\n # Try to resolve the address as absolute\n self.snapshot1.memory[start, 1, \"absolute\"]\n address_start = start\n except ValueError:\n # If the address is not in the maps, we use the binary file\n address_start = start + self.snapshot1.maps.filter(\"binary\")[0].start\n file = \"binary\"\n else:\n map_file = self.snapshot1.maps.filter(file)[0]\n address_start = start + map_file.base\n file = map_file.backing_file if file != \"binary\" else \"binary\"\n\n extract_before = self.snapshot1.memory[start:end, file]\n extract_after = self.snapshot2.memory[start:end, file]\n\n file_info = f\" (file: {file})\" if file not in (\"absolute\", \"hybrid\") else \"\"\n print(f\"Memory diff from {start:#x} to {end:#x}{file_info}:\")\n\n pprint_memory_diff_util(\n address_start,\n extract_before,\n extract_after,\n word_size,\n self.snapshot1.maps,\n integer_mode=integer_mode,\n )\n\n def pprint_regs(self: Diff) -> None:\n \"\"\"Pretty print the general_purpose registers diffs.\"\"\"\n # Header with column alignment\n print(\"{:<19} {:<24} {:<20}\\n\".format(\"Register\", \"Old Value\", \"New Value\"))\n print(\"-\" * 58 + \"\")\n\n # Log all integer changes\n for attr_name in self.regs._generic_regs:\n attr = self.regs.__getattribute__(attr_name)\n\n if attr.has_changed:\n pprint_reg_diff_util(\n attr_name,\n self.snapshot1.maps,\n self.snapshot2.maps,\n attr.old_value,\n attr.new_value,\n )\n\n def pprint_regs_all(self: Diff) -> None:\n \"\"\"Pretty print the registers diffs (including special and vector registers).\"\"\"\n # Header with column 
alignment\n print(\"{:<19} {:<24} {:<20}\\n\".format(\"Register\", \"Old Value\", \"New Value\"))\n print(\"-\" * 58 + \"\")\n\n # Log all integer changes\n for attr_name in self.regs._generic_regs + self.regs._special_regs:\n attr = self.regs.__getattribute__(attr_name)\n\n if attr.has_changed:\n pprint_reg_diff_util(\n attr_name,\n self.snapshot1.maps,\n self.snapshot2.maps,\n attr.old_value,\n attr.new_value,\n )\n\n print()\n\n # Log all vector changes\n for attr1_name, attr2_name in self.regs._vec_fp_regs:\n attr1 = self.regs.__getattribute__(attr1_name)\n attr2 = self.regs.__getattribute__(attr2_name)\n\n if attr1.has_changed or attr2.has_changed:\n pprint_reg_diff_large_util(\n (attr1_name, attr2_name),\n (attr1.old_value, attr2.old_value),\n (attr1.new_value, attr2.new_value),\n )\n\n def pprint_registers(self: Diff) -> None:\n \"\"\"Alias afor pprint_regs.\"\"\"\n self.pprint_regs()\n\n def pprint_registers_all(self: Diff) -> None:\n \"\"\"Alias for pprint_regs_all.\"\"\"\n self.pprint_regs_all()\n\n def pprint_backtrace(self: Diff) -> None:\n \"\"\"Pretty print the backtrace diff.\"\"\"\n if self.level == \"base\":\n raise ValueError(\"Backtrace is not available at base level. 
Stack is not available\")\n\n prev_log_level = libcontext.general_logger\n libcontext.general_logger = \"SILENT\"\n stack_unwinder = stack_unwinding_provider(self.snapshot1.arch)\n backtrace1 = stack_unwinder.unwind(self.snapshot1)\n backtrace2 = stack_unwinder.unwind(self.snapshot2)\n\n maps1 = self.snapshot1.maps\n maps2 = self.snapshot2.maps\n\n symbols = self.snapshot1.memory._symbol_ref\n\n # Columns are Before, Unchanged, After\n # __ __\n # |__| |__|\n # |__| |__|\n # |__|__|__|\n # |__|__|__|\n # |__|__|__|\n column1 = []\n column2 = []\n column3 = []\n\n for addr1, addr2 in zip_longest(reversed(backtrace1), reversed(backtrace2)):\n col1 = get_colored_saved_address_util(addr1, maps1, symbols).strip() if addr1 else None\n col2 = None\n col3 = None\n\n if addr2:\n if addr1 == addr2:\n col2 = col1\n col1 = None\n else:\n col3 = get_colored_saved_address_util(addr2, maps2, symbols).strip()\n\n column1.append(col1)\n column2.append(col2)\n column3.append(col3)\n\n max_str_len = max([len(x) if x else 0 for x in column1 + column2 + column3])\n\n print(\"Backtrace diff:\")\n print(\"-\" * (max_str_len * 3 + 6))\n print(f\"{'Before':<{max_str_len}} | {'Unchanged':<{max_str_len}} | {'After':<{max_str_len}}\")\n for col1_val, col2_val, col3_val in zip(reversed(column1), reversed(column2), reversed(column3), strict=False):\n col1 = pad_colored_string(col1_val, max_str_len) if col1_val else \" \" * max_str_len\n col2 = pad_colored_string(col2_val, max_str_len) if col2_val else \" \" * max_str_len\n col3 = pad_colored_string(col3_val, max_str_len) if col3_val else \" \" * max_str_len\n\n print(f\"{col1} | {col2} | {col3}\")\n\n print(\"-\" * (max_str_len * 3 + 6))\n\n libcontext.general_logger = prev_log_level\n"},{"location":"from_pydoc/generated/snapshots/diff/#libdebug.snapshots.diff.Diff.registers","title":"registers property","text":"Alias for regs.
"},{"location":"from_pydoc/generated/snapshots/diff/#libdebug.snapshots.diff.Diff.__init__","title":"__init__(snapshot1, snapshot2)","text":"Initialize the Diff object with two snapshots.
Parameters:
Name Type Description Defaultsnapshot1 Snapshot The first snapshot.
requiredsnapshot2 Snapshot The second snapshot.
required Source code inlibdebug/snapshots/diff.py def __init__(self: Diff, snapshot1: Snapshot, snapshot2: Snapshot) -> None:\n \"\"\"Initialize the Diff object with two snapshots.\n\n Args:\n snapshot1 (Snapshot): The first snapshot.\n snapshot2 (Snapshot): The second snapshot.\n \"\"\"\n if snapshot1.snapshot_id < snapshot2.snapshot_id:\n self.snapshot1 = snapshot1\n self.snapshot2 = snapshot2\n else:\n self.snapshot1 = snapshot2\n self.snapshot2 = snapshot1\n\n # The level of the diff is the lowest level among the two snapshots\n if snapshot1.level == \"base\" or snapshot2.level == \"base\":\n self.level = \"base\"\n elif snapshot1.level == \"writable\" or snapshot2.level == \"writable\":\n self.level = \"writable\"\n else:\n self.level = \"full\"\n\n if self.snapshot1.arch != self.snapshot2.arch:\n raise ValueError(\"Snapshots have different architectures. Automatic diff is not supported.\")\n"},{"location":"from_pydoc/generated/snapshots/diff/#libdebug.snapshots.diff.Diff.pprint_backtrace","title":"pprint_backtrace()","text":"Pretty print the backtrace diff.
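The level-resolution rule at the top of this constructor (a diff is only as deep as the shallower of its two snapshots) can be restated as a small standalone helper; a minimal sketch, assuming the three level names "base", "writable", and "full" used above:

```python
def diff_level(level1, level2):
    """Return the lowest common snapshot level, mirroring Diff.__init__."""
    # "base" is the shallowest level, "full" the deepest
    if "base" in (level1, level2):
        return "base"
    if "writable" in (level1, level2):
        return "writable"
    return "full"
```

For example, diffing a "full" snapshot against a "base" one yields a "base"-level diff, which is why the memory and backtrace diff methods above raise a ValueError in that case.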
Source code inlibdebug/snapshots/diff.py def pprint_backtrace(self: Diff) -> None:\n \"\"\"Pretty print the backtrace diff.\"\"\"\n if self.level == \"base\":\n raise ValueError(\"Backtrace is not available at base level. Stack is not available\")\n\n prev_log_level = libcontext.general_logger\n libcontext.general_logger = \"SILENT\"\n stack_unwinder = stack_unwinding_provider(self.snapshot1.arch)\n backtrace1 = stack_unwinder.unwind(self.snapshot1)\n backtrace2 = stack_unwinder.unwind(self.snapshot2)\n\n maps1 = self.snapshot1.maps\n maps2 = self.snapshot2.maps\n\n symbols = self.snapshot1.memory._symbol_ref\n\n # Columns are Before, Unchanged, After\n # __ __\n # |__| |__|\n # |__| |__|\n # |__|__|__|\n # |__|__|__|\n # |__|__|__|\n column1 = []\n column2 = []\n column3 = []\n\n for addr1, addr2 in zip_longest(reversed(backtrace1), reversed(backtrace2)):\n col1 = get_colored_saved_address_util(addr1, maps1, symbols).strip() if addr1 else None\n col2 = None\n col3 = None\n\n if addr2:\n if addr1 == addr2:\n col2 = col1\n col1 = None\n else:\n col3 = get_colored_saved_address_util(addr2, maps2, symbols).strip()\n\n column1.append(col1)\n column2.append(col2)\n column3.append(col3)\n\n max_str_len = max([len(x) if x else 0 for x in column1 + column2 + column3])\n\n print(\"Backtrace diff:\")\n print(\"-\" * (max_str_len * 3 + 6))\n print(f\"{'Before':<{max_str_len}} | {'Unchanged':<{max_str_len}} | {'After':<{max_str_len}}\")\n for col1_val, col2_val, col3_val in zip(reversed(column1), reversed(column2), reversed(column3), strict=False):\n col1 = pad_colored_string(col1_val, max_str_len) if col1_val else \" \" * max_str_len\n col2 = pad_colored_string(col2_val, max_str_len) if col2_val else \" \" * max_str_len\n col3 = pad_colored_string(col3_val, max_str_len) if col3_val else \" \" * max_str_len\n\n print(f\"{col1} | {col2} | {col3}\")\n\n print(\"-\" * (max_str_len * 3 + 6))\n\n libcontext.general_logger = 
prev_log_level\n"},{"location":"from_pydoc/generated/snapshots/diff/#libdebug.snapshots.diff.Diff.pprint_maps","title":"pprint_maps()","text":"Pretty print the memory maps diff.
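The column layout in `pprint_backtrace` above aligns the two backtraces from the outermost (oldest) frame and sorts each entry into a Before, Unchanged, or After column. A minimal sketch of that bucketing on raw return addresses (the helper name and plain-integer addresses are illustrative):

```python
from itertools import zip_longest

def backtrace_columns(backtrace1, backtrace2):
    """Bucket two backtraces (innermost frame first) into Before / Unchanged /
    After columns, aligned from the outermost frame, mirroring the
    zip_longest walk in pprint_backtrace."""
    before, unchanged, after = [], [], []
    for addr1, addr2 in zip_longest(reversed(backtrace1), reversed(backtrace2)):
        col1 = addr1 if addr1 else None
        col2 = col3 = None
        if addr2:
            if addr1 == addr2:
                # Frame shared by both snapshots: move it to the middle column
                col2, col1 = col1, None
            else:
                col3 = addr2
        before.append(col1)
        unchanged.append(col2)
        after.append(col3)
    return before, unchanged, after
```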
Source code inlibdebug/snapshots/diff.py def pprint_maps(self: Diff) -> None:\n \"\"\"Pretty print the memory maps diff.\"\"\"\n has_prev_changed = False\n\n for diff in self.maps:\n ref = diff.old_map_state if diff.old_map_state is not None else diff.new_map_state\n\n map_state_str = \"\"\n map_state_str += \"Memory Map:\\n\"\n map_state_str += f\" start: {ref.start:#x}\\n\"\n map_state_str += f\" end: {ref.end:#x}\\n\"\n map_state_str += f\" permissions: {ref.permissions}\\n\"\n map_state_str += f\" size: {ref.size:#x}\\n\"\n map_state_str += f\" offset: {ref.offset:#x}\\n\"\n map_state_str += f\" backing_file: {ref.backing_file}\\n\"\n\n # If is added\n if diff.old_map_state is None:\n pprint_diff_line(map_state_str, is_added=True)\n\n has_prev_changed = True\n # If is removed\n elif diff.new_map_state is None:\n pprint_diff_line(map_state_str, is_added=False)\n\n has_prev_changed = True\n elif diff.old_map_state.end != diff.new_map_state.end:\n printed_line = map_state_str\n\n new_map_end = diff.new_map_state.end\n\n start_strike = printed_line.find(\"end:\") + 4\n end_strike = printed_line.find(\"\\n\", start_strike)\n\n pprint_inline_diff(printed_line, start_strike, end_strike, f\"{hex(new_map_end)}\")\n\n has_prev_changed = True\n elif diff.old_map_state.permissions != diff.new_map_state.permissions:\n printed_line = map_state_str\n\n new_map_permissions = diff.new_map_state.permissions\n\n start_strike = printed_line.find(\"permissions:\") + 12\n end_strike = printed_line.find(\"\\n\", start_strike)\n\n pprint_inline_diff(printed_line, start_strike, end_strike, new_map_permissions)\n\n has_prev_changed = True\n elif diff.old_map_state.content != diff.new_map_state.content:\n printed_line = map_state_str + \" [content changed]\\n\"\n color_start = printed_line.find(\"[content changed]\")\n\n pprint_diff_substring(printed_line, color_start, color_start + len(\"[content changed]\"))\n\n has_prev_changed = True\n else:\n if has_prev_changed:\n 
print(\"\\n[...]\\n\")\n\n has_prev_changed = False\n"},{"location":"from_pydoc/generated/snapshots/diff/#libdebug.snapshots.diff.Diff.pprint_memory","title":"pprint_memory(start, end, file='hybrid', override_word_size=None, integer_mode=False)","text":"Pretty print the memory diff.
Parameters:
start (int): The start address of the memory diff. Required.
end (int): The end address of the memory diff. Required.
file (str): The backing file for relative / absolute addressing. Defaults to \"hybrid\".
override_word_size (int): The word size to use for the diff in place of the ISA word size. Defaults to None.
integer_mode (bool): If True, the diff will be printed as hex integers (system endianness applies). Defaults to False.
False Source code in libdebug/snapshots/diff.py def pprint_memory(\n self: Diff,\n start: int,\n end: int,\n file: str = \"hybrid\",\n override_word_size: int = None,\n integer_mode: bool = False,\n) -> None:\n \"\"\"Pretty print the memory diff.\n\n Args:\n start (int): The start address of the memory diff.\n end (int): The end address of the memory diff.\n file (str, optional): The backing file for relative / absolute addressing. Defaults to \"hybrid\".\n override_word_size (int, optional): The word size to use for the diff in place of the ISA word size. Defaults to None.\n integer_mode (bool, optional): If True, the diff will be printed as hex integers (system endianness applies). Defaults to False.\n \"\"\"\n if self.level == \"base\":\n raise ValueError(\"Memory diff is not available at base snapshot level.\")\n\n if start > end:\n tmp = start\n start = end\n end = tmp\n\n word_size = (\n get_platform_gp_register_size(self.snapshot1.arch) if override_word_size is None else override_word_size\n )\n\n # Resolve the address\n if file == \"absolute\":\n address_start = start\n elif file == \"hybrid\":\n try:\n # Try to resolve the address as absolute\n self.snapshot1.memory[start, 1, \"absolute\"]\n address_start = start\n except ValueError:\n # If the address is not in the maps, we use the binary file\n address_start = start + self.snapshot1.maps.filter(\"binary\")[0].start\n file = \"binary\"\n else:\n map_file = self.snapshot1.maps.filter(file)[0]\n address_start = start + map_file.base\n file = map_file.backing_file if file != \"binary\" else \"binary\"\n\n extract_before = self.snapshot1.memory[start:end, file]\n extract_after = self.snapshot2.memory[start:end, file]\n\n file_info = f\" (file: {file})\" if file not in (\"absolute\", \"hybrid\") else \"\"\n print(f\"Memory diff from {start:#x} to {end:#x}{file_info}:\")\n\n pprint_memory_diff_util(\n address_start,\n extract_before,\n extract_after,\n word_size,\n self.snapshot1.maps,\n 
integer_mode=integer_mode,\n )\n"},{"location":"from_pydoc/generated/snapshots/diff/#libdebug.snapshots.diff.Diff.pprint_registers","title":"pprint_registers()","text":"Alias for pprint_regs.
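The \"hybrid\" mode in `pprint_memory` above first tries the address as absolute and, if it does not resolve inside the snapshot's maps, rebases it onto the binary's first mapping. A simplified sketch of that fallback, with a made-up map type instead of libdebug's real maps:

```python
class FakeMap:
    """Made-up stand-in for a memory map; not a libdebug type."""
    def __init__(self, start: int, end: int) -> None:
        self.start, self.end = start, end

def resolve_hybrid(address: int, maps: list, binary_base: int):
    """Return (absolute_address, mode): keep a valid absolute address,
    otherwise rebase the value onto the binary's base mapping."""
    if any(m.start <= address < m.end for m in maps):
        return address, "absolute"
    return address + binary_base, "binary"

maps = [FakeMap(0x400000, 0x401000)]
```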
Source code inlibdebug/snapshots/diff.py def pprint_registers(self: Diff) -> None:\n \"\"\"Alias for pprint_regs.\"\"\"\n self.pprint_regs()\n"},{"location":"from_pydoc/generated/snapshots/diff/#libdebug.snapshots.diff.Diff.pprint_registers_all","title":"pprint_registers_all()","text":"Alias for pprint_regs_all.
Source code inlibdebug/snapshots/diff.py def pprint_registers_all(self: Diff) -> None:\n \"\"\"Alias for pprint_regs_all.\"\"\"\n self.pprint_regs_all()\n"},{"location":"from_pydoc/generated/snapshots/diff/#libdebug.snapshots.diff.Diff.pprint_regs","title":"pprint_regs()","text":"Pretty print the general_purpose registers diffs.
Source code inlibdebug/snapshots/diff.py def pprint_regs(self: Diff) -> None:\n \"\"\"Pretty print the general_purpose registers diffs.\"\"\"\n # Header with column alignment\n print(\"{:<19} {:<24} {:<20}\\n\".format(\"Register\", \"Old Value\", \"New Value\"))\n print(\"-\" * 58 + \"\")\n\n # Log all integer changes\n for attr_name in self.regs._generic_regs:\n attr = self.regs.__getattribute__(attr_name)\n\n if attr.has_changed:\n pprint_reg_diff_util(\n attr_name,\n self.snapshot1.maps,\n self.snapshot2.maps,\n attr.old_value,\n attr.new_value,\n )\n"},{"location":"from_pydoc/generated/snapshots/diff/#libdebug.snapshots.diff.Diff.pprint_regs_all","title":"pprint_regs_all()","text":"Pretty print the registers diffs (including special and vector registers).
Source code inlibdebug/snapshots/diff.py def pprint_regs_all(self: Diff) -> None:\n \"\"\"Pretty print the registers diffs (including special and vector registers).\"\"\"\n # Header with column alignment\n print(\"{:<19} {:<24} {:<20}\\n\".format(\"Register\", \"Old Value\", \"New Value\"))\n print(\"-\" * 58 + \"\")\n\n # Log all integer changes\n for attr_name in self.regs._generic_regs + self.regs._special_regs:\n attr = self.regs.__getattribute__(attr_name)\n\n if attr.has_changed:\n pprint_reg_diff_util(\n attr_name,\n self.snapshot1.maps,\n self.snapshot2.maps,\n attr.old_value,\n attr.new_value,\n )\n\n print()\n\n # Log all vector changes\n for attr1_name, attr2_name in self.regs._vec_fp_regs:\n attr1 = self.regs.__getattribute__(attr1_name)\n attr2 = self.regs.__getattribute__(attr2_name)\n\n if attr1.has_changed or attr2.has_changed:\n pprint_reg_diff_large_util(\n (attr1_name, attr2_name),\n (attr1.old_value, attr2.old_value),\n (attr1.new_value, attr2.new_value),\n )\n"},{"location":"from_pydoc/generated/snapshots/snapshot/","title":"libdebug.snapshots.snapshot","text":""},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot","title":"Snapshot","text":"This object represents a snapshot of a system task.
Snapshot levels:
- base: Registers
- writable: Registers, writable memory contents
- full: Registers, all readable memory contents
Source code inlibdebug/snapshots/snapshot.py class Snapshot:\n \"\"\"This object represents a snapshot of a system task.\n\n Snapshot levels:\n - base: Registers\n - writable: Registers, writable memory contents\n - full: Registers, all readable memory contents\n \"\"\"\n\n def _save_regs(self: Snapshot, thread: ThreadContext) -> None:\n # Create a register field for the snapshot\n self.regs = SnapshotRegisters(\n thread.thread_id,\n thread._register_holder.provide_regs(),\n thread._register_holder.provide_special_regs(),\n thread._register_holder.provide_vector_fp_regs(),\n )\n\n # Set all registers in the field\n all_regs = dir(thread.regs)\n all_regs = [reg for reg in all_regs if isinstance(thread.regs.__getattribute__(reg), int | float)]\n\n for reg_name in all_regs:\n reg_value = thread.regs.__getattribute__(reg_name)\n self.regs.__setattr__(reg_name, reg_value)\n\n def _save_memory_maps(self: Snapshot, debugger: InternalDebugger, writable_only: bool) -> None:\n \"\"\"Saves memory maps of the process to the snapshot.\"\"\"\n process_name = debugger._process_name\n full_process_path = debugger._process_full_path\n self.maps = MemoryMapSnapshotList([], process_name, full_process_path)\n\n for curr_map in debugger.maps:\n # Skip non-writable maps if requested\n # Always skip maps that fail on read\n if not writable_only or \"w\" in curr_map.permissions:\n try:\n contents = debugger.memory[curr_map.start : curr_map.end, \"absolute\"]\n except (ValueError, OSError, OverflowError):\n # There are some memory regions that cannot be read, such as [vvar], [vdso], etc.\n contents = None\n else:\n contents = None\n\n saved_map = MemoryMapSnapshot(\n curr_map.start,\n curr_map.end,\n curr_map.permissions,\n curr_map.size,\n curr_map.offset,\n curr_map.backing_file,\n contents,\n )\n self.maps.append(saved_map)\n\n @property\n def registers(self: Snapshot) -> SnapshotRegisters:\n \"\"\"Alias for regs.\"\"\"\n return self.regs\n\n @property\n def memory(self: Snapshot) -> 
SnapshotMemoryView:\n \"\"\"Returns a view of the memory of the thread.\"\"\"\n if self._memory is None:\n if self.level != \"base\":\n liblog.error(\"Inconsistent snapshot state: memory snapshot is not available.\")\n\n raise ValueError(\"Memory snapshot is not available at base level.\")\n\n return self._memory\n\n @property\n def mem(self: Snapshot) -> SnapshotMemoryView:\n \"\"\"Alias for memory.\"\"\"\n return self.memory\n\n @abstractmethod\n def diff(self: Snapshot, other: Snapshot) -> Diff:\n \"\"\"Creates a diff object between two snapshots.\"\"\"\n\n def save(self: Snapshot, file_path: str) -> None:\n \"\"\"Saves the snapshot object to a file.\"\"\"\n self._serialization_helper.save(self, file_path)\n\n def backtrace(self: Snapshot) -> list[int]:\n \"\"\"Returns the current backtrace of the thread.\"\"\"\n if self.level == \"base\":\n raise ValueError(\"Backtrace is not available at base level. Stack is not available.\")\n\n stack_unwinder = stack_unwinding_provider(self.arch)\n return stack_unwinder.unwind(self)\n\n def pprint_registers(self: Snapshot) -> None:\n \"\"\"Pretty prints the thread's registers.\"\"\"\n pprint_registers_util(self.regs, self.maps, self.regs._generic_regs)\n\n def pprint_regs(self: Snapshot) -> None:\n \"\"\"Alias for the `pprint_registers` method.\n\n Pretty prints the thread's registers.\n \"\"\"\n self.pprint_registers()\n\n def pprint_registers_all(self: Snapshot) -> None:\n \"\"\"Pretty prints all the thread's registers.\"\"\"\n pprint_registers_all_util(\n self.regs,\n self.maps,\n self.regs._generic_regs,\n self.regs._special_regs,\n self.regs._vec_fp_regs,\n )\n\n def pprint_regs_all(self: Snapshot) -> None:\n \"\"\"Alias for the `pprint_registers_all` method.\n\n Pretty prints all the thread's registers.\n \"\"\"\n self.pprint_registers_all()\n\n def pprint_backtrace(self: ThreadContext) -> None:\n \"\"\"Pretty prints the current backtrace of the thread.\"\"\"\n if self.level == \"base\":\n raise ValueError(\"Backtrace 
is not available at base level. Stack is not available.\")\n\n stack_unwinder = stack_unwinding_provider(self.arch)\n backtrace = stack_unwinder.unwind(self)\n pprint_backtrace_util(backtrace, self.maps, self._memory._symbol_ref)\n\n def pprint_maps(self: Snapshot) -> None:\n \"\"\"Prints the memory maps of the process.\"\"\"\n pprint_maps_util(self.maps)\n\n def pprint_memory(\n self: Snapshot,\n start: int,\n end: int,\n file: str = \"hybrid\",\n override_word_size: int | None = None,\n integer_mode: bool = False,\n ) -> None:\n \"\"\"Pretty print the memory diff.\n\n Args:\n start (int): The start address of the memory diff.\n end (int): The end address of the memory diff.\n file (str, optional): The backing file for relative / absolute addressing. Defaults to \"hybrid\".\n override_word_size (int, optional): The word size to use for the diff in place of the ISA word size. Defaults to None.\n integer_mode (bool, optional): If True, the diff will be printed as hex integers (system endianness applies). 
Defaults to False.\n \"\"\"\n if self.level == \"base\":\n raise ValueError(\"Memory is not available at base level.\")\n\n if start > end:\n tmp = start\n start = end\n end = tmp\n\n word_size = get_platform_gp_register_size(self.arch) if override_word_size is None else override_word_size\n\n # Resolve the address\n if file == \"absolute\":\n address_start = start\n elif file == \"hybrid\":\n try:\n # Try to resolve the address as absolute\n self.memory[start, 1, \"absolute\"]\n address_start = start\n except ValueError:\n # If the address is not in the maps, we use the binary file\n address_start = start + self.maps.filter(\"binary\")[0].start\n file = \"binary\"\n else:\n map_file = self.maps.filter(file)[0]\n address_start = start + map_file.base\n file = map_file.backing_file if file != \"binary\" else \"binary\"\n\n extract = self.memory[start:end, file]\n\n file_info = f\" (file: {file})\" if file not in (\"absolute\", \"hybrid\") else \"\"\n print(f\"Memory from {start:#x} to {end:#x}{file_info}:\")\n\n pprint_memory_util(\n address_start,\n extract,\n word_size,\n self.maps,\n integer_mode=integer_mode,\n )\n"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.mem","title":"mem property","text":"Alias for memory.
"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.memory","title":"memory property","text":"Returns a view of the memory of the thread.
"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.registers","title":"registers property","text":"Alias for regs.
"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot._save_memory_maps","title":"_save_memory_maps(debugger, writable_only)","text":"Saves memory maps of the process to the snapshot.
Source code inlibdebug/snapshots/snapshot.py def _save_memory_maps(self: Snapshot, debugger: InternalDebugger, writable_only: bool) -> None:\n \"\"\"Saves memory maps of the process to the snapshot.\"\"\"\n process_name = debugger._process_name\n full_process_path = debugger._process_full_path\n self.maps = MemoryMapSnapshotList([], process_name, full_process_path)\n\n for curr_map in debugger.maps:\n # Skip non-writable maps if requested\n # Always skip maps that fail on read\n if not writable_only or \"w\" in curr_map.permissions:\n try:\n contents = debugger.memory[curr_map.start : curr_map.end, \"absolute\"]\n except (ValueError, OSError, OverflowError):\n # There are some memory regions that cannot be read, such as [vvar], [vdso], etc.\n contents = None\n else:\n contents = None\n\n saved_map = MemoryMapSnapshot(\n curr_map.start,\n curr_map.end,\n curr_map.permissions,\n curr_map.size,\n curr_map.offset,\n curr_map.backing_file,\n contents,\n )\n self.maps.append(saved_map)\n"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.backtrace","title":"backtrace()","text":"Returns the current backtrace of the thread.
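`_save_memory_maps` above reads a map's contents only when `writable_only` is unset or the map is writable, and stores `None` for regions that fail to read (such as [vvar]). The gating condition can be sketched as:

```python
def should_capture(permissions: str, writable_only: bool) -> bool:
    """True when a map's contents should be read: either all maps are wanted,
    or the map is writable (sketch of the condition in _save_memory_maps)."""
    return not writable_only or "w" in permissions
```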
Source code inlibdebug/snapshots/snapshot.py def backtrace(self: Snapshot) -> list[int]:\n \"\"\"Returns the current backtrace of the thread.\"\"\"\n if self.level == \"base\":\n raise ValueError(\"Backtrace is not available at base level. Stack is not available.\")\n\n stack_unwinder = stack_unwinding_provider(self.arch)\n return stack_unwinder.unwind(self)\n"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.diff","title":"diff(other) abstractmethod","text":"Creates a diff object between two snapshots.
Source code inlibdebug/snapshots/snapshot.py @abstractmethod\ndef diff(self: Snapshot, other: Snapshot) -> Diff:\n \"\"\"Creates a diff object between two snapshots.\"\"\"\n"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.pprint_backtrace","title":"pprint_backtrace()","text":"Pretty prints the current backtrace of the thread.
Source code inlibdebug/snapshots/snapshot.py def pprint_backtrace(self: ThreadContext) -> None:\n \"\"\"Pretty prints the current backtrace of the thread.\"\"\"\n if self.level == \"base\":\n raise ValueError(\"Backtrace is not available at base level. Stack is not available.\")\n\n stack_unwinder = stack_unwinding_provider(self.arch)\n backtrace = stack_unwinder.unwind(self)\n pprint_backtrace_util(backtrace, self.maps, self._memory._symbol_ref)\n"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.pprint_maps","title":"pprint_maps()","text":"Prints the memory maps of the process.
Source code inlibdebug/snapshots/snapshot.py def pprint_maps(self: Snapshot) -> None:\n \"\"\"Prints the memory maps of the process.\"\"\"\n pprint_maps_util(self.maps)\n"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.pprint_memory","title":"pprint_memory(start, end, file='hybrid', override_word_size=None, integer_mode=False)","text":"Pretty print the memory diff.
Parameters:
start (int): The start address of the memory diff. Required.
end (int): The end address of the memory diff. Required.
file (str): The backing file for relative / absolute addressing. Defaults to \"hybrid\".
override_word_size (int): The word size to use for the diff in place of the ISA word size. Defaults to None.
integer_mode (bool): If True, the diff will be printed as hex integers (system endianness applies). Defaults to False.
False Source code in libdebug/snapshots/snapshot.py def pprint_memory(\n self: Snapshot,\n start: int,\n end: int,\n file: str = \"hybrid\",\n override_word_size: int | None = None,\n integer_mode: bool = False,\n) -> None:\n \"\"\"Pretty print the memory diff.\n\n Args:\n start (int): The start address of the memory diff.\n end (int): The end address of the memory diff.\n file (str, optional): The backing file for relative / absolute addressing. Defaults to \"hybrid\".\n override_word_size (int, optional): The word size to use for the diff in place of the ISA word size. Defaults to None.\n integer_mode (bool, optional): If True, the diff will be printed as hex integers (system endianness applies). Defaults to False.\n \"\"\"\n if self.level == \"base\":\n raise ValueError(\"Memory is not available at base level.\")\n\n if start > end:\n tmp = start\n start = end\n end = tmp\n\n word_size = get_platform_gp_register_size(self.arch) if override_word_size is None else override_word_size\n\n # Resolve the address\n if file == \"absolute\":\n address_start = start\n elif file == \"hybrid\":\n try:\n # Try to resolve the address as absolute\n self.memory[start, 1, \"absolute\"]\n address_start = start\n except ValueError:\n # If the address is not in the maps, we use the binary file\n address_start = start + self.maps.filter(\"binary\")[0].start\n file = \"binary\"\n else:\n map_file = self.maps.filter(file)[0]\n address_start = start + map_file.base\n file = map_file.backing_file if file != \"binary\" else \"binary\"\n\n extract = self.memory[start:end, file]\n\n file_info = f\" (file: {file})\" if file not in (\"absolute\", \"hybrid\") else \"\"\n print(f\"Memory from {start:#x} to {end:#x}{file_info}:\")\n\n pprint_memory_util(\n address_start,\n extract,\n word_size,\n self.maps,\n integer_mode=integer_mode,\n )\n"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.pprint_registers","title":"pprint_registers()","text":"Pretty 
prints the thread's registers.
Source code inlibdebug/snapshots/snapshot.py def pprint_registers(self: Snapshot) -> None:\n \"\"\"Pretty prints the thread's registers.\"\"\"\n pprint_registers_util(self.regs, self.maps, self.regs._generic_regs)\n"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.pprint_registers_all","title":"pprint_registers_all()","text":"Pretty prints all the thread's registers.
Source code inlibdebug/snapshots/snapshot.py def pprint_registers_all(self: Snapshot) -> None:\n \"\"\"Pretty prints all the thread's registers.\"\"\"\n pprint_registers_all_util(\n self.regs,\n self.maps,\n self.regs._generic_regs,\n self.regs._special_regs,\n self.regs._vec_fp_regs,\n )\n"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.pprint_regs","title":"pprint_regs()","text":"Alias for the pprint_registers method.
Pretty prints the thread's registers.
Source code inlibdebug/snapshots/snapshot.py def pprint_regs(self: Snapshot) -> None:\n \"\"\"Alias for the `pprint_registers` method.\n\n Pretty prints the thread's registers.\n \"\"\"\n self.pprint_registers()\n"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.pprint_regs_all","title":"pprint_regs_all()","text":"Alias for the pprint_registers_all method.
Pretty prints all the thread's registers.
Source code inlibdebug/snapshots/snapshot.py def pprint_regs_all(self: Snapshot) -> None:\n \"\"\"Alias for the `pprint_registers_all` method.\n\n Pretty prints all the thread's registers.\n \"\"\"\n self.pprint_registers_all()\n"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.save","title":"save(file_path)","text":"Saves the snapshot object to a file.
Source code inlibdebug/snapshots/snapshot.py def save(self: Snapshot, file_path: str) -> None:\n \"\"\"Saves the snapshot object to a file.\"\"\"\n self._serialization_helper.save(self, file_path)\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff/","title":"libdebug.snapshots.memory.memory_map_diff","text":""},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff/#libdebug.snapshots.memory.memory_map_diff.MemoryMapDiff","title":"MemoryMapDiff dataclass","text":"This object represents a diff between memory contents in a memory map.
Source code inlibdebug/snapshots/memory/memory_map_diff.py @dataclass\nclass MemoryMapDiff:\n \"\"\"This object represents a diff between memory contents in a memory map.\"\"\"\n\n old_map_state: MemoryMapSnapshot\n \"\"\"The old state of the memory map.\"\"\"\n\n new_map_state: MemoryMapSnapshot\n \"\"\"The new state of the memory map.\"\"\"\n\n has_changed: bool\n \"\"\"Whether the memory map has changed.\"\"\"\n\n _cached_diffs: list[slice] = None\n \"\"\"Cached diff slices.\"\"\"\n\n @property\n def content_diff(self: MemoryMapDiff) -> list[slice]:\n \"\"\"Resolve the content diffs of a memory map between two snapshots.\n\n Returns:\n list[slice]: The list of slices representing the relative positions of diverging content.\n \"\"\"\n # If the diff has already been computed, return it\n if self._cached_diffs is not None:\n return self._cached_diffs\n\n if self.old_map_state is None:\n raise ValueError(\"Cannot resolve content diff for a new memory map.\")\n if self.new_map_state is None:\n raise ValueError(\"Cannot resolve content diff for a removed memory map.\")\n\n if self.old_map_state.content is None or self.new_map_state.content is None:\n raise ValueError(\"Memory contents not available for this memory page.\")\n\n old_content = self.old_map_state.content\n new_content = self.new_map_state.content\n\n work_len = min(len(old_content), len(new_content))\n\n found_slices = []\n\n # Find all the slices\n cursor = 0\n while cursor < work_len:\n # Find the first differing byte of the sequence\n if old_content[cursor] == new_content[cursor]:\n cursor += 1\n continue\n\n start = cursor\n # Find the last non-zero byte of the sequence\n while cursor < work_len and old_content[cursor] != new_content[cursor]:\n cursor += 1\n\n end = cursor\n\n found_slices.append(slice(start, end))\n\n # Cache the diff slices\n self._cached_diffs = found_slices\n\n return 
found_slices\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff/#libdebug.snapshots.memory.memory_map_diff.MemoryMapDiff._cached_diffs","title":"_cached_diffs = None class-attribute instance-attribute","text":"Cached diff slices.
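The `content_diff` scan above walks both byte buffers once and records each maximal run of differing bytes as a slice, scanning only the length common to both buffers. A standalone sketch of the same scan (the helper name is illustrative):

```python
def diff_slices(old: bytes, new: bytes) -> list:
    """Return slices covering each maximal run of differing bytes,
    scanning only the length common to both buffers."""
    work_len = min(len(old), len(new))
    found, cursor = [], 0
    while cursor < work_len:
        # Skip matching bytes
        if old[cursor] == new[cursor]:
            cursor += 1
            continue
        # Extend over the run of differing bytes
        start = cursor
        while cursor < work_len and old[cursor] != new[cursor]:
            cursor += 1
        found.append(slice(start, cursor))
    return found
```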
"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff/#libdebug.snapshots.memory.memory_map_diff.MemoryMapDiff.content_diff","title":"content_diff property","text":"Resolve the content diffs of a memory map between two snapshots.
Returns:
list[slice]: The list of slices representing the relative positions of diverging content.
"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff/#libdebug.snapshots.memory.memory_map_diff.MemoryMapDiff.has_changed","title":"has_changed instance-attribute","text":"Whether the memory map has changed.
"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff/#libdebug.snapshots.memory.memory_map_diff.MemoryMapDiff.new_map_state","title":"new_map_state instance-attribute","text":"The new state of the memory map.
"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff/#libdebug.snapshots.memory.memory_map_diff.MemoryMapDiff.old_map_state","title":"old_map_state instance-attribute","text":"The old state of the memory map.
"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff_list/","title":"libdebug.snapshots.memory.memory_map_diff_list","text":""},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff_list/#libdebug.snapshots.memory.memory_map_diff_list.MemoryMapDiffList","title":"MemoryMapDiffList","text":" Bases: list
A list of memory map snapshot diffs from the target process.
Source code inlibdebug/snapshots/memory/memory_map_diff_list.py class MemoryMapDiffList(list):\n \"\"\"A list of memory map snapshot diffs from the target process.\"\"\"\n\n def __init__(\n self: MemoryMapDiffList,\n memory_maps: list[MemoryMapDiff],\n process_name: str,\n full_process_path: str,\n ) -> None:\n \"\"\"Initializes the MemoryMapSnapshotList.\"\"\"\n super().__init__(memory_maps)\n self._process_full_path = full_process_path\n self._process_name = process_name\n\n def _search_by_address(self: MemoryMapDiffList, address: int) -> list[MemoryMapDiff]:\n \"\"\"Searches for a memory map diff by address.\n\n Args:\n address (int): The address to search for.\n\n Returns:\n list[MemoryMapDiff]: The memory map diff matching the specified address.\n \"\"\"\n for vmap_diff in self:\n if vmap_diff.old_map_state.start <= address < vmap_diff.new_map_state.end:\n return [vmap_diff]\n return []\n\n def _search_by_backing_file(self: MemoryMapDiffList, backing_file: str) -> list[MemoryMapDiff]:\n \"\"\"Searches for a memory map diff by backing file.\n\n Args:\n backing_file (str): The backing file to search for.\n\n Returns:\n list[MemoryMapDiff]: The memory map diff matching the specified backing file.\n \"\"\"\n if backing_file in [\"binary\", self._process_name]:\n backing_file = self._process_full_path\n\n filtered_maps = []\n unique_files = set()\n\n for vmap_diff in self:\n compare_with_old = vmap_diff.old_map_state is not None\n compare_with_new = vmap_diff.new_map_state is not None\n\n if compare_with_old and backing_file in vmap_diff.old_map_state.backing_file:\n filtered_maps.append(vmap_diff)\n unique_files.add(vmap_diff.old_map_state.backing_file)\n elif compare_with_new and backing_file in vmap_diff.new_map_state.backing_file:\n filtered_maps.append(vmap_diff)\n unique_files.add(vmap_diff.new_map_state.backing_file)\n\n if len(unique_files) > 1:\n liblog.warning(\n f\"The substring {backing_file} is present in multiple, different backing files. 
The address resolution cannot be accurate. The matching backing files are: {', '.join(unique_files)}.\",\n )\n\n return filtered_maps\n\n def filter(self: MemoryMapDiffList, value: int | str) -> MemoryMapDiffList[MemoryMapDiff]:\n \"\"\"Filters the memory maps according to the specified value.\n\n If the value is an integer, it is treated as an address.\n If the value is a string, it is treated as a backing file.\n\n Args:\n value (int | str): The value to search for.\n\n Returns:\n MemoryMapDiffList[MemoryMapDiff]: The memory maps matching the specified value.\n \"\"\"\n if isinstance(value, int):\n filtered_maps = self._search_by_address(value)\n elif isinstance(value, str):\n filtered_maps = self._search_by_backing_file(value)\n else:\n raise TypeError(\"The value must be an integer or a string.\")\n\n return MemoryMapDiffList(filtered_maps, self._process_name, self._process_full_path)\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff_list/#libdebug.snapshots.memory.memory_map_diff_list.MemoryMapDiffList.__init__","title":"__init__(memory_maps, process_name, full_process_path)","text":"Initializes the MemoryMapSnapshotList.
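The substring matching in `_search_by_backing_file` above accepts either map state's backing file and normalizes \"binary\" or the process name to the full process path. A condensed sketch of the per-map predicate (the signature is illustrative, not the libdebug API):

```python
def matches_backing_file(query: str, old_file, new_file,
                         process_name: str, process_path: str) -> bool:
    """Substring-match a query against either map state's backing file;
    'binary' and the process name are normalized to the full process path."""
    if query in ("binary", process_name):
        query = process_path
    # A diff may lack one of the two states (added or removed map)
    if old_file is not None and query in old_file:
        return True
    return new_file is not None and query in new_file
```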
Source code inlibdebug/snapshots/memory/memory_map_diff_list.py def __init__(\n self: MemoryMapDiffList,\n memory_maps: list[MemoryMapDiff],\n process_name: str,\n full_process_path: str,\n) -> None:\n \"\"\"Initializes the MemoryMapSnapshotList.\"\"\"\n super().__init__(memory_maps)\n self._process_full_path = full_process_path\n self._process_name = process_name\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff_list/#libdebug.snapshots.memory.memory_map_diff_list.MemoryMapDiffList._search_by_address","title":"_search_by_address(address)","text":"Searches for a memory map diff by address.
Parameters:
address (int): The address to search for. Required.
Returns:
list[MemoryMapDiff]: The memory map diff matching the specified address.
Source code inlibdebug/snapshots/memory/memory_map_diff_list.py def _search_by_address(self: MemoryMapDiffList, address: int) -> list[MemoryMapDiff]:\n \"\"\"Searches for a memory map diff by address.\n\n Args:\n address (int): The address to search for.\n\n Returns:\n list[MemoryMapDiff]: The memory map diff matching the specified address.\n \"\"\"\n for vmap_diff in self:\n if vmap_diff.old_map_state.start <= address < vmap_diff.new_map_state.end:\n return [vmap_diff]\n return []\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff_list/#libdebug.snapshots.memory.memory_map_diff_list.MemoryMapDiffList._search_by_backing_file","title":"_search_by_backing_file(backing_file)","text":"Searches for a memory map diff by backing file.
Parameters:
backing_file (str): The backing file to search for. Required.
Returns:
list[MemoryMapDiff]: The memory map diff matching the specified backing file.
Source code inlibdebug/snapshots/memory/memory_map_diff_list.py def _search_by_backing_file(self: MemoryMapDiffList, backing_file: str) -> list[MemoryMapDiff]:\n \"\"\"Searches for a memory map diff by backing file.\n\n Args:\n backing_file (str): The backing file to search for.\n\n Returns:\n list[MemoryMapDiff]: The memory map diff matching the specified backing file.\n \"\"\"\n if backing_file in [\"binary\", self._process_name]:\n backing_file = self._process_full_path\n\n filtered_maps = []\n unique_files = set()\n\n for vmap_diff in self:\n compare_with_old = vmap_diff.old_map_state is not None\n compare_with_new = vmap_diff.new_map_state is not None\n\n if compare_with_old and backing_file in vmap_diff.old_map_state.backing_file:\n filtered_maps.append(vmap_diff)\n unique_files.add(vmap_diff.old_map_state.backing_file)\n elif compare_with_new and backing_file in vmap_diff.new_map_state.backing_file:\n filtered_maps.append(vmap_diff)\n unique_files.add(vmap_diff.new_map_state.backing_file)\n\n if len(unique_files) > 1:\n liblog.warning(\n f\"The substring {backing_file} is present in multiple, different backing files. The address resolution cannot be accurate. The matching backing files are: {', '.join(unique_files)}.\",\n )\n\n return filtered_maps\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff_list/#libdebug.snapshots.memory.memory_map_diff_list.MemoryMapDiffList.filter","title":"filter(value)","text":"Filters the memory maps according to the specified value.
If the value is an integer, it is treated as an address. If the value is a string, it is treated as a backing file.
Parameters:
Name Type Description Defaultvalue int | str The value to search for.
requiredReturns:
Type DescriptionMemoryMapDiffList[MemoryMapDiff] MemoryMapDiffList[MemoryMapDiff]: The memory maps matching the specified value.
Source code inlibdebug/snapshots/memory/memory_map_diff_list.py def filter(self: MemoryMapDiffList, value: int | str) -> MemoryMapDiffList[MemoryMapDiff]:\n \"\"\"Filters the memory maps according to the specified value.\n\n If the value is an integer, it is treated as an address.\n If the value is a string, it is treated as a backing file.\n\n Args:\n value (int | str): The value to search for.\n\n Returns:\n MemoryMapDiffList[MemoryMapDiff]: The memory maps matching the specified value.\n \"\"\"\n if isinstance(value, int):\n filtered_maps = self._search_by_address(value)\n elif isinstance(value, str):\n filtered_maps = self._search_by_backing_file(value)\n else:\n raise TypeError(\"The value must be an integer or a string.\")\n\n return MemoryMapDiffList(filtered_maps, self._process_name, self._process_full_path)\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot/","title":"libdebug.snapshots.memory.memory_map_snapshot","text":""},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot/#libdebug.snapshots.memory.memory_map_snapshot.MemoryMapSnapshot","title":"MemoryMapSnapshot dataclass","text":" Bases: MemoryMap
A snapshot of the memory map of the target process.
Attributes:
Name Type Descriptionstart int The start address of the memory map. You can access it also with the 'base' attribute.
end int The end address of the memory map.
permissions str The permissions of the memory map.
size int The size of the memory map.
offset int The relative offset of the memory map.
backing_file str The backing file of the memory map, or the symbolic name of the memory map.
content bytes The content of the memory map, used for snapshotted pages.
Source code in libdebug/snapshots/memory/memory_map_snapshot.py @dataclass\nclass MemoryMapSnapshot(MemoryMap):\n \"\"\"A snapshot of the memory map of the target process.\n\n Attributes:\n start (int): The start address of the memory map. You can access it also with the 'base' attribute.\n end (int): The end address of the memory map.\n permissions (str): The permissions of the memory map.\n size (int): The size of the memory map.\n offset (int): The relative offset of the memory map.\n backing_file (str): The backing file of the memory map, or the symbolic name of the memory map.\n content (bytes): The content of the memory map, used for snapshotted pages.\n \"\"\"\n\n content: bytes = None\n \"\"\"The content of the memory map, used for snapshotted pages.\"\"\"\n\n def is_same_identity(self: MemoryMapSnapshot, other: MemoryMap) -> bool:\n \"\"\"Check if the memory map corresponds to another memory map.\"\"\"\n return self.start == other.start and self.backing_file == other.backing_file\n\n def __repr__(self: MemoryMapSnapshot) -> str:\n \"\"\"Return the string representation of the memory map.\"\"\"\n str_repr = super().__repr__()\n\n if self.content is not None:\n str_repr = str_repr[:-1] + \", content=...)\"\n\n return str_repr\n\n def __eq__(self, value: object) -> bool:\n \"\"\"Check if this MemoryMap is equal to another object.\n\n Args:\n value (object): The object to compare to.\n\n Returns:\n bool: True if the objects are equal, False otherwise.\n \"\"\"\n if not isinstance(value, MemoryMap):\n return False\n\n is_snapshot_map = isinstance(value, MemoryMapSnapshot)\n\n is_content_map_1 = self.content is not None\n is_content_map_2 = is_snapshot_map and value.content is not None\n\n if is_content_map_1 != is_content_map_2:\n liblog.warning(\"Comparing a memory map snapshot with content with a memory map without content. Equality will not take into account the content.\") \n\n # Check if the content is available and if it is the same\n should_compare_content = is_snapshot_map and is_content_map_1 and is_content_map_2\n same_content = not should_compare_content or self.content == value.content\n\n return (\n self.start == value.start\n and self.end == value.end\n and self.permissions == value.permissions\n and self.size == value.size\n and self.offset == value.offset\n and self.backing_file == value.backing_file\n and same_content\n )\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot/#libdebug.snapshots.memory.memory_map_snapshot.MemoryMapSnapshot.content","title":"content = None class-attribute instance-attribute","text":"The content of the memory map, used for snapshotted pages.
"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot/#libdebug.snapshots.memory.memory_map_snapshot.MemoryMapSnapshot.__eq__","title":"__eq__(value)","text":"Check if this MemoryMap is equal to another object.
Parameters:
Name Type Description Defaultvalue object The object to compare to.
requiredReturns:
Name Type Descriptionbool bool True if the objects are equal, False otherwise.
Source code inlibdebug/snapshots/memory/memory_map_snapshot.py def __eq__(self, value: object) -> bool:\n \"\"\"Check if this MemoryMap is equal to another object.\n\n Args:\n value (object): The object to compare to.\n\n Returns:\n bool: True if the objects are equal, False otherwise.\n \"\"\"\n if not isinstance(value, MemoryMap):\n return False\n\n is_snapshot_map = isinstance(value, MemoryMapSnapshot)\n\n is_content_map_1 = self.content is not None\n is_content_map_2 = is_snapshot_map and value.content is not None\n\n if is_content_map_1 != is_content_map_2:\n liblog.warning(\"Comparing a memory map snapshot with content with a memory map without content. Equality will not take into account the content.\") \n\n # Check if the content is available and if it is the same\n should_compare_content = is_snapshot_map and is_content_map_1 and is_content_map_2\n same_content = not should_compare_content or self.content == value.content\n\n return (\n self.start == value.start\n and self.end == value.end\n and self.permissions == value.permissions\n and self.size == value.size\n and self.offset == value.offset\n and self.backing_file == value.backing_file\n and same_content\n )\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot/#libdebug.snapshots.memory.memory_map_snapshot.MemoryMapSnapshot.__repr__","title":"__repr__()","text":"Return the string representation of the memory map.
Source code inlibdebug/snapshots/memory/memory_map_snapshot.py def __repr__(self: MemoryMapSnapshot) -> str:\n \"\"\"Return the string representation of the memory map.\"\"\"\n str_repr = super().__repr__()\n\n if self.content is not None:\n str_repr = str_repr[:-1] + \", content=...)\"\n\n return str_repr\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot/#libdebug.snapshots.memory.memory_map_snapshot.MemoryMapSnapshot.is_same_identity","title":"is_same_identity(other)","text":"Check if the memory map corresponds to another memory map.
Source code inlibdebug/snapshots/memory/memory_map_snapshot.py def is_same_identity(self: MemoryMapSnapshot, other: MemoryMap) -> bool:\n \"\"\"Check if the memory map corresponds to another memory map.\"\"\"\n return self.start == other.start and self.backing_file == other.backing_file\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot_list/","title":"libdebug.snapshots.memory.memory_map_snapshot_list","text":""},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot_list/#libdebug.snapshots.memory.memory_map_snapshot_list.MemoryMapSnapshotList","title":"MemoryMapSnapshotList","text":" Bases: list[MemoryMapSnapshot]
A list of memory map snapshots from the target process.
Source code in libdebug/snapshots/memory/memory_map_snapshot_list.py class MemoryMapSnapshotList(list[MemoryMapSnapshot]):\n \"\"\"A list of memory map snapshot from the target process.\"\"\"\n\n def __init__(\n self: MemoryMapSnapshotList,\n memory_maps: list[MemoryMapSnapshot],\n process_name: str,\n full_process_path: str,\n ) -> None:\n \"\"\"Initializes the MemoryMapSnapshotList.\"\"\"\n super().__init__(memory_maps)\n self._process_full_path = full_process_path\n self._process_name = process_name\n\n def _search_by_address(self: MemoryMapSnapshotList, address: int) -> list[MemoryMapSnapshot]:\n \"\"\"Searches for a memory map by address.\n\n Args:\n address (int): The address to search for.\n\n Returns:\n list[MemoryMapSnapshot]: The memory map matching the specified address.\n \"\"\"\n for vmap in self:\n if vmap.start <= address < vmap.end:\n return [vmap]\n return []\n\n def _search_by_backing_file(self: MemoryMapSnapshotList, backing_file: str) -> list[MemoryMapSnapshot]:\n \"\"\"Searches for a memory map by backing file.\n\n Args:\n backing_file (str): The backing file to search for.\n\n Returns:\n list[MemoryMapSnapshot]: The memory map matching the specified backing file.\n \"\"\"\n if backing_file in [\"binary\", self._process_name]:\n backing_file = self._process_full_path\n\n filtered_maps = []\n unique_files = set()\n\n for vmap in self:\n if backing_file in vmap.backing_file:\n filtered_maps.append(vmap)\n unique_files.add(vmap.backing_file)\n\n if len(unique_files) > 1:\n liblog.warning(\n f\"The substring {backing_file} is present in multiple, different backing files. The address resolution cannot be accurate. The matching backing files are: {', '.join(unique_files)}.\",\n )\n\n return filtered_maps\n\n def filter(self: MemoryMapSnapshotList, value: int | str) -> MemoryMapSnapshotList[MemoryMapSnapshot]:\n \"\"\"Filters the memory maps according to the specified value.\n\n If the value is an integer, it is treated as an address.\n If the value is a string, it is treated as a backing file.\n\n Args:\n value (int | str): The value to search for.\n\n Returns:\n MemoryMapSnapshotList[MemoryMapSnapshot]: The memory map snapshots matching the specified value.\n \"\"\"\n if isinstance(value, int):\n filtered_maps = self._search_by_address(value)\n elif isinstance(value, str):\n filtered_maps = self._search_by_backing_file(value)\n else:\n raise TypeError(\"The value must be an integer or a string.\")\n\n return MemoryMapSnapshotList(filtered_maps, self._process_name, self._process_full_path)\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot_list/#libdebug.snapshots.memory.memory_map_snapshot_list.MemoryMapSnapshotList.__init__","title":"__init__(memory_maps, process_name, full_process_path)","text":"Initializes the MemoryMapSnapshotList.
Source code inlibdebug/snapshots/memory/memory_map_snapshot_list.py def __init__(\n self: MemoryMapSnapshotList,\n memory_maps: list[MemoryMapSnapshot],\n process_name: str,\n full_process_path: str,\n) -> None:\n \"\"\"Initializes the MemoryMapSnapshotList.\"\"\"\n super().__init__(memory_maps)\n self._process_full_path = full_process_path\n self._process_name = process_name\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot_list/#libdebug.snapshots.memory.memory_map_snapshot_list.MemoryMapSnapshotList._search_by_address","title":"_search_by_address(address)","text":"Searches for a memory map by address.
Parameters:
Name Type Description Defaultaddress int The address to search for.
requiredReturns:
Type Descriptionlist[MemoryMapSnapshot] list[MemoryMapSnapshot]: The memory map matching the specified address.
Source code inlibdebug/snapshots/memory/memory_map_snapshot_list.py def _search_by_address(self: MemoryMapSnapshotList, address: int) -> list[MemoryMapSnapshot]:\n \"\"\"Searches for a memory map by address.\n\n Args:\n address (int): The address to search for.\n\n Returns:\n list[MemoryMapSnapshot]: The memory map matching the specified address.\n \"\"\"\n for vmap in self:\n if vmap.start <= address < vmap.end:\n return [vmap]\n return []\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot_list/#libdebug.snapshots.memory.memory_map_snapshot_list.MemoryMapSnapshotList._search_by_backing_file","title":"_search_by_backing_file(backing_file)","text":"Searches for a memory map by backing file.
Parameters:
Name Type Description Defaultbacking_file str The backing file to search for.
requiredReturns:
Type Descriptionlist[MemoryMapSnapshot] list[MemoryMapSnapshot]: The memory map matching the specified backing file.
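Backing-file search is a substring match, and when the substring matches more than one distinct file the method warns that address resolution may be ambiguous. A hedged standalone sketch of that behavior, using plain tuples as hypothetical stand-ins for maps:

```python
import warnings

def search_by_backing_file(maps, backing_file):
    # maps: list of (start, backing_file) pairs -- hypothetical stand-ins.
    filtered, unique_files = [], set()
    for start, bf in maps:
        if backing_file in bf:  # substring match, as in the method above
            filtered.append((start, bf))
            unique_files.add(bf)
    if len(unique_files) > 1:
        # Ambiguous substring: address resolution against it cannot be accurate.
        warnings.warn(f"'{backing_file}' matches multiple backing files: {sorted(unique_files)}")
    return filtered

maps = [(0x1000, "/usr/lib/libc.so.6"), (0x5000, "/usr/lib/libcrypto.so.3")]
print(len(search_by_backing_file(maps, "libc")))  # 2 -> triggers the warning
```

"libc" is a prefix of both "libc.so.6" and "libcrypto.so.3", which is exactly the ambiguity the warning guards against.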
Source code inlibdebug/snapshots/memory/memory_map_snapshot_list.py def _search_by_backing_file(self: MemoryMapSnapshotList, backing_file: str) -> list[MemoryMapSnapshot]:\n \"\"\"Searches for a memory map by backing file.\n\n Args:\n backing_file (str): The backing file to search for.\n\n Returns:\n list[MemoryMapSnapshot]: The memory map matching the specified backing file.\n \"\"\"\n if backing_file in [\"binary\", self._process_name]:\n backing_file = self._process_full_path\n\n filtered_maps = []\n unique_files = set()\n\n for vmap in self:\n if backing_file in vmap.backing_file:\n filtered_maps.append(vmap)\n unique_files.add(vmap.backing_file)\n\n if len(unique_files) > 1:\n liblog.warning(\n f\"The substring {backing_file} is present in multiple, different backing files. The address resolution cannot be accurate. The matching backing files are: {', '.join(unique_files)}.\",\n )\n\n return filtered_maps\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot_list/#libdebug.snapshots.memory.memory_map_snapshot_list.MemoryMapSnapshotList.filter","title":"filter(value)","text":"Filters the memory maps according to the specified value.
If the value is an integer, it is treated as an address. If the value is a string, it is treated as a backing file.
Parameters:
Name Type Description Defaultvalue int | str The value to search for.
requiredReturns:
Type DescriptionMemoryMapSnapshotList[MemoryMapSnapshot] MemoryMapSnapshotList[MemoryMapSnapshot]: The memory map snapshots matching the specified value.
Source code inlibdebug/snapshots/memory/memory_map_snapshot_list.py def filter(self: MemoryMapSnapshotList, value: int | str) -> MemoryMapSnapshotList[MemoryMapSnapshot]:\n \"\"\"Filters the memory maps according to the specified value.\n\n If the value is an integer, it is treated as an address.\n If the value is a string, it is treated as a backing file.\n\n Args:\n value (int | str): The value to search for.\n\n Returns:\n MemoryMapSnapshotList[MemoryMapSnapshot]: The memory map snapshots matching the specified value.\n \"\"\"\n if isinstance(value, int):\n filtered_maps = self._search_by_address(value)\n elif isinstance(value, str):\n filtered_maps = self._search_by_backing_file(value)\n else:\n raise TypeError(\"The value must be an integer or a string.\")\n\n return MemoryMapSnapshotList(filtered_maps, self._process_name, self._process_full_path)\n"},{"location":"from_pydoc/generated/snapshots/memory/snapshot_memory_view/","title":"libdebug.snapshots.memory.snapshot_memory_view","text":""},{"location":"from_pydoc/generated/snapshots/memory/snapshot_memory_view/#libdebug.snapshots.memory.snapshot_memory_view.SnapshotMemoryView","title":"SnapshotMemoryView","text":" Bases: AbstractMemoryView
Memory view for a thread / process snapshot.
Source code in libdebug/snapshots/memory/snapshot_memory_view.py class SnapshotMemoryView(AbstractMemoryView):\n \"\"\"Memory view for a thread / process snapshot.\"\"\"\n\n def __init__(self: SnapshotMemoryView, snapshot: ThreadSnapshot | ProcessSnapshot, symbols: SymbolList) -> None:\n \"\"\"Initializes the MemoryView.\"\"\"\n self._snap_ref = snapshot\n self._symbol_ref = symbols\n\n def read(self: SnapshotMemoryView, address: int, size: int) -> bytes:\n \"\"\"Reads memory from the target snapshot.\n\n Args:\n address (int): The address to read from.\n size (int): The number of bytes to read.\n\n Returns:\n bytes: The read bytes.\n \"\"\"\n snapshot_maps = self._snap_ref.maps\n\n start_index = 0\n start_map = None\n has_failed = True\n\n # Find the start map index\n while start_index < len(snapshot_maps):\n start_map = snapshot_maps[start_index]\n\n if address < start_map.start:\n break\n elif start_map.start <= address < start_map.end:\n has_failed = False\n break\n start_index += 1\n\n if has_failed:\n raise ValueError(\"No mapped memory at the specified start address.\")\n\n end_index = start_index\n end_address = address + size - 1\n end_map = None\n has_failed = True\n\n # Find the end map index\n while end_index < len(snapshot_maps):\n end_map = snapshot_maps[end_index]\n\n if end_address < end_map.start:\n break\n elif end_map.start <= end_address < end_map.end:\n has_failed = False\n break\n end_index += 1\n\n if has_failed:\n raise ValueError(\"No mapped memory at the specified address.\")\n\n target_maps = self._snap_ref.maps[start_index:end_index + 1]\n\n if not target_maps:\n raise ValueError(\"No mapped memory at the specified address.\")\n\n for target_map in target_maps:\n # The memory of the target map cannot be retrieved\n if target_map.content is None:\n error = \"One or more of the memory maps involved was not snapshotted\"\n\n if self._snap_ref.level == \"base\":\n error += \", snapshot level is base, no memory contents were saved.\"\n elif self._snap_ref.level == \"writable\" and \"w\" not in target_map.permissions:\n error += \", snapshot level is writable but the target page corresponds to non-writable memory.\"\n else:\n error += \" (it could be a priviledged memory map e.g. [vvar]).\"\n\n raise ValueError(error)\n\n start_offset = address - target_maps[0].start\n\n if len(target_maps) == 1:\n end_offset = start_offset + size\n return target_maps[0].content[start_offset:end_offset]\n else:\n data = target_maps[0].content[start_offset:]\n\n for target_map in target_maps[1:-1]:\n data += target_map.content\n\n end_offset = size - len(data)\n data += target_maps[-1].content[:end_offset]\n\n return data\n\n def write(self: SnapshotMemoryView, address: int, data: bytes) -> None:\n \"\"\"Writes memory to the target snapshot.\n\n Args:\n address (int): The address to write to.\n data (bytes): The data to write.\n \"\"\"\n raise NotImplementedError(\"Snapshot memory is read-only, duh.\")\n\n def find(\n self: SnapshotMemoryView,\n value: bytes | str | int,\n file: str = \"all\",\n start: int | None = None,\n end: int | None = None,\n ) -> list[int]:\n \"\"\"Searches for the given value in the saved memory maps of the snapshot.\n\n The start and end addresses can be used to limit the search to a specific range.\n If not specified, the search will be performed on the whole memory map.\n\n Args:\n value (bytes | str | int): The value to search for.\n file (str): The backing file to search the value in. Defaults to \"all\", which means all memory.\n start (int | None): The start address of the search. Defaults to None.\n end (int | None): The end address of the search. Defaults to None.\n\n Returns:\n list[int]: A list of offset where the value was found.\n \"\"\"\n if self._snap_ref.level == \"base\":\n raise ValueError(\"Memory snapshot is not available at base level.\")\n\n return super().find(value, file, start, end)\n\n def resolve_symbol(self: SnapshotMemoryView, symbol: str, file: str) -> Symbol:\n \"\"\"Resolve a symbol from the symbol list.\n\n Args:\n symbol (str): The symbol to resolve.\n file (str): The backing file to resolve the address in.\n\n Returns:\n Symbol: The resolved address.\n \"\"\"\n offset = 0\n\n if \"+\" in symbol:\n symbol, offset = symbol.split(\"+\")\n offset = int(offset, 16)\n\n results = self._symbol_ref.filter(symbol)\n\n # Get the first result that matches the backing file\n results = [result for result in results if file in result.backing_file]\n\n if len(results) == 0:\n raise ValueError(f\"Symbol {symbol} not found in snaphot memory.\")\n\n page_base = self._snap_ref.maps.filter(results[0].backing_file)[0].start\n\n return page_base + results[0].start + offset\n\n def resolve_address(\n self: SnapshotMemoryView,\n address: int,\n backing_file: str,\n skip_absolute_address_validation: bool = False,\n ) -> int:\n \"\"\"Normalizes and validates the specified address.\n\n Args:\n address (int): The address to normalize and validate.\n backing_file (str): The backing file to resolve the address in.\n skip_absolute_address_validation (bool, optional): Whether to skip bounds checking for absolute addresses. Defaults to False.\n\n Returns:\n int: The normalized and validated address.\n\n Raises:\n ValueError: If the substring `backing_file` is present in multiple backing files.\n \"\"\"\n if skip_absolute_address_validation and backing_file == \"absolute\":\n return address\n\n maps = self._snap_ref.maps\n\n if backing_file in [\"hybrid\", \"absolute\"]:\n if maps.filter(address):\n # If the address is absolute, we can return it directly\n return address\n elif backing_file == \"absolute\":\n # The address is explicitly an absolute address but we did not find it\n raise ValueError(\n \"The specified absolute address does not exist. Check the address or specify a backing file.\",\n )\n else:\n # If the address was not found and the backing file is not \"absolute\",\n # we have to assume it is in the main map\n backing_file = self._snap_ref._process_full_path\n liblog.warning(\n f\"No backing file specified and no corresponding absolute address found for {hex(address)}. Assuming {backing_file}.\",\n )\n\n filtered_maps = maps.filter(backing_file)\n\n return normalize_and_validate_address(address, filtered_maps)\n\n @property\n def maps(self: SnapshotMemoryView) -> MemoryMapSnapshotList:\n \"\"\"Returns a list of memory maps in the target process.\n\n Returns:\n MemoryMapList: The memory maps.\n \"\"\"\n return self._snap_ref.maps\n"},{"location":"from_pydoc/generated/snapshots/memory/snapshot_memory_view/#libdebug.snapshots.memory.snapshot_memory_view.SnapshotMemoryView.maps","title":"maps property","text":"Returns a list of memory maps in the target process.
Returns:
Name Type DescriptionMemoryMapList MemoryMapSnapshotList The memory maps.
"},{"location":"from_pydoc/generated/snapshots/memory/snapshot_memory_view/#libdebug.snapshots.memory.snapshot_memory_view.SnapshotMemoryView.__init__","title":"__init__(snapshot, symbols)","text":"Initializes the MemoryView.
Source code inlibdebug/snapshots/memory/snapshot_memory_view.py def __init__(self: SnapshotMemoryView, snapshot: ThreadSnapshot | ProcessSnapshot, symbols: SymbolList) -> None:\n \"\"\"Initializes the MemoryView.\"\"\"\n self._snap_ref = snapshot\n self._symbol_ref = symbols\n"},{"location":"from_pydoc/generated/snapshots/memory/snapshot_memory_view/#libdebug.snapshots.memory.snapshot_memory_view.SnapshotMemoryView.find","title":"find(value, file='all', start=None, end=None)","text":"Searches for the given value in the saved memory maps of the snapshot.
The start and end addresses can be used to limit the search to a specific range. If not specified, the search will be performed on the whole memory map.
Parameters:
Name Type Description Defaultvalue bytes | str | int The value to search for.
requiredfile str The backing file to search the value in. Defaults to \"all\", which means all memory.
'all' start int | None The start address of the search. Defaults to None.
None end int | None The end address of the search. Defaults to None.
None Returns:
Type Descriptionlist[int] list[int]: A list of offsets where the value was found.
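Conceptually, this kind of search scans the saved content of each snapshotted map and reports every occurrence as an absolute address (map start plus in-map offset). A hedged standalone sketch of that idea, with plain tuples as hypothetical stand-ins for snapshotted maps (not libdebug's actual implementation, which lives in the parent class):

```python
def find_in_maps(maps, value: bytes) -> list[int]:
    # maps: list of (start, content) pairs -- hypothetical stand-ins for
    # snapshotted pages. Returns absolute addresses of every occurrence.
    hits = []
    for start, content in maps:
        offset = content.find(value)
        while offset != -1:
            hits.append(start + offset)
            offset = content.find(value, offset + 1)
    return hits

maps = [(0x1000, b"AAAA/bin/sh\x00AAAA"), (0x2000, b"/bin/sh\x00")]
print([hex(a) for a in find_in_maps(maps, b"/bin/sh")])  # ['0x1004', '0x2000']
```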
Source code inlibdebug/snapshots/memory/snapshot_memory_view.py def find(\n self: SnapshotMemoryView,\n value: bytes | str | int,\n file: str = \"all\",\n start: int | None = None,\n end: int | None = None,\n) -> list[int]:\n \"\"\"Searches for the given value in the saved memory maps of the snapshot.\n\n The start and end addresses can be used to limit the search to a specific range.\n If not specified, the search will be performed on the whole memory map.\n\n Args:\n value (bytes | str | int): The value to search for.\n file (str): The backing file to search the value in. Defaults to \"all\", which means all memory.\n start (int | None): The start address of the search. Defaults to None.\n end (int | None): The end address of the search. Defaults to None.\n\n Returns:\n list[int]: A list of offset where the value was found.\n \"\"\"\n if self._snap_ref.level == \"base\":\n raise ValueError(\"Memory snapshot is not available at base level.\")\n\n return super().find(value, file, start, end)\n"},{"location":"from_pydoc/generated/snapshots/memory/snapshot_memory_view/#libdebug.snapshots.memory.snapshot_memory_view.SnapshotMemoryView.read","title":"read(address, size)","text":"Reads memory from the target snapshot.
Parameters:
Name Type Description Defaultaddress int The address to read from.
requiredsize int The number of bytes to read.
requiredReturns:
Name Type Descriptionbytes bytes The read bytes.
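A read may span several adjacent snapshotted maps, in which case the slices have to be stitched together: a partial slice from the first map, the full content of any middle maps, and a partial slice from the last. A simplified standalone sketch of that stitching, with (start, content) tuples as hypothetical stand-ins (it assumes the spanned maps are contiguous and skips the level/permission checks the real method performs):

```python
def read_across_maps(maps, address: int, size: int) -> bytes:
    # maps: sorted (start, content) pairs -- hypothetical stand-ins.
    # Keep only maps overlapping [address, address + size).
    target = [(s, c) for s, c in maps
              if s < address + size and address < s + len(c)]
    if not target:
        raise ValueError("No mapped memory at the specified address.")
    data = b""
    for start, content in target:
        lo = max(address - start, 0)              # partial slice in first map
        hi = min(address + size - start, len(content))
        data += content[lo:hi]
    return data

maps = [(0x1000, b"abcd"), (0x1004, b"efgh")]
print(read_across_maps(maps, 0x1002, 4))  # b'cdef', stitched from both maps
```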
Source code in libdebug/snapshots/memory/snapshot_memory_view.py def read(self: SnapshotMemoryView, address: int, size: int) -> bytes:\n \"\"\"Reads memory from the target snapshot.\n\n Args:\n address (int): The address to read from.\n size (int): The number of bytes to read.\n\n Returns:\n bytes: The read bytes.\n \"\"\"\n snapshot_maps = self._snap_ref.maps\n\n start_index = 0\n start_map = None\n has_failed = True\n\n # Find the start map index\n while start_index < len(snapshot_maps):\n start_map = snapshot_maps[start_index]\n\n if address < start_map.start:\n break\n elif start_map.start <= address < start_map.end:\n has_failed = False\n break\n start_index += 1\n\n if has_failed:\n raise ValueError(\"No mapped memory at the specified start address.\")\n\n end_index = start_index\n end_address = address + size - 1\n end_map = None\n has_failed = True\n\n # Find the end map index\n while end_index < len(snapshot_maps):\n end_map = snapshot_maps[end_index]\n\n if end_address < end_map.start:\n break\n elif end_map.start <= end_address < end_map.end:\n has_failed = False\n break\n end_index += 1\n\n if has_failed:\n raise ValueError(\"No mapped memory at the specified address.\")\n\n target_maps = self._snap_ref.maps[start_index:end_index + 1]\n\n if not target_maps:\n raise ValueError(\"No mapped memory at the specified address.\")\n\n for target_map in target_maps:\n # The memory of the target map cannot be retrieved\n if target_map.content is None:\n error = \"One or more of the memory maps involved was not snapshotted\"\n\n if self._snap_ref.level == \"base\":\n error += \", snapshot level is base, no memory contents were saved.\"\n elif self._snap_ref.level == \"writable\" and \"w\" not in target_map.permissions:\n error += \", snapshot level is writable but the target page corresponds to non-writable memory.\"\n else:\n error += \" (it could be a priviledged memory map e.g. [vvar]).\"\n\n raise ValueError(error)\n\n start_offset = address - target_maps[0].start\n\n if len(target_maps) == 1:\n end_offset = start_offset + size\n return target_maps[0].content[start_offset:end_offset]\n else:\n data = target_maps[0].content[start_offset:]\n\n for target_map in target_maps[1:-1]:\n data += target_map.content\n\n end_offset = size - len(data)\n data += target_maps[-1].content[:end_offset]\n\n return data\n"},{"location":"from_pydoc/generated/snapshots/memory/snapshot_memory_view/#libdebug.snapshots.memory.snapshot_memory_view.SnapshotMemoryView.resolve_address","title":"resolve_address(address, backing_file, skip_absolute_address_validation=False)","text":"Normalizes and validates the specified address.
Parameters:
Name Type Description Defaultaddress int The address to normalize and validate.
requiredbacking_file str The backing file to resolve the address in.
requiredskip_absolute_address_validation bool Whether to skip bounds checking for absolute addresses. Defaults to False.
False Returns:
Name Type Descriptionint int The normalized and validated address.
Raises:
Type DescriptionValueError If the substring backing_file is present in multiple backing files.
Source code in libdebug/snapshots/memory/snapshot_memory_view.py def resolve_address(\n self: SnapshotMemoryView,\n address: int,\n backing_file: str,\n skip_absolute_address_validation: bool = False,\n) -> int:\n \"\"\"Normalizes and validates the specified address.\n\n Args:\n address (int): The address to normalize and validate.\n backing_file (str): The backing file to resolve the address in.\n skip_absolute_address_validation (bool, optional): Whether to skip bounds checking for absolute addresses. Defaults to False.\n\n Returns:\n int: The normalized and validated address.\n\n Raises:\n ValueError: If the substring `backing_file` is present in multiple backing files.\n \"\"\"\n if skip_absolute_address_validation and backing_file == \"absolute\":\n return address\n\n maps = self._snap_ref.maps\n\n if backing_file in [\"hybrid\", \"absolute\"]:\n if maps.filter(address):\n # If the address is absolute, we can return it directly\n return address\n elif backing_file == \"absolute\":\n # The address is explicitly an absolute address but we did not find it\n raise ValueError(\n \"The specified absolute address does not exist. Check the address or specify a backing file.\",\n )\n else:\n # If the address was not found and the backing file is not \"absolute\",\n # we have to assume it is in the main map\n backing_file = self._snap_ref._process_full_path\n liblog.warning(\n f\"No backing file specified and no corresponding absolute address found for {hex(address)}. Assuming {backing_file}.\",\n )\n\n filtered_maps = maps.filter(backing_file)\n\n return normalize_and_validate_address(address, filtered_maps)\n
Parameters:
Name Type Description Defaultsymbol str The symbol to resolve.
requiredfile str The backing file to resolve the address in.
requiredReturns:
Name Type DescriptionSymbol Symbol The resolved address.
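The method accepts symbols in the form "name+offset", where the offset is parsed as hexadecimal before being added to the symbol's resolved base. That parsing step can be sketched in isolation (a hedged illustration of just the string handling, not the full symbol resolution):

```python
def parse_symbol(symbol: str) -> tuple[str, int]:
    # "main+1f" -> ("main", 0x1f): the offset after '+' is hexadecimal,
    # mirroring the resolution rule described above.
    offset = 0
    if "+" in symbol:
        symbol, raw = symbol.split("+")
        offset = int(raw, 16)
    return symbol, offset

print(parse_symbol("main"))     # ('main', 0)
print(parse_symbol("main+1f"))  # ('main', 31)
```

The resolved address is then the containing map's base, plus the symbol's start, plus this offset.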
Source code inlibdebug/snapshots/memory/snapshot_memory_view.py def resolve_symbol(self: SnapshotMemoryView, symbol: str, file: str) -> Symbol:\n \"\"\"Resolve a symbol from the symbol list.\n\n Args:\n symbol (str): The symbol to resolve.\n file (str): The backing file to resolve the address in.\n\n Returns:\n Symbol: The resolved address.\n \"\"\"\n offset = 0\n\n if \"+\" in symbol:\n symbol, offset = symbol.split(\"+\")\n offset = int(offset, 16)\n\n results = self._symbol_ref.filter(symbol)\n\n # Get the first result that matches the backing file\n results = [result for result in results if file in result.backing_file]\n\n if len(results) == 0:\n raise ValueError(f\"Symbol {symbol} not found in snaphot memory.\")\n\n page_base = self._snap_ref.maps.filter(results[0].backing_file)[0].start\n\n return page_base + results[0].start + offset\n"},{"location":"from_pydoc/generated/snapshots/memory/snapshot_memory_view/#libdebug.snapshots.memory.snapshot_memory_view.SnapshotMemoryView.write","title":"write(address, data)","text":"Writes memory to the target snapshot.
Parameters:
Name Type Description Defaultaddress int The address to write to.
requireddata bytes The data to write.
required Source code inlibdebug/snapshots/memory/snapshot_memory_view.py def write(self: SnapshotMemoryView, address: int, data: bytes) -> None:\n \"\"\"Writes memory to the target snapshot.\n\n Args:\n address (int): The address to write to.\n data (bytes): The data to write.\n \"\"\"\n raise NotImplementedError(\"Snapshot memory is read-only, duh.\")\n"},{"location":"from_pydoc/generated/snapshots/process/process_shapshot_diff/","title":"libdebug.snapshots.process.process_shapshot_diff","text":""},{"location":"from_pydoc/generated/snapshots/process/process_shapshot_diff/#libdebug.snapshots.process.process_shapshot_diff.ProcessSnapshotDiff","title":"ProcessSnapshotDiff","text":" Bases: Diff
This object represents a diff between process snapshots.
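Part of such a diff is partitioning threads by TID into dead (only in the first snapshot), born (only in the second), and common (present in both, eligible for a per-thread state diff). A minimal standalone sketch of that partitioning, with (tid, state) tuples as hypothetical stand-ins for thread snapshots:

```python
def diff_threads(threads1, threads2):
    # threads1/threads2: (tid, state) pairs -- hypothetical stand-ins.
    by_tid1 = {tid: st for tid, st in threads1}
    by_tid2 = {tid: st for tid, st in threads2}
    dead = [tid for tid in by_tid1 if tid not in by_tid2]    # gone in snapshot 2
    born = [tid for tid in by_tid2 if tid not in by_tid1]    # new in snapshot 2
    common = [tid for tid in by_tid1 if tid in by_tid2]      # diffable pairs
    return dead, born, common

dead, born, common = diff_threads([(1, "run"), (2, "run")], [(2, "stop"), (3, "run")])
print(dead, born, common)  # [1] [3] [2]
```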
Source code inlibdebug/snapshots/process/process_shapshot_diff.py class ProcessSnapshotDiff(Diff):\n \"\"\"This object represents a diff between process snapshots.\"\"\"\n\n def __init__(self: ProcessSnapshotDiff, snapshot1: ProcessSnapshot, snapshot2: ProcessSnapshot) -> None:\n \"\"\"Returns a diff between given snapshots of the same process.\n\n Args:\n snapshot1 (ProcessSnapshot): A process snapshot.\n snapshot2 (ProcessSnapshot): A process snapshot.\n \"\"\"\n super().__init__(snapshot1, snapshot2)\n\n # Register diffs\n self._save_reg_diffs()\n\n # Memory map diffs\n self._resolve_maps_diff()\n\n # Thread diffs\n self._generate_thread_diffs()\n\n if (self.snapshot1._process_name == self.snapshot2._process_name) and (\n self.snapshot1.aslr_enabled or self.snapshot2.aslr_enabled\n ):\n liblog.warning(\"ASLR is enabled in either or both snapshots. Diff may be messy.\")\n\n def _generate_thread_diffs(self: ProcessSnapshotDiff) -> None:\n \"\"\"Generates diffs between threads in the two compared snapshots.\n\n Thread differences:\n - Born threads and dead threads are stored directly in separate lists (no state diff exists between the two).\n - Threads that exist in both snapshots are stored as diffs and can be accessed through the threads_diff property.\n \"\"\"\n self.born_threads = []\n self.dead_threads = []\n self.threads_diff = []\n\n snapshot1_by_tid = {thread.tid: thread for thread in self.snapshot1.threads}\n snapshot2_by_tid = {thread.tid: thread for thread in self.snapshot2.threads}\n\n for tid, t1 in snapshot1_by_tid.items():\n t2 = snapshot2_by_tid.get(tid)\n if t2 is None:\n self.dead_threads.append(t1)\n else:\n diff = LightweightThreadSnapshotDiff(t1, t2, self)\n self.threads_diff.append(diff)\n\n for tid, t2 in snapshot2_by_tid.items():\n if tid not in snapshot1_by_tid:\n 
self.born_threads.append(t2)\n"},{"location":"from_pydoc/generated/snapshots/process/process_shapshot_diff/#libdebug.snapshots.process.process_shapshot_diff.ProcessSnapshotDiff.__init__","title":"__init__(snapshot1, snapshot2)","text":"Returns a diff between given snapshots of the same process.
Parameters:
Name Type Description Default
snapshot1 ProcessSnapshot A process snapshot.
required
snapshot2 ProcessSnapshot A process snapshot.
required Source code inlibdebug/snapshots/process/process_shapshot_diff.py def __init__(self: ProcessSnapshotDiff, snapshot1: ProcessSnapshot, snapshot2: ProcessSnapshot) -> None:\n \"\"\"Returns a diff between given snapshots of the same process.\n\n Args:\n snapshot1 (ProcessSnapshot): A process snapshot.\n snapshot2 (ProcessSnapshot): A process snapshot.\n \"\"\"\n super().__init__(snapshot1, snapshot2)\n\n # Register diffs\n self._save_reg_diffs()\n\n # Memory map diffs\n self._resolve_maps_diff()\n\n # Thread diffs\n self._generate_thread_diffs()\n\n if (self.snapshot1._process_name == self.snapshot2._process_name) and (\n self.snapshot1.aslr_enabled or self.snapshot2.aslr_enabled\n ):\n liblog.warning(\"ASLR is enabled in either or both snapshots. Diff may be messy.\")\n"},{"location":"from_pydoc/generated/snapshots/process/process_shapshot_diff/#libdebug.snapshots.process.process_shapshot_diff.ProcessSnapshotDiff._generate_thread_diffs","title":"_generate_thread_diffs()","text":"Generates diffs between threads in the two compared snapshots.
Thread differenceslibdebug/snapshots/process/process_shapshot_diff.py def _generate_thread_diffs(self: ProcessSnapshotDiff) -> None:\n \"\"\"Generates diffs between threads in the two compared snapshots.\n\n Thread differences:\n - Born threads and dead threads are stored directly in separate lists (no state diff exists between the two).\n - Threads that exist in both snapshots are stored as diffs and can be accessed through the threads_diff property.\n \"\"\"\n self.born_threads = []\n self.dead_threads = []\n self.threads_diff = []\n\n snapshot1_by_tid = {thread.tid: thread for thread in self.snapshot1.threads}\n snapshot2_by_tid = {thread.tid: thread for thread in self.snapshot2.threads}\n\n for tid, t1 in snapshot1_by_tid.items():\n t2 = snapshot2_by_tid.get(tid)\n if t2 is None:\n self.dead_threads.append(t1)\n else:\n diff = LightweightThreadSnapshotDiff(t1, t2, self)\n self.threads_diff.append(diff)\n\n for tid, t2 in snapshot2_by_tid.items():\n if tid not in snapshot1_by_tid:\n self.born_threads.append(t2)\n"},{"location":"from_pydoc/generated/snapshots/process/process_snapshot/","title":"libdebug.snapshots.process.process_snapshot","text":""},{"location":"from_pydoc/generated/snapshots/process/process_snapshot/#libdebug.snapshots.process.process_snapshot.ProcessSnapshot","title":"ProcessSnapshot","text":" Bases: Snapshot
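The `_generate_thread_diffs` source above partitions threads by TID into dead (only in snapshot 1), common (diffed), and born (only in snapshot 2). A minimal sketch of that partition over plain TID-keyed dicts (the `partition_threads` helper is hypothetical):

```python
def partition_threads(before: dict[int, object], after: dict[int, object]) -> tuple[list[int], list[int], list[int]]:
    """Return (dead, common, born) TIDs between two snapshots."""
    dead = [tid for tid in before if tid not in after]    # gone in snapshot 2
    common = [tid for tid in before if tid in after]      # diffed pairwise
    born = [tid for tid in after if tid not in before]    # new in snapshot 2
    return dead, common, born
```

In the real class, the common TIDs become `LightweightThreadSnapshotDiff` objects in `threads_diff`, while born and dead threads are stored directly.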
This object represents a snapshot of the target process. It holds information about the process's state.
Snapshot levels:
- base: Registers
- writable: Registers, writable memory contents
- full: Registers, stack, all readable memory contents
Source code inlibdebug/snapshots/process/process_snapshot.py class ProcessSnapshot(Snapshot):\n \"\"\"This object represents a snapshot of the target process. It holds information about the process's state.\n\n Snapshot levels:\n - base: Registers\n - writable: Registers, writable memory contents\n - full: Registers, stack, all readable memory contents\n \"\"\"\n\n def __init__(\n self: ProcessSnapshot, debugger: InternalDebugger, level: str = \"base\", name: str | None = None\n ) -> None:\n \"\"\"Creates a new snapshot object for the given process.\n\n Args:\n debugger (Debugger): The thread to take a snapshot of.\n level (str, optional): The level of the snapshot. Defaults to \"base\".\n name (str, optional): A name associated to the snapshot. Defaults to None.\n \"\"\"\n # Set id of the snapshot and increment the counter\n self.snapshot_id = debugger._snapshot_count\n debugger.notify_snaphot_taken()\n\n # Basic snapshot info\n self.process_id = debugger.process_id\n self.pid = self.process_id\n self.name = name\n self.level = level\n self.arch = debugger.arch\n self.aslr_enabled = debugger.aslr_enabled\n self._process_full_path = debugger._process_full_path\n self._process_name = debugger._process_name\n self._serialization_helper = debugger.serialization_helper\n\n # Memory maps\n match level:\n case \"base\":\n self.maps = MemoryMapSnapshotList([], self._process_name, self._process_full_path)\n\n for curr_map in debugger.maps:\n saved_map = MemoryMapSnapshot(\n start=curr_map.start,\n end=curr_map.end,\n permissions=curr_map.permissions,\n size=curr_map.size,\n offset=curr_map.offset,\n backing_file=curr_map.backing_file,\n content=None,\n )\n self.maps.append(saved_map)\n\n self._memory = None\n case \"writable\":\n if not debugger.fast_memory:\n liblog.warning(\n \"Memory snapshot requested but fast memory is not enabled. 
This will take a long time.\",\n )\n\n # Save all memory pages\n self._save_memory_maps(debugger, writable_only=True)\n\n self._memory = SnapshotMemoryView(self, debugger.symbols)\n case \"full\":\n if not debugger.fast_memory:\n liblog.warning(\n \"Memory snapshot requested but fast memory is not enabled. This will take a long time.\",\n )\n\n # Save all memory pages\n self._save_memory_maps(debugger, writable_only=False)\n\n self._memory = SnapshotMemoryView(self, debugger.symbols)\n case _:\n raise ValueError(f\"Invalid snapshot level {level}\")\n\n # Snapshot the threads\n self._save_threads(debugger)\n\n # Log the creation of the snapshot\n named_addition = \" named \" + self.name if name is not None else \"\"\n liblog.debugger(\n f\"Created snapshot {self.snapshot_id} of level {self.level} for process {self.pid}{named_addition}\"\n )\n\n def _save_threads(self: ProcessSnapshot, debugger: InternalDebugger) -> None:\n self.threads = []\n\n for thread in debugger.threads:\n # Create a lightweight snapshot for the thread\n lw_snapshot = LightweightThreadSnapshot(thread, self)\n\n self.threads.append(lw_snapshot)\n\n @property\n def regs(self: ProcessSnapshot) -> SnapshotRegisters:\n \"\"\"Returns the registers of the process snapshot.\"\"\"\n return self.threads[0].regs\n\n def diff(self: ProcessSnapshot, other: ProcessSnapshot) -> Diff:\n \"\"\"Returns the diff between two process snapshots.\"\"\"\n if not isinstance(other, ProcessSnapshot):\n raise TypeError(\"Both arguments must be ProcessSnapshot objects.\")\n\n return ProcessSnapshotDiff(self, other)\n"},{"location":"from_pydoc/generated/snapshots/process/process_snapshot/#libdebug.snapshots.process.process_snapshot.ProcessSnapshot.regs","title":"regs property","text":"Returns the registers of the process snapshot.
"},{"location":"from_pydoc/generated/snapshots/process/process_snapshot/#libdebug.snapshots.process.process_snapshot.ProcessSnapshot.__init__","title":"__init__(debugger, level='base', name=None)","text":"Creates a new snapshot object for the given process.
Parameters:
Name Type Description Default
debugger Debugger The debugger of the process to take a snapshot of.
required
level str The level of the snapshot. Defaults to \"base\".
'base'
name str A name associated with the snapshot. Defaults to None.
None Source code in libdebug/snapshots/process/process_snapshot.py def __init__(\n self: ProcessSnapshot, debugger: InternalDebugger, level: str = \"base\", name: str | None = None\n) -> None:\n \"\"\"Creates a new snapshot object for the given process.\n\n Args:\n debugger (Debugger): The thread to take a snapshot of.\n level (str, optional): The level of the snapshot. Defaults to \"base\".\n name (str, optional): A name associated to the snapshot. Defaults to None.\n \"\"\"\n # Set id of the snapshot and increment the counter\n self.snapshot_id = debugger._snapshot_count\n debugger.notify_snaphot_taken()\n\n # Basic snapshot info\n self.process_id = debugger.process_id\n self.pid = self.process_id\n self.name = name\n self.level = level\n self.arch = debugger.arch\n self.aslr_enabled = debugger.aslr_enabled\n self._process_full_path = debugger._process_full_path\n self._process_name = debugger._process_name\n self._serialization_helper = debugger.serialization_helper\n\n # Memory maps\n match level:\n case \"base\":\n self.maps = MemoryMapSnapshotList([], self._process_name, self._process_full_path)\n\n for curr_map in debugger.maps:\n saved_map = MemoryMapSnapshot(\n start=curr_map.start,\n end=curr_map.end,\n permissions=curr_map.permissions,\n size=curr_map.size,\n offset=curr_map.offset,\n backing_file=curr_map.backing_file,\n content=None,\n )\n self.maps.append(saved_map)\n\n self._memory = None\n case \"writable\":\n if not debugger.fast_memory:\n liblog.warning(\n \"Memory snapshot requested but fast memory is not enabled. This will take a long time.\",\n )\n\n # Save all memory pages\n self._save_memory_maps(debugger, writable_only=True)\n\n self._memory = SnapshotMemoryView(self, debugger.symbols)\n case \"full\":\n if not debugger.fast_memory:\n liblog.warning(\n \"Memory snapshot requested but fast memory is not enabled. 
This will take a long time.\",\n )\n\n # Save all memory pages\n self._save_memory_maps(debugger, writable_only=False)\n\n self._memory = SnapshotMemoryView(self, debugger.symbols)\n case _:\n raise ValueError(f\"Invalid snapshot level {level}\")\n\n # Snapshot the threads\n self._save_threads(debugger)\n\n # Log the creation of the snapshot\n named_addition = \" named \" + self.name if name is not None else \"\"\n liblog.debugger(\n f\"Created snapshot {self.snapshot_id} of level {self.level} for process {self.pid}{named_addition}\"\n )\n"},{"location":"from_pydoc/generated/snapshots/process/process_snapshot/#libdebug.snapshots.process.process_snapshot.ProcessSnapshot.diff","title":"diff(other)","text":"Returns the diff between two process snapshots.
Source code inlibdebug/snapshots/process/process_snapshot.py def diff(self: ProcessSnapshot, other: ProcessSnapshot) -> Diff:\n \"\"\"Returns the diff between two process snapshots.\"\"\"\n if not isinstance(other, ProcessSnapshot):\n raise TypeError(\"Both arguments must be ProcessSnapshot objects.\")\n\n return ProcessSnapshotDiff(self, other)\n"},{"location":"from_pydoc/generated/snapshots/registers/register_diff/","title":"libdebug.snapshots.registers.register_diff","text":""},{"location":"from_pydoc/generated/snapshots/registers/register_diff/#libdebug.snapshots.registers.register_diff.RegisterDiff","title":"RegisterDiff dataclass","text":"This object represents a diff between registers in a thread snapshot.
Source code inlibdebug/snapshots/registers/register_diff.py @dataclass\nclass RegisterDiff:\n \"\"\"This object represents a diff between registers in a thread snapshot.\"\"\"\n\n old_value: int | float\n \"\"\"The old value of the register.\"\"\"\n\n new_value: int | float\n \"\"\"The new value of the register.\"\"\"\n\n has_changed: bool\n \"\"\"Whether the register has changed.\"\"\"\n\n def __repr__(self: RegisterDiff) -> str:\n \"\"\"Return a string representation of the RegisterDiff object.\"\"\"\n old_value_str = hex(self.old_value) if isinstance(self.old_value, int) else str(self.old_value)\n new_value_str = hex(self.new_value) if isinstance(self.new_value, int) else str(self.new_value)\n return f\"RegisterDiff(old_value={old_value_str}, new_value={new_value_str}, has_changed={self.has_changed})\"\n"},{"location":"from_pydoc/generated/snapshots/registers/register_diff/#libdebug.snapshots.registers.register_diff.RegisterDiff.has_changed","title":"has_changed instance-attribute","text":"Whether the register has changed.
"},{"location":"from_pydoc/generated/snapshots/registers/register_diff/#libdebug.snapshots.registers.register_diff.RegisterDiff.new_value","title":"new_value instance-attribute","text":"The new value of the register.
"},{"location":"from_pydoc/generated/snapshots/registers/register_diff/#libdebug.snapshots.registers.register_diff.RegisterDiff.old_value","title":"old_value instance-attribute","text":"The old value of the register.
"},{"location":"from_pydoc/generated/snapshots/registers/register_diff/#libdebug.snapshots.registers.register_diff.RegisterDiff.__repr__","title":"__repr__()","text":"Return a string representation of the RegisterDiff object.
Source code inlibdebug/snapshots/registers/register_diff.py def __repr__(self: RegisterDiff) -> str:\n \"\"\"Return a string representation of the RegisterDiff object.\"\"\"\n old_value_str = hex(self.old_value) if isinstance(self.old_value, int) else str(self.old_value)\n new_value_str = hex(self.new_value) if isinstance(self.new_value, int) else str(self.new_value)\n return f\"RegisterDiff(old_value={old_value_str}, new_value={new_value_str}, has_changed={self.has_changed})\"\n"},{"location":"from_pydoc/generated/snapshots/registers/register_diff_accessor/","title":"libdebug.snapshots.registers.register_diff_accessor","text":""},{"location":"from_pydoc/generated/snapshots/registers/register_diff_accessor/#libdebug.snapshots.registers.register_diff_accessor.RegisterDiffAccessor","title":"RegisterDiffAccessor","text":"Class used to access RegisterDiff objects for a thread snapshot.
Source code inlibdebug/snapshots/registers/register_diff_accessor.py class RegisterDiffAccessor:\n \"\"\"Class used to access RegisterDiff objects for a thread snapshot.\"\"\"\n\n def __init__(\n self: RegisterDiffAccessor,\n generic_regs: list[str],\n special_regs: list[str],\n vec_fp_regs: list[str],\n ) -> None:\n \"\"\"Initializes the RegisterDiffAccessor object.\n\n Args:\n generic_regs (list[str]): The list of generic registers to include in the repr.\n special_regs (list[str]): The list of special registers to include in the repr.\n vec_fp_regs (list[str]): The list of vector and floating point registers to include in the repr.\n \"\"\"\n self._generic_regs = generic_regs\n self._special_regs = special_regs\n self._vec_fp_regs = vec_fp_regs\n\n def __repr__(self: RegisterDiffAccessor) -> str:\n \"\"\"Return a string representation of the RegisterDiffAccessor object.\"\"\"\n str_repr = \"RegisterDiffAccessor(\\n\\n\"\n\n # Header with column alignment\n str_repr += \"{:<15} {:<20} {:<20}\\n\".format(\"Register\", \"Old Value\", \"New Value\")\n str_repr += \"-\" * 60 + \"\\n\"\n\n # Log all integer changes\n for attr_name in self._generic_regs:\n attr = self.__getattribute__(attr_name)\n\n if attr.has_changed:\n # Format integer values in hexadecimal without zero-padding\n old_value = f\"{attr.old_value:<18}\" if isinstance(attr.old_value, float) else f\"{attr.old_value:<#16x}\"\n new_value = f\"{attr.new_value:<18}\" if isinstance(attr.new_value, float) else f\"{attr.new_value:<#16x}\"\n # Align output for consistent spacing between old and new values\n str_repr += f\"{attr_name:<15} {old_value} {new_value}\\n\"\n\n return str_repr\n"},{"location":"from_pydoc/generated/snapshots/registers/register_diff_accessor/#libdebug.snapshots.registers.register_diff_accessor.RegisterDiffAccessor.__init__","title":"__init__(generic_regs, special_regs, vec_fp_regs)","text":"Initializes the RegisterDiffAccessor object.
Parameters:
Name Type Description Default
generic_regs list[str] The list of generic registers to include in the repr.
required
special_regs list[str] The list of special registers to include in the repr.
required
vec_fp_regs list[str] The list of vector and floating point registers to include in the repr.
required Source code inlibdebug/snapshots/registers/register_diff_accessor.py def __init__(\n self: RegisterDiffAccessor,\n generic_regs: list[str],\n special_regs: list[str],\n vec_fp_regs: list[str],\n) -> None:\n \"\"\"Initializes the RegisterDiffAccessor object.\n\n Args:\n generic_regs (list[str]): The list of generic registers to include in the repr.\n special_regs (list[str]): The list of special registers to include in the repr.\n vec_fp_regs (list[str]): The list of vector and floating point registers to include in the repr.\n \"\"\"\n self._generic_regs = generic_regs\n self._special_regs = special_regs\n self._vec_fp_regs = vec_fp_regs\n"},{"location":"from_pydoc/generated/snapshots/registers/register_diff_accessor/#libdebug.snapshots.registers.register_diff_accessor.RegisterDiffAccessor.__repr__","title":"__repr__()","text":"Return a string representation of the RegisterDiffAccessor object.
Source code inlibdebug/snapshots/registers/register_diff_accessor.py def __repr__(self: RegisterDiffAccessor) -> str:\n \"\"\"Return a string representation of the RegisterDiffAccessor object.\"\"\"\n str_repr = \"RegisterDiffAccessor(\\n\\n\"\n\n # Header with column alignment\n str_repr += \"{:<15} {:<20} {:<20}\\n\".format(\"Register\", \"Old Value\", \"New Value\")\n str_repr += \"-\" * 60 + \"\\n\"\n\n # Log all integer changes\n for attr_name in self._generic_regs:\n attr = self.__getattribute__(attr_name)\n\n if attr.has_changed:\n # Format integer values in hexadecimal without zero-padding\n old_value = f\"{attr.old_value:<18}\" if isinstance(attr.old_value, float) else f\"{attr.old_value:<#16x}\"\n new_value = f\"{attr.new_value:<18}\" if isinstance(attr.new_value, float) else f\"{attr.new_value:<#16x}\"\n # Align output for consistent spacing between old and new values\n str_repr += f\"{attr_name:<15} {old_value} {new_value}\\n\"\n\n return str_repr\n"},{"location":"from_pydoc/generated/snapshots/registers/snapshot_registers/","title":"libdebug.snapshots.registers.snapshot_registers","text":""},{"location":"from_pydoc/generated/snapshots/registers/snapshot_registers/#libdebug.snapshots.registers.snapshot_registers.SnapshotRegisters","title":"SnapshotRegisters","text":" Bases: Registers
Class that holds the state of the architecture-dependent registers of a snapshot.
Source code inlibdebug/snapshots/registers/snapshot_registers.py class SnapshotRegisters(Registers):\n \"\"\"Class that holds the state of the architectural-dependent registers of a snapshot.\"\"\"\n\n def __init__(\n self: SnapshotRegisters,\n thread_id: int,\n generic_regs: list[str],\n special_regs: list[str],\n vec_fp_regs: list[str],\n ) -> None:\n \"\"\"Initializes the Registers object.\n\n Args:\n thread_id (int): The thread ID.\n generic_regs (list[str]): The list of registers to include in the repr.\n special_regs (list[str]): The list of special registers to include in the repr.\n vec_fp_regs (list[str]): The list of vector and floating point registers to include in the repr\n \"\"\"\n self._thread_id = thread_id\n self._generic_regs = generic_regs\n self._special_regs = special_regs\n self._vec_fp_regs = vec_fp_regs\n\n def filter(self: SnapshotRegisters, value: float) -> list[str]:\n \"\"\"Filters the registers by value.\n\n Args:\n value (float): The value to search for.\n\n Returns:\n list[str]: A list of names of the registers containing the value.\n \"\"\"\n attributes = self.__dict__\n\n return [attr for attr in attributes if getattr(self, attr) == value]\n"},{"location":"from_pydoc/generated/snapshots/registers/snapshot_registers/#libdebug.snapshots.registers.snapshot_registers.SnapshotRegisters.__init__","title":"__init__(thread_id, generic_regs, special_regs, vec_fp_regs)","text":"Initializes the Registers object.
Parameters:
Name Type Description Default
thread_id int The thread ID.
required
generic_regs list[str] The list of registers to include in the repr.
required
special_regs list[str] The list of special registers to include in the repr.
required
vec_fp_regs list[str] The list of vector and floating point registers to include in the repr.
required Source code inlibdebug/snapshots/registers/snapshot_registers.py def __init__(\n self: SnapshotRegisters,\n thread_id: int,\n generic_regs: list[str],\n special_regs: list[str],\n vec_fp_regs: list[str],\n) -> None:\n \"\"\"Initializes the Registers object.\n\n Args:\n thread_id (int): The thread ID.\n generic_regs (list[str]): The list of registers to include in the repr.\n special_regs (list[str]): The list of special registers to include in the repr.\n vec_fp_regs (list[str]): The list of vector and floating point registers to include in the repr\n \"\"\"\n self._thread_id = thread_id\n self._generic_regs = generic_regs\n self._special_regs = special_regs\n self._vec_fp_regs = vec_fp_regs\n"},{"location":"from_pydoc/generated/snapshots/registers/snapshot_registers/#libdebug.snapshots.registers.snapshot_registers.SnapshotRegisters.filter","title":"filter(value)","text":"Filters the registers by value.
Parameters:
Name Type Description Default
value float The value to search for.
required
Returns:
Type Description
list[str] list[str]: A list of names of the registers containing the value.
Source code inlibdebug/snapshots/registers/snapshot_registers.py def filter(self: SnapshotRegisters, value: float) -> list[str]:\n \"\"\"Filters the registers by value.\n\n Args:\n value (float): The value to search for.\n\n Returns:\n list[str]: A list of names of the registers containing the value.\n \"\"\"\n attributes = self.__dict__\n\n return [attr for attr in attributes if getattr(self, attr) == value]\n"},{"location":"from_pydoc/generated/snapshots/serialization/json_serializer/","title":"libdebug.snapshots.serialization.json_serializer","text":""},{"location":"from_pydoc/generated/snapshots/serialization/json_serializer/#libdebug.snapshots.serialization.json_serializer.JSONSerializer","title":"JSONSerializer","text":"Helper class to serialize and deserialize snapshots using JSON format.
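The `filter` source above compares every instance attribute against the query value and returns the matching names. A minimal stand-in showing the same search (`Regs` is a hypothetical container, not libdebug's class):

```python
class Regs:
    """Toy register container: each keyword becomes an attribute."""

    def __init__(self, **values: float) -> None:
        self.__dict__.update(values)

    def filter(self, value: float) -> list[str]:
        # Names of all attributes whose value equals the query,
        # in insertion order, as in SnapshotRegisters.filter above.
        return [name for name in self.__dict__ if getattr(self, name) == value]
```

Note that, as written, private bookkeeping attributes would also be scanned, so a query matching e.g. a stored list could surface internal names.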
Source code inlibdebug/snapshots/serialization/json_serializer.py class JSONSerializer:\n \"\"\"Helper class to serialize and deserialize snapshots using JSON format.\"\"\"\n\n def load(self: JSONSerializer, file_path: str) -> Snapshot:\n \"\"\"Load a snapshot from a JSON file.\n\n Args:\n file_path (str): The path to the JSON file containing the snapshot.\n\n Returns:\n Snapshot: The loaded snapshot object.\n \"\"\"\n with Path(file_path).open() as file:\n snapshot_dict = json.load(file)\n\n # Determine the type of snapshot\n is_process_snapshot = \"process_id\" in snapshot_dict\n\n # Create a new instance of the appropriate class\n if is_process_snapshot:\n loaded_snap = ProcessSnapshot.__new__(ProcessSnapshot)\n loaded_snap.process_id = snapshot_dict[\"process_id\"]\n loaded_snap.pid = loaded_snap.process_id\n else:\n loaded_snap = ThreadSnapshot.__new__(ThreadSnapshot)\n loaded_snap.thread_id = snapshot_dict[\"thread_id\"]\n loaded_snap.tid = loaded_snap.thread_id\n\n # Basic snapshot info\n loaded_snap.snapshot_id = snapshot_dict[\"snapshot_id\"]\n loaded_snap.arch = snapshot_dict[\"arch\"]\n loaded_snap.name = snapshot_dict[\"name\"]\n loaded_snap.level = snapshot_dict[\"level\"]\n loaded_snap.aslr_enabled = snapshot_dict.get(\"aslr_enabled\")\n loaded_snap._process_full_path = snapshot_dict.get(\"_process_full_path\", None)\n loaded_snap._process_name = snapshot_dict.get(\"_process_name\", None)\n\n # Create a register field for the snapshot\n if not is_process_snapshot:\n loaded_snap.regs = SnapshotRegisters(\n loaded_snap.thread_id,\n snapshot_dict[\"architectural_registers\"][\"generic\"],\n snapshot_dict[\"architectural_registers\"][\"special\"],\n snapshot_dict[\"architectural_registers\"][\"vector_fp\"],\n )\n\n # Load registers\n for reg_name, reg_value in snapshot_dict[\"regs\"].items():\n loaded_snap.regs.__setattr__(reg_name, reg_value)\n\n # Recreate memory maps\n loaded_maps = snapshot_dict[\"maps\"]\n raw_map_list = []\n\n for saved_map in 
loaded_maps:\n new_map = MemoryMapSnapshot(\n saved_map[\"start\"],\n saved_map[\"end\"],\n saved_map[\"permissions\"],\n saved_map[\"size\"],\n saved_map[\"offset\"],\n saved_map[\"backing_file\"],\n b64decode(saved_map[\"content\"]) if saved_map[\"content\"] is not None else None,\n )\n raw_map_list.append(new_map)\n\n loaded_snap.maps = MemoryMapSnapshotList(\n raw_map_list,\n loaded_snap._process_name,\n loaded_snap._process_full_path,\n )\n\n # Handle threads for ProcessSnapshot\n if is_process_snapshot:\n loaded_snap.threads = []\n for thread_dict in snapshot_dict[\"threads\"]:\n thread_snap = LightweightThreadSnapshot.__new__(LightweightThreadSnapshot)\n thread_snap.snapshot_id = thread_dict[\"snapshot_id\"]\n thread_snap.thread_id = thread_dict[\"thread_id\"]\n thread_snap.tid = thread_snap.thread_id\n thread_snap._proc_snapshot = loaded_snap\n\n thread_snap.regs = SnapshotRegisters(\n thread_snap.thread_id,\n snapshot_dict[\"architectural_registers\"][\"generic\"],\n snapshot_dict[\"architectural_registers\"][\"special\"],\n snapshot_dict[\"architectural_registers\"][\"vector_fp\"],\n )\n\n for reg_name, reg_value in thread_dict[\"regs\"].items():\n thread_snap.regs.__setattr__(reg_name, reg_value)\n\n loaded_snap.threads.append(thread_snap)\n\n # Handle symbols\n raw_loaded_symbols = snapshot_dict.get(\"symbols\", None)\n if raw_loaded_symbols is not None:\n sym_list = [\n Symbol(\n saved_symbol[\"start\"],\n saved_symbol[\"end\"],\n saved_symbol[\"name\"],\n saved_symbol[\"backing_file\"],\n )\n for saved_symbol in raw_loaded_symbols\n ]\n sym_list = SymbolList(sym_list, loaded_snap)\n loaded_snap._memory = SnapshotMemoryView(loaded_snap, sym_list)\n elif loaded_snap.level != \"base\":\n raise ValueError(\"Memory snapshot loading requested but no symbols were saved.\")\n else:\n loaded_snap._memory = None\n\n return loaded_snap\n\n def dump(self: JSONSerializer, snapshot: Snapshot, out_path: str) -> None:\n \"\"\"Dump a snapshot to a JSON file.\n\n 
Args:\n snapshot (Snapshot): The snapshot to be dumped.\n out_path (str): The path to the output JSON file.\n \"\"\"\n\n def get_register_names(regs: SnapshotRegisters) -> list[str]:\n return [reg_name for reg_name in dir(regs) if isinstance(getattr(regs, reg_name), int | float)]\n\n def save_memory_maps(maps: MemoryMapSnapshotList) -> list[dict]:\n return [\n {\n \"start\": memory_map.start,\n \"end\": memory_map.end,\n \"permissions\": memory_map.permissions,\n \"size\": memory_map.size,\n \"offset\": memory_map.offset,\n \"backing_file\": memory_map.backing_file,\n \"content\": b64encode(memory_map.content).decode(\"utf-8\")\n if memory_map.content is not None\n else None,\n }\n for memory_map in maps\n ]\n\n def save_symbols(memory: SnapshotMemoryView) -> list[dict] | None:\n if memory is None:\n return None\n return [\n {\n \"start\": symbol.start,\n \"end\": symbol.end,\n \"name\": symbol.name,\n \"backing_file\": symbol.backing_file,\n }\n for symbol in memory._symbol_ref\n ]\n\n all_reg_names = get_register_names(snapshot.regs)\n\n serializable_dict = {\n \"type\": \"process\" if hasattr(snapshot, \"threads\") else \"thread\",\n \"arch\": snapshot.arch,\n \"snapshot_id\": snapshot.snapshot_id,\n \"level\": snapshot.level,\n \"name\": snapshot.name,\n \"aslr_enabled\": snapshot.aslr_enabled,\n \"architectural_registers\": {\n \"generic\": snapshot.regs._generic_regs,\n \"special\": snapshot.regs._special_regs,\n \"vector_fp\": snapshot.regs._vec_fp_regs,\n },\n \"maps\": save_memory_maps(snapshot.maps),\n \"symbols\": save_symbols(snapshot._memory),\n }\n\n if hasattr(snapshot, \"threads\"):\n # ProcessSnapshot-specific data\n thread_snapshots = [\n {\n \"snapshot_id\": thread.snapshot_id,\n \"thread_id\": thread.thread_id,\n \"regs\": {reg_name: getattr(thread.regs, reg_name) for reg_name in all_reg_names},\n }\n for thread in snapshot.threads\n ]\n serializable_dict.update(\n {\n \"process_id\": snapshot.process_id,\n \"threads\": thread_snapshots,\n 
\"_process_full_path\": snapshot._process_full_path,\n \"_process_name\": snapshot._process_name,\n }\n )\n else:\n # ThreadSnapshot-specific data\n serializable_dict.update(\n {\n \"thread_id\": snapshot.thread_id,\n \"regs\": {reg_name: getattr(snapshot.regs, reg_name) for reg_name in all_reg_names},\n \"_process_full_path\": snapshot._process_full_path,\n \"_process_name\": snapshot._process_name,\n }\n )\n\n with Path(out_path).open(\"w\") as file:\n json.dump(serializable_dict, file)\n"},{"location":"from_pydoc/generated/snapshots/serialization/json_serializer/#libdebug.snapshots.serialization.json_serializer.JSONSerializer.dump","title":"dump(snapshot, out_path)","text":"Dump a snapshot to a JSON file.
Parameters:
Name Type Description Default
snapshot Snapshot The snapshot to be dumped.
required
out_path str The path to the output JSON file.
required Source code in libdebug/snapshots/serialization/json_serializer.py def dump(self: JSONSerializer, snapshot: Snapshot, out_path: str) -> None:\n \"\"\"Dump a snapshot to a JSON file.\n\n Args:\n snapshot (Snapshot): The snapshot to be dumped.\n out_path (str): The path to the output JSON file.\n \"\"\"\n\n def get_register_names(regs: SnapshotRegisters) -> list[str]:\n return [reg_name for reg_name in dir(regs) if isinstance(getattr(regs, reg_name), int | float)]\n\n def save_memory_maps(maps: MemoryMapSnapshotList) -> list[dict]:\n return [\n {\n \"start\": memory_map.start,\n \"end\": memory_map.end,\n \"permissions\": memory_map.permissions,\n \"size\": memory_map.size,\n \"offset\": memory_map.offset,\n \"backing_file\": memory_map.backing_file,\n \"content\": b64encode(memory_map.content).decode(\"utf-8\")\n if memory_map.content is not None\n else None,\n }\n for memory_map in maps\n ]\n\n def save_symbols(memory: SnapshotMemoryView) -> list[dict] | None:\n if memory is None:\n return None\n return [\n {\n \"start\": symbol.start,\n \"end\": symbol.end,\n \"name\": symbol.name,\n \"backing_file\": symbol.backing_file,\n }\n for symbol in memory._symbol_ref\n ]\n\n all_reg_names = get_register_names(snapshot.regs)\n\n serializable_dict = {\n \"type\": \"process\" if hasattr(snapshot, \"threads\") else \"thread\",\n \"arch\": snapshot.arch,\n \"snapshot_id\": snapshot.snapshot_id,\n \"level\": snapshot.level,\n \"name\": snapshot.name,\n \"aslr_enabled\": snapshot.aslr_enabled,\n \"architectural_registers\": {\n \"generic\": snapshot.regs._generic_regs,\n \"special\": snapshot.regs._special_regs,\n \"vector_fp\": snapshot.regs._vec_fp_regs,\n },\n \"maps\": save_memory_maps(snapshot.maps),\n \"symbols\": save_symbols(snapshot._memory),\n }\n\n if hasattr(snapshot, \"threads\"):\n # ProcessSnapshot-specific data\n thread_snapshots = [\n {\n \"snapshot_id\": thread.snapshot_id,\n \"thread_id\": thread.thread_id,\n \"regs\": {reg_name: getattr(thread.regs, reg_name) for reg_name in all_reg_names},\n }\n for thread in snapshot.threads\n ]\n serializable_dict.update(\n {\n \"process_id\": snapshot.process_id,\n \"threads\": thread_snapshots,\n \"_process_full_path\": snapshot._process_full_path,\n \"_process_name\": snapshot._process_name,\n }\n )\n else:\n # ThreadSnapshot-specific data\n serializable_dict.update(\n {\n \"thread_id\": snapshot.thread_id,\n \"regs\": {reg_name: getattr(snapshot.regs, reg_name) for reg_name in all_reg_names},\n \"_process_full_path\": snapshot._process_full_path,\n \"_process_name\": snapshot._process_name,\n }\n )\n\n with Path(out_path).open(\"w\") as file:\n json.dump(serializable_dict, file)\n"},{"location":"from_pydoc/generated/snapshots/serialization/json_serializer/#libdebug.snapshots.serialization.json_serializer.JSONSerializer.load","title":"load(file_path)","text":"Load a snapshot from a JSON file.
Parameters:
Name Type Description Defaultfile_path str The path to the JSON file containing the snapshot.
requiredReturns:
Name Type DescriptionSnapshot Snapshot The loaded snapshot object.
Source code in libdebug/snapshots/serialization/json_serializer.py def load(self: JSONSerializer, file_path: str) -> Snapshot:\n \"\"\"Load a snapshot from a JSON file.\n\n Args:\n file_path (str): The path to the JSON file containing the snapshot.\n\n Returns:\n Snapshot: The loaded snapshot object.\n \"\"\"\n with Path(file_path).open() as file:\n snapshot_dict = json.load(file)\n\n # Determine the type of snapshot\n is_process_snapshot = \"process_id\" in snapshot_dict\n\n # Create a new instance of the appropriate class\n if is_process_snapshot:\n loaded_snap = ProcessSnapshot.__new__(ProcessSnapshot)\n loaded_snap.process_id = snapshot_dict[\"process_id\"]\n loaded_snap.pid = loaded_snap.process_id\n else:\n loaded_snap = ThreadSnapshot.__new__(ThreadSnapshot)\n loaded_snap.thread_id = snapshot_dict[\"thread_id\"]\n loaded_snap.tid = loaded_snap.thread_id\n\n # Basic snapshot info\n loaded_snap.snapshot_id = snapshot_dict[\"snapshot_id\"]\n loaded_snap.arch = snapshot_dict[\"arch\"]\n loaded_snap.name = snapshot_dict[\"name\"]\n loaded_snap.level = snapshot_dict[\"level\"]\n loaded_snap.aslr_enabled = snapshot_dict.get(\"aslr_enabled\")\n loaded_snap._process_full_path = snapshot_dict.get(\"_process_full_path\", None)\n loaded_snap._process_name = snapshot_dict.get(\"_process_name\", None)\n\n # Create a register field for the snapshot\n if not is_process_snapshot:\n loaded_snap.regs = SnapshotRegisters(\n loaded_snap.thread_id,\n snapshot_dict[\"architectural_registers\"][\"generic\"],\n snapshot_dict[\"architectural_registers\"][\"special\"],\n snapshot_dict[\"architectural_registers\"][\"vector_fp\"],\n )\n\n # Load registers\n for reg_name, reg_value in snapshot_dict[\"regs\"].items():\n loaded_snap.regs.__setattr__(reg_name, reg_value)\n\n # Recreate memory maps\n loaded_maps = snapshot_dict[\"maps\"]\n raw_map_list = []\n\n for saved_map in loaded_maps:\n new_map = MemoryMapSnapshot(\n saved_map[\"start\"],\n saved_map[\"end\"],\n saved_map[\"permissions\"],\n saved_map[\"size\"],\n saved_map[\"offset\"],\n saved_map[\"backing_file\"],\n b64decode(saved_map[\"content\"]) if saved_map[\"content\"] is not None else None,\n )\n raw_map_list.append(new_map)\n\n loaded_snap.maps = MemoryMapSnapshotList(\n raw_map_list,\n loaded_snap._process_name,\n loaded_snap._process_full_path,\n )\n\n # Handle threads for ProcessSnapshot\n if is_process_snapshot:\n loaded_snap.threads = []\n for thread_dict in snapshot_dict[\"threads\"]:\n thread_snap = LightweightThreadSnapshot.__new__(LightweightThreadSnapshot)\n thread_snap.snapshot_id = thread_dict[\"snapshot_id\"]\n thread_snap.thread_id = thread_dict[\"thread_id\"]\n thread_snap.tid = thread_snap.thread_id\n thread_snap._proc_snapshot = loaded_snap\n\n thread_snap.regs = SnapshotRegisters(\n thread_snap.thread_id,\n snapshot_dict[\"architectural_registers\"][\"generic\"],\n snapshot_dict[\"architectural_registers\"][\"special\"],\n snapshot_dict[\"architectural_registers\"][\"vector_fp\"],\n )\n\n for reg_name, reg_value in thread_dict[\"regs\"].items():\n thread_snap.regs.__setattr__(reg_name, reg_value)\n\n loaded_snap.threads.append(thread_snap)\n\n # Handle symbols\n raw_loaded_symbols = snapshot_dict.get(\"symbols\", None)\n if raw_loaded_symbols is not None:\n sym_list = [\n Symbol(\n saved_symbol[\"start\"],\n saved_symbol[\"end\"],\n saved_symbol[\"name\"],\n saved_symbol[\"backing_file\"],\n )\n for saved_symbol in raw_loaded_symbols\n ]\n sym_list = SymbolList(sym_list, loaded_snap)\n loaded_snap._memory = SnapshotMemoryView(loaded_snap, sym_list)\n elif loaded_snap.level != \"base\":\n raise ValueError(\"Memory snapshot loading requested but no symbols were saved.\")\n else:\n loaded_snap._memory = None\n\n return loaded_snap\n"},{"location":"from_pydoc/generated/snapshots/serialization/serialization_helper/","title":"libdebug.snapshots.serialization.serialization_helper","text":""},{"location":"from_pydoc/generated/snapshots/serialization/serialization_helper/#libdebug.snapshots.serialization.serialization_helper.SerializationHelper","title":"SerializationHelper","text":"Helper class to serialize and deserialize snapshots.
Source code inlibdebug/snapshots/serialization/serialization_helper.py class SerializationHelper:\n \"\"\"Helper class to serialize and deserialize snapshots.\"\"\"\n\n def load(self: SerializationHelper, file_path: str) -> Snapshot:\n \"\"\"Load a snapshot from a file.\n\n Args:\n file_path (str): The path to the file containing the snapshot.\n\n Returns:\n Snapshot: The loaded snapshot object.\n \"\"\"\n if not file_path.endswith(\".json\"):\n liblog.warning(\"The target file doesn't have a JSON extension. The output will be assumed JSON.\")\n\n # Future code can select the serializer\n # Currently, only JSON is supported\n serializer_type = SupportedSerializers.JSON\n\n serializer = serializer_type.serializer_class()\n\n return serializer.load(file_path)\n\n def save(self: SerializationHelper, snapshot: Snapshot, out_path: str) -> None:\n \"\"\"Dump a snapshot to a file.\n\n Args:\n snapshot (Snapshot): The snapshot to be dumped.\n out_path (str): The path to the output file.\n \"\"\"\n if not out_path.endswith(\".json\"):\n liblog.warning(\"The target file doesn't have a JSON extension. The output will be assumed JSON.\")\n\n # Future code can select the serializer\n # Currently, only JSON is supported\n serializer_type = SupportedSerializers.JSON\n\n serializer = serializer_type.serializer_class()\n\n serializer.dump(snapshot, out_path)\n"},{"location":"from_pydoc/generated/snapshots/serialization/serialization_helper/#libdebug.snapshots.serialization.serialization_helper.SerializationHelper.load","title":"load(file_path)","text":"Load a snapshot from a file.
Parameters:
Name Type Description Defaultfile_path str The path to the file containing the snapshot.
requiredReturns:
Name Type DescriptionSnapshot Snapshot The loaded snapshot object.
Source code inlibdebug/snapshots/serialization/serialization_helper.py def load(self: SerializationHelper, file_path: str) -> Snapshot:\n \"\"\"Load a snapshot from a file.\n\n Args:\n file_path (str): The path to the file containing the snapshot.\n\n Returns:\n Snapshot: The loaded snapshot object.\n \"\"\"\n if not file_path.endswith(\".json\"):\n liblog.warning(\"The target file doesn't have a JSON extension. The output will be assumed JSON.\")\n\n # Future code can select the serializer\n # Currently, only JSON is supported\n serializer_type = SupportedSerializers.JSON\n\n serializer = serializer_type.serializer_class()\n\n return serializer.load(file_path)\n"},{"location":"from_pydoc/generated/snapshots/serialization/serialization_helper/#libdebug.snapshots.serialization.serialization_helper.SerializationHelper.save","title":"save(snapshot, out_path)","text":"Dump a snapshot to a file.
Parameters:
Name Type Description Defaultsnapshot Snapshot The snapshot to be dumped.
requiredout_path str The path to the output file.
required Source code inlibdebug/snapshots/serialization/serialization_helper.py def save(self: SerializationHelper, snapshot: Snapshot, out_path: str) -> None:\n \"\"\"Dump a snapshot to a file.\n\n Args:\n snapshot (Snapshot): The snapshot to be dumped.\n out_path (str): The path to the output file.\n \"\"\"\n if not out_path.endswith(\".json\"):\n liblog.warning(\"The target file doesn't have a JSON extension. The output will be assumed JSON.\")\n\n # Future code can select the serializer\n # Currently, only JSON is supported\n serializer_type = SupportedSerializers.JSON\n\n serializer = serializer_type.serializer_class()\n\n serializer.dump(snapshot, out_path)\n"},{"location":"from_pydoc/generated/snapshots/serialization/serializer/","title":"libdebug.snapshots.serialization.serializer","text":""},{"location":"from_pydoc/generated/snapshots/serialization/serializer/#libdebug.snapshots.serialization.serializer.AbstractSerializer","title":"AbstractSerializer","text":" Bases: ABC
Helper class to serialize and deserialize snapshots.
Source code inlibdebug/snapshots/serialization/serializer.py class AbstractSerializer(ABC):\n \"\"\"Helper class to serialize and deserialize snapshots.\"\"\"\n\n @abstractmethod\n def load(self: AbstractSerializer, file_path: str) -> Snapshot:\n \"\"\"Load a snapshot from a file.\n\n Args:\n file_path (str): The path to the file containing the snapshot.\n\n Returns:\n Snapshot: The loaded snapshot object.\n \"\"\"\n\n @abstractmethod\n def dump(self: AbstractSerializer, snapshot: Snapshot, out_path: str) -> None:\n \"\"\"Dump a snapshot to a file.\n\n Args:\n snapshot (Snapshot): The snapshot to be dumped.\n out_path (str): The path to the output file.\n \"\"\"\n"},{"location":"from_pydoc/generated/snapshots/serialization/serializer/#libdebug.snapshots.serialization.serializer.AbstractSerializer.dump","title":"dump(snapshot, out_path) abstractmethod","text":"Dump a snapshot to a file.
Parameters:
Name Type Description Defaultsnapshot Snapshot The snapshot to be dumped.
requiredout_path str The path to the output file.
required Source code inlibdebug/snapshots/serialization/serializer.py @abstractmethod\ndef dump(self: AbstractSerializer, snapshot: Snapshot, out_path: str) -> None:\n \"\"\"Dump a snapshot to a file.\n\n Args:\n snapshot (Snapshot): The snapshot to be dumped.\n out_path (str): The path to the output file.\n \"\"\"\n"},{"location":"from_pydoc/generated/snapshots/serialization/serializer/#libdebug.snapshots.serialization.serializer.AbstractSerializer.load","title":"load(file_path) abstractmethod","text":"Load a snapshot from a file.
Parameters:
Name Type Description Defaultfile_path str The path to the file containing the snapshot.
requiredReturns:
Name Type DescriptionSnapshot Snapshot The loaded snapshot object.
Source code inlibdebug/snapshots/serialization/serializer.py @abstractmethod\ndef load(self: AbstractSerializer, file_path: str) -> Snapshot:\n \"\"\"Load a snapshot from a file.\n\n Args:\n file_path (str): The path to the file containing the snapshot.\n\n Returns:\n Snapshot: The loaded snapshot object.\n \"\"\"\n"},{"location":"from_pydoc/generated/snapshots/serialization/supported_serializers/","title":"libdebug.snapshots.serialization.supported_serializers","text":""},{"location":"from_pydoc/generated/snapshots/serialization/supported_serializers/#libdebug.snapshots.serialization.supported_serializers.SupportedSerializers","title":"SupportedSerializers","text":" Bases: Enum
Enumeration of supported serializers for snapshots.
Source code inlibdebug/snapshots/serialization/supported_serializers.py class SupportedSerializers(Enum):\n \"\"\"Enumeration of supported serializers for snapshots.\"\"\"\n JSON = JSONSerializer\n\n @property\n def serializer_class(self: SupportedSerializers) -> AbstractSerializer:\n \"\"\"Return the serializer class.\"\"\"\n return self.value\n"},{"location":"from_pydoc/generated/snapshots/serialization/supported_serializers/#libdebug.snapshots.serialization.supported_serializers.SupportedSerializers.serializer_class","title":"serializer_class property","text":"Return the serializer class.
"},{"location":"from_pydoc/generated/snapshots/thread/lw_thread_snapshot/","title":"libdebug.snapshots.thread.lw_thread_snapshot","text":""},{"location":"from_pydoc/generated/snapshots/thread/lw_thread_snapshot/#libdebug.snapshots.thread.lw_thread_snapshot.LightweightThreadSnapshot","title":"LightweightThreadSnapshot","text":" Bases: ThreadSnapshot
This object represents a snapshot of the target thread. It has to be initialized by a ProcessSnapshot, since it initializes its properties with shared process state. It holds information about a thread's state.
Snapshot levels: - base: Registers - writable: Registers, writable memory contents - full: Registers, all readable memory contents
Source code in libdebug/snapshots/thread/lw_thread_snapshot.py class LightweightThreadSnapshot(ThreadSnapshot):\n \"\"\"This object represents a snapshot of the target thread. It has to be initialized by a ProcessSnapshot, since it initializes its properties with shared process state. It holds information about a thread's state.\n\n Snapshot levels:\n - base: Registers\n - writable: Registers, writable memory contents\n - full: Registers, all readable memory contents\n \"\"\"\n\n def __init__(\n self: LightweightThreadSnapshot,\n thread: ThreadContext,\n process_snapshot: ProcessSnapshot,\n ) -> None:\n \"\"\"Creates a new snapshot object for the given thread.\n\n Args:\n thread (ThreadContext): The thread to take a snapshot of.\n process_snapshot (ProcessSnapshot): The process snapshot to which the thread belongs.\n \"\"\"\n # Set id of the snapshot and increment the counter\n self.snapshot_id = thread._snapshot_count\n thread.notify_snapshot_taken()\n\n # Basic snapshot info\n self.thread_id = thread.thread_id\n self.tid = thread.tid\n\n # If there is a name, append the thread id\n if process_snapshot.name is None:\n self.name = None\n else:\n self.name = f\"{process_snapshot.name} - Thread {self.tid}\"\n\n # Get thread registers\n self._save_regs(thread)\n\n self._proc_snapshot = process_snapshot\n\n @property\n def level(self: LightweightThreadSnapshot) -> str:\n \"\"\"Returns the snapshot level.\"\"\"\n return self._proc_snapshot.level\n\n @property\n def arch(self: LightweightThreadSnapshot) -> str:\n \"\"\"Returns the architecture of the thread snapshot.\"\"\"\n return self._proc_snapshot.arch\n\n @property\n def maps(self: LightweightThreadSnapshot) -> MemoryMapSnapshotList:\n \"\"\"Returns the memory map snapshot list associated with the process snapshot.\"\"\"\n return self._proc_snapshot.maps\n\n @property\n def _memory(self: LightweightThreadSnapshot) -> SnapshotMemoryView:\n \"\"\"Returns the memory view associated with the process snapshot.\"\"\"\n return self._proc_snapshot._memory\n"},{"location":"from_pydoc/generated/snapshots/thread/lw_thread_snapshot/#libdebug.snapshots.thread.lw_thread_snapshot.LightweightThreadSnapshot._memory","title":"_memory property","text":"Returns the memory view associated with the process snapshot.
"},{"location":"from_pydoc/generated/snapshots/thread/lw_thread_snapshot/#libdebug.snapshots.thread.lw_thread_snapshot.LightweightThreadSnapshot.arch","title":"arch property","text":"Returns the architecture of the thread snapshot.
"},{"location":"from_pydoc/generated/snapshots/thread/lw_thread_snapshot/#libdebug.snapshots.thread.lw_thread_snapshot.LightweightThreadSnapshot.level","title":"level property","text":"Returns the snapshot level.
"},{"location":"from_pydoc/generated/snapshots/thread/lw_thread_snapshot/#libdebug.snapshots.thread.lw_thread_snapshot.LightweightThreadSnapshot.maps","title":"maps property","text":"Returns the memory map snapshot list associated with the process snapshot.
"},{"location":"from_pydoc/generated/snapshots/thread/lw_thread_snapshot/#libdebug.snapshots.thread.lw_thread_snapshot.LightweightThreadSnapshot.__init__","title":"__init__(thread, process_snapshot)","text":"Creates a new snapshot object for the given thread.
Parameters:
Name Type Description Defaultthread ThreadContext The thread to take a snapshot of.
requiredprocess_snapshot ProcessSnapshot The process snapshot to which the thread belongs.
required Source code inlibdebug/snapshots/thread/lw_thread_snapshot.py def __init__(\n self: LightweightThreadSnapshot,\n thread: ThreadContext,\n process_snapshot: ProcessSnapshot,\n) -> None:\n \"\"\"Creates a new snapshot object for the given thread.\n\n Args:\n thread (ThreadContext): The thread to take a snapshot of.\n process_snapshot (ProcessSnapshot): The process snapshot to which the thread belongs.\n \"\"\"\n # Set id of the snapshot and increment the counter\n self.snapshot_id = thread._snapshot_count\n thread.notify_snapshot_taken()\n\n # Basic snapshot info\n self.thread_id = thread.thread_id\n self.tid = thread.tid\n\n # If there is a name, append the thread id\n if process_snapshot.name is None:\n self.name = None\n else:\n self.name = f\"{process_snapshot.name} - Thread {self.tid}\"\n\n # Get thread registers\n self._save_regs(thread)\n\n self._proc_snapshot = process_snapshot\n"},{"location":"from_pydoc/generated/snapshots/thread/lw_thread_snapshot_diff/","title":"libdebug.snapshots.thread.lw_thread_snapshot_diff","text":""},{"location":"from_pydoc/generated/snapshots/thread/lw_thread_snapshot_diff/#libdebug.snapshots.thread.lw_thread_snapshot_diff.LightweightThreadSnapshotDiff","title":"LightweightThreadSnapshotDiff","text":" Bases: ThreadSnapshotDiff
This object represents a diff between thread snapshots.
Source code inlibdebug/snapshots/thread/lw_thread_snapshot_diff.py class LightweightThreadSnapshotDiff(ThreadSnapshotDiff):\n \"\"\"This object represents a diff between thread snapshots.\"\"\"\n\n def __init__(\n self: LightweightThreadSnapshotDiff,\n snapshot1: ThreadSnapshot,\n snapshot2: ThreadSnapshot,\n process_diff: ProcessSnapshotDiff,\n ) -> ThreadSnapshotDiff:\n \"\"\"Returns a diff between given snapshots of the same thread.\n\n Args:\n snapshot1 (ThreadSnapshot): A thread snapshot.\n snapshot2 (ThreadSnapshot): A thread snapshot.\n process_diff (ProcessSnapshotDiff): The diff of the process to which the thread belongs.\n \"\"\"\n # Generic diff initialization\n Diff.__init__(self, snapshot1, snapshot2)\n\n # Register diffs\n self._save_reg_diffs()\n\n self._proc_diff = process_diff\n\n @property\n def maps(self: LightweightThreadSnapshotDiff) -> list[MemoryMapDiff]:\n \"\"\"Return the memory map diff.\"\"\"\n return self._proc_diff.maps\n"},{"location":"from_pydoc/generated/snapshots/thread/lw_thread_snapshot_diff/#libdebug.snapshots.thread.lw_thread_snapshot_diff.LightweightThreadSnapshotDiff.maps","title":"maps property","text":"Return the memory map diff.
"},{"location":"from_pydoc/generated/snapshots/thread/lw_thread_snapshot_diff/#libdebug.snapshots.thread.lw_thread_snapshot_diff.LightweightThreadSnapshotDiff.__init__","title":"__init__(snapshot1, snapshot2, process_diff)","text":"Returns a diff between given snapshots of the same thread.
Parameters:
Name Type Description Defaultsnapshot1 ThreadSnapshot A thread snapshot.
requiredsnapshot2 ThreadSnapshot A thread snapshot.
requiredprocess_diff ProcessSnapshotDiff The diff of the process to which the thread belongs.
required Source code inlibdebug/snapshots/thread/lw_thread_snapshot_diff.py def __init__(\n self: LightweightThreadSnapshotDiff,\n snapshot1: ThreadSnapshot,\n snapshot2: ThreadSnapshot,\n process_diff: ProcessSnapshotDiff,\n) -> ThreadSnapshotDiff:\n \"\"\"Returns a diff between given snapshots of the same thread.\n\n Args:\n snapshot1 (ThreadSnapshot): A thread snapshot.\n snapshot2 (ThreadSnapshot): A thread snapshot.\n process_diff (ProcessSnapshotDiff): The diff of the process to which the thread belongs.\n \"\"\"\n # Generic diff initialization\n Diff.__init__(self, snapshot1, snapshot2)\n\n # Register diffs\n self._save_reg_diffs()\n\n self._proc_diff = process_diff\n"},{"location":"from_pydoc/generated/snapshots/thread/thread_snapshot/","title":"libdebug.snapshots.thread.thread_snapshot","text":""},{"location":"from_pydoc/generated/snapshots/thread/thread_snapshot/#libdebug.snapshots.thread.thread_snapshot.ThreadSnapshot","title":"ThreadSnapshot","text":" Bases: Snapshot
This object represents a snapshot of the target thread. It holds information about a thread's state.
Snapshot levels: - base: Registers - writable: Registers, writable memory contents - full: Registers, all readable memory contents
Source code in libdebug/snapshots/thread/thread_snapshot.py class ThreadSnapshot(Snapshot):\n \"\"\"This object represents a snapshot of the target thread. It holds information about a thread's state.\n\n Snapshot levels:\n - base: Registers\n - writable: Registers, writable memory contents\n - full: Registers, all readable memory contents\n \"\"\"\n\n def __init__(self: ThreadSnapshot, thread: ThreadContext, level: str = \"base\", name: str | None = None) -> None:\n \"\"\"Creates a new snapshot object for the given thread.\n\n Args:\n thread (ThreadContext): The thread to take a snapshot of.\n level (str, optional): The level of the snapshot. Defaults to \"base\".\n name (str, optional): A name associated to the snapshot. Defaults to None.\n \"\"\"\n # Set id of the snapshot and increment the counter\n self.snapshot_id = thread._snapshot_count\n thread.notify_snapshot_taken()\n\n # Basic snapshot info\n self.thread_id = thread.thread_id\n self.tid = thread.tid\n self.name = name\n self.level = level\n self.arch = thread._internal_debugger.arch\n self.aslr_enabled = thread._internal_debugger.aslr_enabled\n self._process_full_path = thread.debugger._internal_debugger._process_full_path\n self._process_name = thread.debugger._internal_debugger._process_name\n self._serialization_helper = thread._internal_debugger.serialization_helper\n\n # Get thread registers\n self._save_regs(thread)\n\n # Memory maps\n match level:\n case \"base\":\n map_list = []\n\n for curr_map in thread.debugger.maps:\n saved_map = MemoryMapSnapshot(\n start=curr_map.start,\n end=curr_map.end,\n permissions=curr_map.permissions,\n size=curr_map.size,\n offset=curr_map.offset,\n backing_file=curr_map.backing_file,\n content=None,\n )\n map_list.append(saved_map)\n\n self.maps = MemoryMapSnapshotList(map_list, self._process_name, self._process_full_path)\n\n self._memory = None\n case \"writable\":\n if not thread.debugger.fast_memory:\n liblog.warning(\n \"Memory snapshot requested but fast memory is not enabled. This will take a long time.\",\n )\n\n # Save all writable memory pages\n self._save_memory_maps(thread.debugger._internal_debugger, writable_only=True)\n\n self._memory = SnapshotMemoryView(self, thread.debugger.symbols)\n case \"full\":\n if not thread.debugger.fast_memory:\n liblog.warning(\n \"Memory snapshot requested but fast memory is not enabled. This will take a long time.\",\n )\n\n # Save all memory pages\n self._save_memory_maps(thread._internal_debugger, writable_only=False)\n\n self._memory = SnapshotMemoryView(self, thread.debugger.symbols)\n case _:\n raise ValueError(f\"Invalid snapshot level {level}\")\n\n # Log the creation of the snapshot\n named_addition = \" named \" + self.name if name is not None else \"\"\n liblog.debugger(\n f\"Created snapshot {self.snapshot_id} of level {self.level} for thread {self.tid}{named_addition}\",\n )\n\n def diff(self: ThreadSnapshot, other: ThreadSnapshot) -> Diff:\n \"\"\"Creates a diff object between two snapshots.\"\"\"\n if not isinstance(other, ThreadSnapshot):\n raise TypeError(\"Both arguments must be ThreadSnapshot objects.\")\n\n return ThreadSnapshotDiff(self, other)\n"},{"location":"from_pydoc/generated/snapshots/thread/thread_snapshot/#libdebug.snapshots.thread.thread_snapshot.ThreadSnapshot.__init__","title":"__init__(thread, level='base', name=None)","text":"Creates a new snapshot object for the given thread.
Parameters:
Name Type Description Defaultthread ThreadContext The thread to take a snapshot of.
requiredlevel str The level of the snapshot. Defaults to \"base\".
'base' name str A name associated to the snapshot. Defaults to None.
None Source code in libdebug/snapshots/thread/thread_snapshot.py def __init__(self: ThreadSnapshot, thread: ThreadContext, level: str = \"base\", name: str | None = None) -> None:\n \"\"\"Creates a new snapshot object for the given thread.\n\n Args:\n thread (ThreadContext): The thread to take a snapshot of.\n level (str, optional): The level of the snapshot. Defaults to \"base\".\n name (str, optional): A name associated to the snapshot. Defaults to None.\n \"\"\"\n # Set id of the snapshot and increment the counter\n self.snapshot_id = thread._snapshot_count\n thread.notify_snapshot_taken()\n\n # Basic snapshot info\n self.thread_id = thread.thread_id\n self.tid = thread.tid\n self.name = name\n self.level = level\n self.arch = thread._internal_debugger.arch\n self.aslr_enabled = thread._internal_debugger.aslr_enabled\n self._process_full_path = thread.debugger._internal_debugger._process_full_path\n self._process_name = thread.debugger._internal_debugger._process_name\n self._serialization_helper = thread._internal_debugger.serialization_helper\n\n # Get thread registers\n self._save_regs(thread)\n\n # Memory maps\n match level:\n case \"base\":\n map_list = []\n\n for curr_map in thread.debugger.maps:\n saved_map = MemoryMapSnapshot(\n start=curr_map.start,\n end=curr_map.end,\n permissions=curr_map.permissions,\n size=curr_map.size,\n offset=curr_map.offset,\n backing_file=curr_map.backing_file,\n content=None,\n )\n map_list.append(saved_map)\n\n self.maps = MemoryMapSnapshotList(map_list, self._process_name, self._process_full_path)\n\n self._memory = None\n case \"writable\":\n if not thread.debugger.fast_memory:\n liblog.warning(\n \"Memory snapshot requested but fast memory is not enabled. This will take a long time.\",\n )\n\n # Save all writable memory pages\n self._save_memory_maps(thread.debugger._internal_debugger, writable_only=True)\n\n self._memory = SnapshotMemoryView(self, thread.debugger.symbols)\n case \"full\":\n if not thread.debugger.fast_memory:\n liblog.warning(\n \"Memory snapshot requested but fast memory is not enabled. This will take a long time.\",\n )\n\n # Save all memory pages\n self._save_memory_maps(thread._internal_debugger, writable_only=False)\n\n self._memory = SnapshotMemoryView(self, thread.debugger.symbols)\n case _:\n raise ValueError(f\"Invalid snapshot level {level}\")\n\n # Log the creation of the snapshot\n named_addition = \" named \" + self.name if name is not None else \"\"\n liblog.debugger(\n f\"Created snapshot {self.snapshot_id} of level {self.level} for thread {self.tid}{named_addition}\",\n )\n"},{"location":"from_pydoc/generated/snapshots/thread/thread_snapshot/#libdebug.snapshots.thread.thread_snapshot.ThreadSnapshot.diff","title":"diff(other)","text":"Creates a diff object between two snapshots.
Source code inlibdebug/snapshots/thread/thread_snapshot.py def diff(self: ThreadSnapshot, other: ThreadSnapshot) -> Diff:\n \"\"\"Creates a diff object between two snapshots.\"\"\"\n if not isinstance(other, ThreadSnapshot):\n raise TypeError(\"Both arguments must be ThreadSnapshot objects.\")\n\n return ThreadSnapshotDiff(self, other)\n"},{"location":"from_pydoc/generated/snapshots/thread/thread_snapshot_diff/","title":"libdebug.snapshots.thread.thread_snapshot_diff","text":""},{"location":"from_pydoc/generated/snapshots/thread/thread_snapshot_diff/#libdebug.snapshots.thread.thread_snapshot_diff.ThreadSnapshotDiff","title":"ThreadSnapshotDiff","text":" Bases: Diff
This object represents a diff between thread snapshots.
Source code inlibdebug/snapshots/thread/thread_snapshot_diff.py class ThreadSnapshotDiff(Diff):\n \"\"\"This object represents a diff between thread snapshots.\"\"\"\n\n def __init__(self: ThreadSnapshotDiff, snapshot1: ThreadSnapshot, snapshot2: ThreadSnapshot) -> ThreadSnapshotDiff:\n \"\"\"Returns a diff between given snapshots of the same thread.\n\n Args:\n snapshot1 (ThreadSnapshot): A thread snapshot.\n snapshot2 (ThreadSnapshot): A thread snapshot.\n \"\"\"\n super().__init__(snapshot1, snapshot2)\n\n # Register diffs\n self._save_reg_diffs()\n\n # Memory map diffs\n self._resolve_maps_diff()\n\n if (self.snapshot1._process_name == self.snapshot2._process_name) and (\n self.snapshot1.aslr_enabled or self.snapshot2.aslr_enabled\n ):\n liblog.warning(\"ASLR is enabled in either or both snapshots. Diff may be messy.\")\n"},{"location":"from_pydoc/generated/snapshots/thread/thread_snapshot_diff/#libdebug.snapshots.thread.thread_snapshot_diff.ThreadSnapshotDiff.__init__","title":"__init__(snapshot1, snapshot2)","text":"Returns a diff between given snapshots of the same thread.
Parameters:
Name Type Description Defaultsnapshot1 ThreadSnapshot A thread snapshot.
requiredsnapshot2 ThreadSnapshot A thread snapshot.
required Source code inlibdebug/snapshots/thread/thread_snapshot_diff.py def __init__(self: ThreadSnapshotDiff, snapshot1: ThreadSnapshot, snapshot2: ThreadSnapshot) -> ThreadSnapshotDiff:\n \"\"\"Returns a diff between given snapshots of the same thread.\n\n Args:\n snapshot1 (ThreadSnapshot): A thread snapshot.\n snapshot2 (ThreadSnapshot): A thread snapshot.\n \"\"\"\n super().__init__(snapshot1, snapshot2)\n\n # Register diffs\n self._save_reg_diffs()\n\n # Memory map diffs\n self._resolve_maps_diff()\n\n if (self.snapshot1._process_name == self.snapshot2._process_name) and (\n self.snapshot1.aslr_enabled or self.snapshot2.aslr_enabled\n ):\n liblog.warning(\"ASLR is enabled in either or both snapshots. Diff may be messy.\")\n"},{"location":"from_pydoc/generated/utils/file_utils/","title":"libdebug.utils.file_utils","text":""},{"location":"from_pydoc/generated/utils/file_utils/#libdebug.utils.file_utils.ensure_file_executable","title":"ensure_file_executable(path) cached","text":"Ensures that a file exists and is executable.
Parameters:
Name Type Description Defaultpath str The path to the file.
required ThrowsFileNotFoundError: If the file does not exist. PermissionError: If the file is not executable.
Source code inlibdebug/utils/file_utils.py @functools.cache\ndef ensure_file_executable(path: str) -> None:\n \"\"\"Ensures that a file exists and is executable.\n\n Args:\n path (str): The path to the file.\n\n Throws:\n FileNotFoundError: If the file does not exist.\n PermissionError: If the file is not executable.\n \"\"\"\n file = Path(path)\n\n if not file.exists():\n raise FileNotFoundError(f\"File '{path}' does not exist.\")\n\n if not file.is_file():\n raise FileNotFoundError(f\"Path '{path}' is not a file.\")\n\n if not os.access(file, os.X_OK):\n raise PermissionError(f\"File '{path}' is not executable.\")\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/","title":"libdebug.utils.pprint_primitives","text":""},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.get_colored_saved_address_util","title":"get_colored_saved_address_util(return_address, maps, external_symbols=None)","text":"Pretty prints a return address for backtrace pprint.
Source code inlibdebug/utils/pprint_primitives.py def get_colored_saved_address_util(\n return_address: int,\n maps: MemoryMapList | MemoryMapSnapshotList,\n external_symbols: SymbolList = None,\n) -> str:\n \"\"\"Pretty prints a return address for backtrace pprint.\"\"\"\n filtered_maps = maps.filter(return_address)\n\n return_address_symbol = resolve_symbol_name_in_maps_util(return_address, external_symbols)\n\n permissions = filtered_maps[0].permissions\n if \"rwx\" in permissions:\n style = f\"{ANSIColors.UNDERLINE}{ANSIColors.RED}\"\n elif \"x\" in permissions:\n style = f\"{ANSIColors.RED}\"\n elif \"w\" in permissions:\n # This should not happen, but it's here for completeness\n style = f\"{ANSIColors.YELLOW}\"\n elif \"r\" in permissions:\n # This should not happen, but it's here for completeness\n style = f\"{ANSIColors.GREEN}\"\n if return_address_symbol[:2] == \"0x\":\n return f\"{style}{return_address:#x} {ANSIColors.RESET}\"\n else:\n return f\"{style}{return_address:#x} <{return_address_symbol}> {ANSIColors.RESET}\"\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pad_colored_string","title":"pad_colored_string(string, length)","text":"Pads a colored string with spaces to the specified length.
Parameters:
Name Type Description Defaultstring str The string to pad.
requiredlength int The desired length of the string.
requiredReturns:
Name Type Descriptionstr str The padded string.
Source code inlibdebug/utils/pprint_primitives.py def pad_colored_string(string: str, length: int) -> str:\n \"\"\"Pads a colored string with spaces to the specified length.\n\n Args:\n string (str): The string to pad.\n length (int): The desired length of the string.\n\n Returns:\n str: The padded string.\n \"\"\"\n stripped_string = strip_ansi_codes(string)\n padding_length = length - len(stripped_string)\n if padding_length > 0:\n return string + \" \" * padding_length\n return string\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pprint_backtrace_util","title":"pprint_backtrace_util(backtrace, maps, external_symbols=None)","text":"Pretty prints the current backtrace of the thread.
Source code inlibdebug/utils/pprint_primitives.py def pprint_backtrace_util(\n backtrace: list,\n maps: MemoryMapList | MemoryMapSnapshotList,\n external_symbols: SymbolList = None,\n) -> None:\n \"\"\"Pretty prints the current backtrace of the thread.\"\"\"\n for return_address in backtrace:\n print(get_colored_saved_address_util(return_address, maps, external_symbols))\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pprint_diff_line","title":"pprint_diff_line(content, is_added)","text":"Prints a line of a diff.
Source code inlibdebug/utils/pprint_primitives.py def pprint_diff_line(content: str, is_added: bool) -> None:\n \"\"\"Prints a line of a diff.\"\"\"\n color = ANSIColors.GREEN if is_added else ANSIColors.RED\n\n prefix = \">>>\" if is_added else \"<<<\"\n\n print(f\"{prefix} {color}{content}{ANSIColors.RESET}\")\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pprint_diff_substring","title":"pprint_diff_substring(content, start, end)","text":"Prints a diff with only a substring highlighted.
Source code inlibdebug/utils/pprint_primitives.py def pprint_diff_substring(content: str, start: int, end: int) -> None:\n \"\"\"Prints a diff with only a substring highlighted.\"\"\"\n color = ANSIColors.ORANGE\n\n print(f\"{content[:start]}{color}{content[start:end]}{ANSIColors.RESET}{content[end:]}\")\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pprint_inline_diff","title":"pprint_inline_diff(content, start, end, correction)","text":"Prints a diff with inline changes.
Source code inlibdebug/utils/pprint_primitives.py def pprint_inline_diff(content: str, start: int, end: int, correction: str) -> None:\n \"\"\"Prints a diff with inline changes.\"\"\"\n print(\n f\"{content[:start]}{ANSIColors.RED}{ANSIColors.STRIKE}{content[start:end]}{ANSIColors.RESET} {ANSIColors.GREEN}{correction}{ANSIColors.RESET}{content[end:]}\"\n )\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pprint_maps_util","title":"pprint_maps_util(maps)","text":"Prints the memory maps of the process.
Source code inlibdebug/utils/pprint_primitives.py def pprint_maps_util(maps: MemoryMapList | MemoryMapSnapshotList) -> None:\n \"\"\"Prints the memory maps of the process.\"\"\"\n header = f\"{'start':>18} {'end':>18} {'perm':>6} {'size':>8} {'offset':>8} {'backing_file':<20}\"\n print(header)\n for memory_map in maps:\n info = (\n f\"{memory_map.start:#18x} \"\n f\"{memory_map.end:#18x} \"\n f\"{memory_map.permissions:>6} \"\n f\"{memory_map.size:#8x} \"\n f\"{memory_map.offset:#8x} \"\n f\"{memory_map.backing_file}\"\n )\n if \"rwx\" in memory_map.permissions:\n print(f\"{ANSIColors.RED}{ANSIColors.UNDERLINE}{info}{ANSIColors.RESET}\")\n elif \"x\" in memory_map.permissions:\n print(f\"{ANSIColors.RED}{info}{ANSIColors.RESET}\")\n elif \"w\" in memory_map.permissions:\n print(f\"{ANSIColors.YELLOW}{info}{ANSIColors.RESET}\")\n elif \"r\" in memory_map.permissions:\n print(f\"{ANSIColors.GREEN}{info}{ANSIColors.RESET}\")\n else:\n print(info)\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pprint_memory_diff_util","title":"pprint_memory_diff_util(address_start, extract_before, extract_after, word_size, maps, integer_mode=False)","text":"Pretty prints the memory diff.
Source code inlibdebug/utils/pprint_primitives.py def pprint_memory_diff_util(\n address_start: int,\n extract_before: bytes,\n extract_after: bytes,\n word_size: int,\n maps: MemoryMapSnapshotList,\n integer_mode: bool = False,\n) -> None:\n \"\"\"Pretty prints the memory diff.\"\"\"\n # Loop through each word-sized chunk\n for i in range(0, len(extract_before), word_size):\n # Calculate the current address\n current_address = address_start + i\n\n # Extract word-sized chunks from both extracts\n word_before = extract_before[i : i + word_size]\n word_after = extract_after[i : i + word_size]\n\n # Convert each byte in the chunks to hex and compare\n formatted_before = []\n formatted_after = []\n for byte_before, byte_after in zip(word_before, word_after, strict=False):\n # Check for changes and apply color\n if byte_before != byte_after:\n formatted_before.append(f\"{ANSIColors.RED}{byte_before:02x}{ANSIColors.RESET}\")\n formatted_after.append(f\"{ANSIColors.GREEN}{byte_after:02x}{ANSIColors.RESET}\")\n else:\n formatted_before.append(f\"{ANSIColors.RESET}{byte_before:02x}{ANSIColors.RESET}\")\n formatted_after.append(f\"{ANSIColors.RESET}{byte_after:02x}{ANSIColors.RESET}\")\n\n # Join the formatted bytes into a string for each column\n if not integer_mode:\n before_str = \" \".join(formatted_before)\n after_str = \" \".join(formatted_after)\n else:\n # Right now libdebug only considers little-endian systems, if this changes,\n # this code should be passed the endianness of the system and format the bytes accordingly\n before_str = \"0x\" + \"\".join(formatted_before[::-1])\n after_str = \"0x\" + \"\".join(formatted_after[::-1])\n\n current_address_str = _get_colored_address_string(current_address, maps)\n\n # Print the memory diff with the address for this word\n print(f\"{current_address_str}: {before_str} 
-> {after_str}\")\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pprint_memory_util","title":"pprint_memory_util(address_start, extract, word_size, maps, integer_mode=False)","text":"Pretty prints the memory.
Source code inlibdebug/utils/pprint_primitives.py def pprint_memory_util(\n address_start: int,\n extract: bytes,\n word_size: int,\n maps: MemoryMapList,\n integer_mode: bool = False,\n) -> None:\n \"\"\"Pretty prints the memory.\"\"\"\n # Loop through each word-sized chunk\n for i in range(0, len(extract), word_size):\n # Calculate the current address\n current_address = address_start + i\n\n # Extract word-sized chunks from both extracts\n word = extract[i : i + word_size]\n\n # Convert each byte in the chunks to hex and compare\n formatted_word = [f\"{byte:02x}\" for byte in word]\n\n # Join the formatted bytes into a string for each column\n out = \" \".join(formatted_word) if not integer_mode else \"0x\" + \"\".join(formatted_word[::-1])\n\n current_address_str = _get_colored_address_string(current_address, maps)\n\n # Print the memory diff with the address for this word\n print(f\"{current_address_str}: {out}\")\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pprint_reg_diff_large_util","title":"pprint_reg_diff_large_util(curr_reg_tuple, reg_tuple_before, reg_tuple_after)","text":"Pretty prints a register diff.
Source code inlibdebug/utils/pprint_primitives.py def pprint_reg_diff_large_util(\n curr_reg_tuple: (str, str),\n reg_tuple_before: (int, int),\n reg_tuple_after: (int, int),\n) -> None:\n \"\"\"Pretty prints a register diff.\"\"\"\n print(f\"{ANSIColors.BLUE}\" + \"{\" + f\"{ANSIColors.RESET}\")\n for reg_name, value_before, value_after in zip(curr_reg_tuple, reg_tuple_before, reg_tuple_after, strict=False):\n has_changed = value_before != value_after\n\n # Print the old and new values\n if has_changed:\n formatted_value_before = (\n f\"{ANSIColors.RED}{ANSIColors.STRIKE}\"\n + (f\"{value_before:#x}\" if isinstance(value_before, int) else str(value_before))\n + f\"{ANSIColors.RESET}\"\n )\n\n formatted_value_after = (\n f\"{ANSIColors.GREEN}\"\n + (f\"{value_after:#x}\" if isinstance(value_after, int) else str(value_after))\n + f\"{ANSIColors.RESET}\"\n )\n\n print(\n f\" {ANSIColors.RED}{reg_name}{ANSIColors.RESET}\\t{formatted_value_before}\\t->\\t{formatted_value_after}\"\n )\n else:\n formatted_value = f\"{value_before:#x}\" if isinstance(value_before, int) else str(value_before)\n\n print(f\" {ANSIColors.RED}{reg_name}{ANSIColors.RESET}\\t{formatted_value}\")\n\n print(f\"{ANSIColors.BLUE}\" + \"}\" + f\"{ANSIColors.RESET}\")\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pprint_reg_diff_util","title":"pprint_reg_diff_util(curr_reg, maps_before, maps_after, before, after)","text":"Pretty prints a register diff.
Source code inlibdebug/utils/pprint_primitives.py def pprint_reg_diff_util(\n curr_reg: str,\n maps_before: MemoryMapList,\n maps_after: MemoryMapList,\n before: int,\n after: int,\n) -> None:\n \"\"\"Pretty prints a register diff.\"\"\"\n before_str = _get_colored_address_string(before, maps_before)\n after_str = _get_colored_address_string(after, maps_after)\n\n print(f\"{ANSIColors.RED}{curr_reg.ljust(12)}{ANSIColors.RESET}\\t{before_str}\\t{after_str}\")\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pprint_registers_all_util","title":"pprint_registers_all_util(registers, maps, gen_regs, spec_regs, vec_fp_regs)","text":"Pretty prints all the thread's registers.
Source code inlibdebug/utils/pprint_primitives.py def pprint_registers_all_util(\n registers: Registers,\n maps: MemoryMapList,\n gen_regs: list[str],\n spec_regs: list[str],\n vec_fp_regs: list[str],\n) -> None:\n \"\"\"Pretty prints all the thread's registers.\"\"\"\n pprint_registers_util(registers, maps, gen_regs)\n\n for t in spec_regs:\n _pprint_reg(registers, maps, t)\n\n for t in vec_fp_regs:\n print(f\"{ANSIColors.BLUE}\" + \"{\" + f\"{ANSIColors.RESET}\")\n for register in t:\n value = getattr(registers, register)\n formatted_value = f\"{value:#x}\" if isinstance(value, int) else str(value)\n print(f\" {ANSIColors.RED}{register}{ANSIColors.RESET}\\t{formatted_value}\")\n\n print(f\"{ANSIColors.BLUE}\" + \"}\" + f\"{ANSIColors.RESET}\")\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pprint_registers_util","title":"pprint_registers_util(registers, maps, gen_regs)","text":"Pretty prints the thread's registers.
Source code inlibdebug/utils/pprint_primitives.py def pprint_registers_util(registers: Registers, maps: MemoryMapList, gen_regs: list[str]) -> None:\n \"\"\"Pretty prints the thread's registers.\"\"\"\n for curr_reg in gen_regs:\n _pprint_reg(registers, maps, curr_reg)\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.strip_ansi_codes","title":"strip_ansi_codes(string)","text":"Strips ANSI escape codes from a string.
Parameters:
Name Type Description Defaultstring str The string to strip.
requiredReturns:
Name Type Descriptionstr str The string without the ANSI escape codes.
Source code inlibdebug/utils/pprint_primitives.py def strip_ansi_codes(string: str) -> str:\n \"\"\"Strips ANSI escape codes from a string.\n\n Args:\n string (str): The string to strip.\n\n Returns:\n str: The string without the ANSI escape codes.\n \"\"\"\n ansi_escape = re.compile(r\"\\x1B[@-_][0-?]*[ -/]*[@-~]\")\n return ansi_escape.sub(\"\", string)\n"},{"location":"logging/liblog/","title":"Logging","text":"Debugging an application with the freedom of a rich API can lead to flows which are hard to unravel. To aid the user in the debugging process, libdebug provides logging. The logging system is implemented in the submodule liblog and adheres to the Python logging system.
By default, libdebug only prints critical logs such as warnings and errors. However, the user can enable more verbose logging by passing the corresponding command-line argument (argv) to the script.
The available logging modes for events are:
Mode Description debugger Logs related to the debugging operations performed on the process by libdebug. pipe Logs related to interactions with the process pipe: bytes received and bytes sent. dbg Combination of the pipe and debugger options. pwntools compatibility
As reported in this documentation, the argv parameters passed to libdebug are lowercase. This choice is made to avoid conflicts with pwntools, which intercepts all uppercase arguments.
The debugger option displays all logs related to the debugging operations performed on the process by libdebug.
The pipe option, on the other hand, displays all logs related to interactions with the process pipe: bytes received and bytes sent.
The dbg option is the combination of the pipe and debugger options. It displays all logs related to the debugging operations performed on the process by libdebug, as well as interactions with the process pipe: bytes received and bytes sent.
libdebug defines logging levels and information types to allow the user to filter the granularity of the information they want to see. Logger levels for each event type can be changed at runtime using the libcontext module.
Example of setting logging levels
from libdebug import libcontext\n\nlibcontext.general_logger = 'DEBUG'\nlibcontext.pipe_logger = 'DEBUG'\nlibcontext.debugger_logger = 'DEBUG'\n Logger Description Supported Levels Default Level general_logger Logger used for general libdebug logs, different from the pipe and debugger logs. DEBUG, INFO, WARNING, SILENT INFO pipe_logger Logger used for pipe logs. DEBUG, SILENT SILENT debugger_logger Logger used for debugger logs. DEBUG, SILENT SILENT Let's see what each logging level actually logs:
Log Level Debug Logs Information Logs Warnings DEBUG yes yes yes INFO no yes yes WARNING no no yes SILENT no no no","boost":4},{"location":"logging/liblog/#temporary-logging-level-changes","title":"Temporary logging level changes","text":"Logger levels can be temporarily changed at runtime using a with statement, as shown in the following example.
from libdebug import libcontext\n\nwith libcontext.tmp(pipe_logger='SILENT', debugger_logger='DEBUG'):\n r.sendline(b'gimme the flag')\n","boost":4},{"location":"multithreading/multi-stuff/","title":"The Family of the Process","text":"Debugging is all fun and games until you have to deal with a process that spawns children.
So...how are children born? In the POSIX standard, children of a process can be either threads or processes. Threads share the same virtual address space, while processes have their own. POSIX-compliant systems such as Linux supply a variety of system calls to create children of both types.
flowchart TD\n P[Parent Process] -->|\"fork()\"| CP1[Child Process]\n P -->|\"clone()\"| T((Thread))\n P -->|\"vfork()\"| CP2[Child<br>Process]\n P -->|\"clone3()\"| T2((Thread))\n\n CP1 -->|\"fork()\"| GP[Grandchild<br>Process]\n T -->|\"clone()\"| ST((Sibling<br>Thread)) Example family tree of a process in the Linux kernel.","boost":4},{"location":"multithreading/multi-stuff/#processes","title":"Processes","text":"Child processes are created by system calls such as fork, vfork, clone, and clone3. The clone and clone3 system calls are configurable, as they allow the caller to specify the resources to be shared between the parent and child.
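The process-creation calls above are directly reachable from Python's os module. A minimal POSIX-only sketch of fork followed by reaping (the exit code 42 is arbitrary):

```python
import os

pid = os.fork()
if pid == 0:
    # Child: runs in its own copy of the parent's address space
    os._exit(42)

# Parent: reap the child and recover its exit status
_, status = os.waitpid(pid, 0)
print(os.waitstatus_to_exitcode(status))  # 42
```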
In the Linux kernel, the ptrace system call allows a tracer to handle events like process creation and termination.
Since version 0.8 Chutoro Nigiri, libdebug supports handling child processes. Read more about it in the dedicated Multiprocessing section.
","boost":4},{"location":"multithreading/multi-stuff/#threads","title":"Threads","text":"Threads of a running process in the POSIX Threads standard are children of the main process. They are created by the system calls clone and clone3. What distinguishes threads from processes is that threads share the same virtual address space.
libdebug offers a simple API to work with child threads. Read more about it in the dedicated Multithreading section.
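The shared address space that defines threads is easy to observe with Python's own threading module; this sketch is independent of the libdebug API:

```python
import threading

shared = []  # one object, visible to every thread in the process

def worker(n: int) -> None:
    shared.append(n)  # all threads mutate the same list

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))  # [0, 1, 2, 3]
```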
","boost":4},{"location":"multithreading/multiprocessing/","title":"Debugging Multiprocess Applications","text":"Since version 0.8 Chutoro Nigiri , libdebug supports debugging multiprocess applications. This feature allows you to attach to multiple processes and debug them simultaneously. This document explains how to use this feature and provides examples to help you get started.
","boost":4},{"location":"multithreading/multiprocessing/#a-child-process-is-born","title":"A Child Process is Born","text":"By default, libdebug will monitor all new children processes created by the tracee process. Of course, it will not retrieve past forked processes that have been created before an attach.
A new process is a big deal. For this reason, libdebug will provide you with a brand new Debugger object for each new child process. This object will be available in the list children attribute of the parent Debugger object.
Usage Example
from libdebug import debugger\n\nd = debugger(\"test\")\nd.run()\n\n[...]\n\nprint(f\"The process has spawned {len(d.children)} children\")\n\nfor child in d.children: # (1)!\n print(f\"Child PID: {child.pid}\")\n children attribute is a regular list. Indexing, slicing, and iterating are all supported. When a child process is spawned, it inherits the properties of the parent debugger. This includes whether ASLR is enabled, fast memory reading, and [other properties](../../basics/libdebug101/#what-else-can-i-do). However, the child debugger from that moment on will act independently. As such, any property changes made to the parent debugger will not affect the child debugger, and vice versa.
In terms of registered Stopping Events, the new debugger will be a blank slate. This means the debugger will not inherit breakpoints, watchpoints, syscall handlers, or signal catchers.
","boost":4},{"location":"multithreading/multiprocessing/#focusing-on-the-main-process","title":"Focusing on the Main Process","text":"Some applications may spawn a large number of children processes, and you may only be interested in debugging the main process. In this case, you can disable the automatic monitoring of children processes by setting the follow_children parameter to False when creating the Debugger object.
Usage Example
d = debugger(\"test\", follow_children=False)\nd.run()\n In this example, libdebug will only monitor the main process and ignore any child processes spawned by the tracee. However, you can also decide to stop monitoring child processes at any time during debugging by setting the follow_children attribute to False in a certain Debugger object.
When creating a snapshot of a process from the corresponding Debugger object, the snapshot will not include children processes, but only children threads. Read more about snapshots in the Save States section.
","boost":4},{"location":"multithreading/multiprocessing/#pipe-redirection","title":"Pipe Redirection","text":"By default, libdebug will redirect the standard input, output, and error of the child processes to pipes. This is how you can interact with these file descriptors using I/O commands. If you keep this parameter enabled, you will be able to interact with the child processes's standard I/O using the same PipeManager object that is provided upon creation of the root Debugger object. This is consistent with limitations of forking in the POSIX standard, where the child process inherits the file descriptors of the parent process.
Read more about disabling pipe redirection in the dedicated section.
","boost":4},{"location":"multithreading/multithreading/","title":"Debugging Multithreaded Applications","text":"Debugging multi-threaded applications can be a daunting task, particularly in an interactive debugger that is designed to operate on one thread at a time. libdebug offers a few features that will help you debug multi-threaded applications more intuitively and efficiently.
","boost":4},{"location":"multithreading/multithreading/#child-threads","title":"Child Threads","text":"libdebug automatically registers new threads and exposes their state with the same API as the main Debugger object. While technically threads can be running or stopped independently, libdebug will enforce a coherent state. This means that if a live thread is stopped, all other live threads will be stopped as well and if a continuation command is issued, all threads will be resumed.
stateDiagram-v2\n state fork_state <<fork>>\n [*] --> fork_state: d.interrupt()\n fork_state --> MainThread: STOP\n fork_state --> Child1: STOP\n fork_state --> Child2: STOP\n\n state join_state <<join>>\n MainThread --> join_state\n Child1 --> join_state\n Child2 --> join_state\n\n state fork_state1 <<fork>>\n join_state --> fork_state1: d.cont()\n fork_state1 --> MainThread_2: CONTINUE\n fork_state1 --> Child11: CONTINUE\n fork_state1 --> Child22: CONTINUE\n\n state join_state2 <<join>>\n MainThread_2 --> join_state2\n Child11 --> join_state2\n Child22 --> join_state2\n\n state fork_state2 <<fork>>\n join_state2 --> fork_state2: Breakpoint on Child 2\n fork_state2 --> MainThread_3: STOP\n fork_state2 --> Child111: STOP\n fork_state2 --> Child222: STOP\n\n state join_state3 <<join>>\n MainThread_3 --> join_state3\n Child111 --> join_state3\n Child222 --> join_state3\n\n %% State definitions with labels\n state \"Main Thread\" as MainThread\n state \"Child 1\" as Child1\n state \"Child 2\" as Child2\n state \"Main Thread\" as MainThread_2\n state \"Child 1\" as Child11\n state \"Child 2\" as Child22\n state \"Main Thread\" as MainThread_3\n state \"Child 1\" as Child111\n state \"Child 2\" as Child222 All live threads are synchronized in their execution state.","boost":4},{"location":"multithreading/multithreading/#libdebug-api-for-multithreading","title":"libdebug API for Multithreading","text":"To access the threads of a process, you can use the threads attribute of the Debugger object. This attribute will return a list of ThreadContext objects, each representing a thread of the process.
If you're already familiar with the Debugger object, you'll find the ThreadContext straightforward to use. The Debugger has always acted as a facade for the main thread, enabling you to access registers, memory, and other thread state fields exactly as you would for the main thread. The difference you will notice is that the ThreadContext object is missing a couple of fields that just don't make sense in the context of a single thread (e.g. symbols, which belong to the binary, and memory maps, since they are shared for the whole process).
from libdebug import debugger\n\nd = debugger(\"./so_many_threads\")\nd.run()\n\n# Reach the point of interest\nd.breakpoint(\"loom\", file=\"binary\")\nd.cont()\nd.wait()\n\nfor thread in d.threads:\n print(f\"Thread {thread.tid} stopped at {hex(thread.regs.rip)}\")\n print(\"Function frame:\")\n\n # Retrieve frame boundaries\n frame_start = thread.regs.rbp\n frame_end = thread.regs.rsp\n\n # Print function frame\n for addr in range(frame_end, frame_start, 8):\n print(f\" {addr:#16x}: {thread.memory[addr:addr+8].hex()}\")\n\n[...]\n","boost":4},{"location":"multithreading/multithreading/#properties-of-the-threadcontext","title":"Properties of the ThreadContext","text":"Property Type Description regs Registers The thread's registers. debugger Debugger The debugging context this thread belongs to. memory AbstractMemoryView The memory view of the debugged process (mem is an alias). instruction_pointer int The thread's instruction pointer. process_id int The process ID (pid is an alias). thread_id int The thread ID (tid is an alias). running bool Whether the process is running. saved_ip int The return address of the current function. dead bool Whether the thread is dead. exit_code int The thread's exit code (if dead). exit_signal str The thread's exit signal (if dead). syscall_arg0 int The thread's syscall argument 0. syscall_arg1 int The thread's syscall argument 1. syscall_arg2 int The thread's syscall argument 2. syscall_arg3 int The thread's syscall argument 3. syscall_arg4 int The thread's syscall argument 4. syscall_arg5 int The thread's syscall argument 5. syscall_number int The thread's syscall number. syscall_return int The thread's syscall return value. signal str The signal will be forwarded to the thread. signal_number int The signal number to forward to the thread. 
zombie bool Whether the thread is in a zombie state.","boost":4},{"location":"multithreading/multithreading/#methods-of-the-threadcontext","title":"Methods of the ThreadContext","text":"Method Description Return Type set_as_dead() Set the thread as dead. None step() Executes a single instruction of the process (si is an alias). None step_until(position: int, max_steps: int = -1, file: str = \"hybrid\") Executes instructions of the process until the specified location is reached (su is an alias). None finish(heuristic: str = \"backtrace\") Continues execution until the current function returns or the process stops (fin is an alias). None next() Executes the next instruction of the process. If the instruction is a call, the debugger will continue until the called function returns (fin is an alias). None backtrace(as_symbols: bool = False) Returns the current backtrace of the thread (see Stack Frame Utils). list pprint_backtrace() Pretty prints the current backtrace of the thread (see Pretty Printing). None pprint_registers() Pretty prints the thread's registers (see Pretty Printing). None pprint_regs() Alias for the pprint_registers method (see Pretty Printing). None pprint_registers_all() Pretty prints all the thread's registers (see Pretty Printing). None pprint_regs_all() Alias for the pprint_registers_all method (see Pretty Printing). None Meaning of the debugger object
When accessing state fields of the Debugger object (e.g. registers, memory), the debugger will act as an alias for the main thread. For example, doing d.regs.rax will be equivalent to doing d.threads[0].regs.rax.
","boost":4},{"location":"multithreading/multithreading/#shared-and-unshared-state","title":"Shared and Unshared State","text":"Each thread has its own register set, stack, and instruction pointer. However, the virtual address space is shared among all threads. This means that threads can access the same memory and share the same code.
How to access TLS?
While the virtual address space is shared between threads, each thread has its own Thread Local Storage (TLS) area. As it stands, libdebug does not provide a direct interface to the TLS area.
Let's see a couple of things to keep in mind when debugging multi-threaded applications with libdebug.
","boost":4},{"location":"multithreading/multithreading/#software-breakpoints","title":"Software Breakpoints","text":"Software breakpoints are implemented through code patching in the process memory. This means that a breakpoint set in one thread will be replicated across all threads.
When using synchronous breakpoints, you will need to \"diagnose\" the stopping event to determine which thread triggered the breakpoint. You can do this by checking the return value of the hit_on() method of the Breakpoint object. Passing the ThreadContext as an argument will return True if the breakpoint was hit by that thread.
Diagnosing a Synchronous Breakpoint
thread = d.threads[2]\n\nfor addr, bp in d.breakpoints.items():\n if bp.hit_on(thread):\n print(f\"Thread {thread.tid} hit breakpoint {addr:#x}\")\n When using asynchronous breakpoints, the breakpoint will be more intuitive to handle, as the signature of the callback function includes the ThreadContext object that triggered the breakpoint.
Handling an Asynchronous Breakpoint
def on_breakpoint_hit(t, bp):\n print(f\"Thread {t.tid} hit breakpoint {bp.address:#x}\")\n\nd.breakpoint(0x10ab, callback=on_breakpoint_hit, file=\"binary\")\n","boost":4},{"location":"multithreading/multithreading/#hardware-breakpoints-and-watchpoints","title":"Hardware Breakpoints and Watchpoints","text":"While hardware breakpoints are thread-specific, libdebug mirrors them across all threads. This is done to avoid asymmetries with software breakpoints. Watchpoints are hardware breakpoints, so this applies to them as well.
For consistency, syscall handlers are also enabled across all threads. The same considerations for synchronous and asynchronous breakpoints apply here as well.
Concurrency in Syscall Handling
When debugging entering and exiting events in syscalls, be mindful of the scheduling. The kernel may schedule a different thread to handle the syscall exit event right after the enter event of another thread.
","boost":4},{"location":"multithreading/multithreading/#signal-catching","title":"Signal Catching","text":"Who will receive the signal?Signal Catching is also shared among threads. Apart from consistency, this is a necessity. In fact, the kernel does not guarantee that a signal sent to a process will be dispatched to a specific thread. By contrast, when sending arbitrary signals through the ThreadContext object, the signal will be sent to the requested thread.
","boost":4},{"location":"multithreading/multithreading/#snapshot-behavior","title":"Snapshot Behavior","text":"When creating a snapshot of a process from the corresponding Debugger object, the snapshot will also save the state of all threads. You can also create a snapshot of a single thread by calling the create_snapshot() method from the ThreadContext object instead. Read more about snapshots in the Save States section.
When a thread or process terminates, it enters a zombie state. This is a temporary condition where the process is effectively dead but awaiting reaping by the parent or debugger, which involves reading its status. Reaping traced zombie threads can become complicated due to certain edge cases.
While libdebug automatically handles the reaping of zombie threads, it provides a property named zombie within the ThreadContext object, indicating whether the thread is in a zombie state. The same property is also available in the Debugger object, indicating whether the main thread is in a zombie state.
Example Code
if d.threads[1].zombie:\n print(\"The thread is a zombie\")\n sequenceDiagram\n participant Parent as Parent Process\n participant Child as Child Thread\n participant Kernel as Linux Kernel\n\n Note over Parent,Kernel: Normal Execution Phase\n Parent->>Child: clone()\n activate Child\n Child->>Kernel: Task added to the Process Table\n Kernel-->>Child: Thread ID\n\n Note over Parent,Kernel: Zombie Creation Phase\n Child->>Kernel: exit(statusCode)\n deactivate Child\n Note right of Kernel: Parent will be<br/>notified of exit\n Kernel->>Parent: SIGCHLD\n Note right of Parent: Parent Busy<br/>Cannot Process Signal\n\n Note over Parent,Kernel: Zombie State\n Note right of Child: Thread becomes<br/>zombie (defunct)<br/>- Maintains TID<br/>- Keeps exit status<br/>- Consumes minimal resources\n\n Note over Parent,Kernel: Reaping Phase\n Parent->>Kernel: waitpid()\n Kernel-->>Parent: Return Exit Status\n Kernel->>Kernel: Remove Zombie Entry<br/>from Process Table\n Note right of Kernel: Resources Released","boost":4},{"location":"quality_of_life/anti_debugging/","title":"Evasion of Anti-Debugging","text":"","boost":4},{"location":"quality_of_life/anti_debugging/#automatic-evasion-of-anti-debugging-techniques","title":"Automatic Evasion of Anti-Debugging Techniques","text":"A common anti-debugging technique for Linux ELF binaries is to invoke the ptrace syscall with the PTRACE_TRACEME argument. The syscall will fail if the binary is currently being traced by a debugger, as the kernel forbids a process from being traced by multiple debuggers.
Bypassing this technique involves intercepting such syscalls and altering the return value to make the binary believe that it is not being traced. While this can absolutely be performed manually, libdebug comes with a pre-made implementation that can save you precious time.
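The kernel rule behind this check (one tracer per task) is easy to verify directly with ctypes. In this sketch (an illustration only, not libdebug code), a forked child calls PTRACE_TRACEME twice: the first call succeeds, the second fails with EPERM because the task is already traced:

```python
import ctypes
import os

libc = ctypes.CDLL("libc.so.6", use_errno=True)
libc.ptrace.restype = ctypes.c_long
libc.ptrace.argtypes = [ctypes.c_long] * 4
PTRACE_TRACEME = 0

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    first = libc.ptrace(PTRACE_TRACEME, 0, 0, 0)   # 0: now traced by parent
    second = libc.ptrace(PTRACE_TRACEME, 0, 0, 0)  # -1: already traced
    os.write(w, bytes([first & 0xFF, second & 0xFF]))
    os._exit(0)

results = os.read(r, 2)
os.waitpid(pid, 0)
first, second = results[0], results[1]  # second is 255 (-1 truncated to a byte)
```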
To enable this feature, set the escape_antidebug property to True when creating the debugger object. The debugger will take care of the rest.
Example
C source code

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>

int main()
{
    if (ptrace(PTRACE_TRACEME, 0, NULL, 0) == -1) // (1)!
    {
        puts("No cheating! Debugger detected."); // (2)!
        exit(1);
    }

    puts("Congrats! Here's your flag:"); // (3)!
    puts("flag{y0u_sn3aky_guy_y0u_tr1ck3d_m3}");

    return 0;
}
```

The binary uses PTRACE_TRACEME to detect whether it is being debugged.

libdebug script

```python
from libdebug import debugger

d = debugger("evasive_binary",
             escape_antidebug=True)

pipe = d.run()

d.cont()
out = pipe.recvline(numlines=2)
d.wait()

print(out.decode())
```

Execution of the script will print the flag, even though the binary is being debugged.
## Memory Maps

Virtual memory is a fundamental concept in operating systems. It allows the operating system to provide each process with its own address space, which is isolated from other processes. This isolation is crucial for security and stability reasons. The memory of a process is divided into regions called memory maps. Each memory map has a starting address, an ending address, and a set of permissions (read, write, execute).
In libdebug, you can access the memory maps of a process using the maps attribute of the Debugger object.
The maps attribute returns a list of MemoryMap objects, which contain the following attributes:
| Attribute | Data Type | Description |
|---|---|---|
| start | int | The start address of the memory map. There is also an equivalent alias called base. |
| end | int | The end address of the memory map. |
| permissions | str | The permissions of the memory map. |
| size | int | The size of the memory map. |
| offset | int | The offset of the memory map relative to the backing file. |
| backing_file | str | The backing file of the memory map, or the symbolic name of the memory map. |

### Filtering Memory Maps

You can filter memory maps based on their attributes using the filter() method of the maps attribute. The filter() method accepts a value that can be either a memory address (int) or a symbolic name (str) and returns a list of MemoryMap objects that match the criteria.
Function Signature
```python
d.maps.filter(value: int | str) -> MemoryMapList[MemoryMap]
```

The behavior of the memory map filtering depends on the type of the value parameter:
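On Linux, these maps ultimately come from the kernel's `/proc/<pid>/maps`. As a rough, self-contained model of the filtering idea (an int selects the map whose range contains that address, a str matches against the backing file — this is an assumption for illustration, not libdebug's implementation), consider:

```python
from dataclasses import dataclass

@dataclass
class MemoryMap:
    start: int
    end: int
    permissions: str
    backing_file: str

def parse_maps(pid: str = "self") -> list:
    """Parse /proc/<pid>/maps into simple MemoryMap records."""
    maps = []
    with open(f"/proc/{pid}/maps") as f:
        for line in f:
            fields = line.split(maxsplit=5)
            lo, hi = (int(x, 16) for x in fields[0].split("-"))
            backing = fields[5].strip() if len(fields) == 6 else ""
            maps.append(MemoryMap(lo, hi, fields[1], backing))
    return maps

def filter_maps(maps, value):
    if isinstance(value, int):  # address: maps whose range contains it
        return [m for m in maps if m.start <= value < m.end]
    return [m for m in maps if value in m.backing_file]  # name substring

maps = parse_maps()
stack_maps = filter_maps(maps, "[stack]")  # the main thread's stack mapping
```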
## Pretty Printing

libdebug offers utilities to visualize the process's state in a human-readable format and with color highlighting. This can be especially useful when debugging complex binaries or when you need to quickly understand the behavior of a program.
### Registers Pretty Printing

There are two functions available to print the registers of a thread: pprint_registers() and pprint_registers_all(). The former prints the current values of the most commonly inspected registers, while the latter prints all available registers.
Aliases
If you don't like long function names, you can use aliases for the two register pretty printing functions. The shorter aliases are pprint_regs() and pprint_regs_all().
### Syscalls Pretty Printing

When debugging a binary, it is often much faster to guess the intended functionality by looking at the syscalls being invoked. libdebug offers a feature that intercepts every syscall and prints its arguments and return value. This can be enabled by setting the property pprint_syscalls = True on the Debugger object or ThreadContext object and resuming the process.
Syscall Trace PPrint Syntax
```python
d.pprint_syscalls = True
d.cont()
```

The output will be printed to the console in color according to the following coding:
| Format | Description |
|---|---|
| blue | Syscall name |
| red | Syscall was intercepted and handled by a callback (either a basic handler or a hijack) |
| yellow | Value given to a syscall argument, in hexadecimal |
| strikethrough | Syscall was hijacked or a value was changed; the new syscall or value follows the stricken text |

Handled syscalls with a callback associated with them will be listed as such. Additionally, syscalls hijacked through the libdebug API will be highlighted as struck through, allowing you to monitor both the original behavior and your own changes to the flow. The ID of the thread that made the syscall is printed at the beginning of the line in bold white.
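The strikethrough and color effects are standard terminal behavior (ANSI SGR escape codes). As a hypothetical sketch of how such a hijack line could be rendered (this is not libdebug's actual formatter):

```python
# ANSI SGR codes: 9 = strikethrough, 33 = yellow, 0 = reset
STRIKE, YELLOW, RESET = "\x1b[9m", "\x1b[33m", "\x1b[0m"

def render_hijack(old_syscall: str, new_syscall: str, arg: int) -> str:
    """Old syscall struck through, the new one after it, argument in yellow hex."""
    return (f"{STRIKE}{old_syscall}{RESET} {new_syscall}"
            f"({YELLOW}{arg:#x}{RESET})")

line = render_hijack("getuid", "getgid", 0x1000)
```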
### Memory Maps Pretty Printing

To pretty print the memory maps of a process, you can simply use the pprint_maps() function. This will print the memory maps of the process in a human-readable format, with color highlighting to distinguish between different memory regions.
To pretty print the stack trace (backtrace) of a process, you can use the pprint_backtrace() function. This will print the stack trace of the process in a human-readable format.
The pprint_memory() function will print the contents of the process memory within a certain range of addresses.
Function signature
```python
d.pprint_memory(
    start: int,
    end: int,
    file: str = "hybrid",
    override_word_size: int = None,
    integer_mode: bool = False,
) -> None
```

| Parameter | Data Type | Description |
|---|---|---|
| start | int | The start address of the memory range to print. |
| end | int | The end address of the memory range to print. |
| file | str (optional) | The file to use for the memory content. Defaults to hybrid mode (see memory access). |
| override_word_size | int (optional) | The word size used to align memory contents. By default, the ISA register size is used. |
| integer_mode | bool (optional) | Whether to print the memory content in integer mode. Defaults to False. |

Start after End
For your convenience, if the start address is greater than the end address, the function will swap the values.
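A minimal model of that convenience (a hypothetical sketch, not libdebug's printer): normalize the bounds first, then dump the range in words of the requested size:

```python
def dump_words(data: bytes, start: int, end: int, word_size: int = 8) -> list:
    """Hex-dump data[start:end]; swaps the bounds if given in reverse order."""
    if start > end:
        start, end = end, start  # same convenience as pprint_memory
    lines = []
    for off in range(start, end, word_size):
        word = data[off:off + word_size]
        lines.append(f"{off:#06x}: {word.hex(' ')}")
    return lines

dump = dump_words(bytes(range(32)), 24, 8)  # reversed bounds are accepted
```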
Here is a visual example of the memory content pretty printing (with and without integer mode):
## Quality of Life Features

For your convenience, libdebug offers a few functions that will speed up your debugging process.
### Pretty Printing

Visualizing the state of the process you are debugging can be a daunting task. libdebug offers utilities to print registers, memory maps, syscalls, and more in a human-readable format and with color highlighting.

### Symbol Resolution

libdebug can resolve symbols in the binary and shared libraries. With big binaries, this can be a computationally intensive task, especially if your script needs to be run multiple times. You can set symbol resolution levels and specify where to look for symbols according to your needs.

### Memory Maps

libdebug offers utilities to retrieve the memory maps of a process. This can be useful to understand the memory layout of the process you are debugging.

### Stack Frame Utils

libdebug offers utilities to resolve the return addresses of a process.

### Evasion of Anti-Debugging

libdebug offers a few functions that will help you evade simple anti-debugging techniques. These functions can be used to bypass checks for the presence of a debugger.
## Stack Frame Utils

Function calls in a binary executable are made according to the system's calling convention. One constant across these conventions is the use of a stack frame to store the return address at which execution resumes when the function ends.
Different architectures have slightly different ways to retrieve the return address (for example, in AArch64, the latest return address is stored in x30, the Link Register). To abstract these differences, libdebug provides common utilities to resolve the stack trace (backtrace) of the running process (or thread).
libdebug's backtrace is structured like a LIFO stack, with the top-most value being the current instruction pointer. Subsequent values are the return addresses of the functions that were called to reach the current instruction pointer.
Backtrace usage example
```python
from libdebug import debugger

d = debugger("test_backtrace")
d.run()

# A few calls later...
[...]

current_ip = d.backtrace()[0]
return_address = d.backtrace()[1]
other_return_addresses = d.backtrace()[2:]
```

Additionally, the field saved_ip of the Debugger or ThreadContext objects will contain the return address of the current function.
As described in the memory access section, many functions in libdebug accept symbols as an alternative to actual addresses or offsets.
You can list all resolved symbols in the binary and shared libraries using the symbols attribute of the Debugger object. This attribute returns a SymbolList object.
This object grants the user hybrid access to the symbols: as a dict or as a list. For example, the following lines of code all have valid syntax:
```python
d.symbols['printf'] #(1)!
d.symbols[0] #(2)!
d.symbols['printf'][0] #(3)!
```

Please note that the dict-like access returns exact matches with the symbol name. If you want to filter for symbols that contain a specific string, read the dedicated section.
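A sketch of how such hybrid dict/list access can work (an illustration of the idea, not libdebug's implementation — the Symbol fields here are simplified):

```python
from dataclasses import dataclass

@dataclass
class Symbol:
    name: str
    start: int

class SymbolList(list):
    """A list of symbols that also supports dict-like lookup by exact name."""
    def __getitem__(self, key):
        if isinstance(key, str):  # dict-like: exact name matches only
            matches = SymbolList(s for s in self if s.name == key)
            if not matches:
                raise KeyError(key)
            return matches
        return super().__getitem__(key)  # list-like: positional index

syms = SymbolList([Symbol("printf", 0x1000), Symbol("puts", 0x2000),
                   Symbol("printf", 0x3000)])
first_printf = syms["printf"][0]  # first symbol named exactly "printf"
by_index = syms[0]                # first symbol overall
```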
C++ Demangling
Reverse-engineering of C++ binaries can be a struggle. To help out, libdebug automatically demangles C++ symbols.
### Symbol Resolution Levels

With large binaries and libraries, parsing symbols can become an expensive operation. Because of this, libdebug offers the possibility of choosing among 5 levels of symbol resolution. To set the symbol resolution level, you can use the sym_lvl property of the libcontext module. The default value is level 5.
At the highest level, debug symbols are also fetched through debuginfod, and the downloaded file is cached in the default folder for debuginfod. Upon searching for symbols, libdebug will proceed from the lowest level up to the set maximum.
Example of setting the symbol resolution level
```python
from libdebug import libcontext

libcontext.sym_lvl = 3
d.breakpoint('main')
```

If you want to change the symbol resolution level temporarily, you can use a with statement along with the tmp method of the libcontext module.
Example of temporary resolution level change
```python
from libdebug import libcontext

with libcontext.tmp(sym_lvl = 5):
    d.breakpoint('main')
```

### Symbol Filtering

The symbols attribute of the Debugger object allows you to filter symbols in the binary and shared libraries.
Function Signature
```python
d.symbols.filter(value: int | str) -> SymbolList[Symbol]
```

Given a symbol name or address, this function returns a SymbolList. The list will contain all symbols that match the given value.
Symbol objects contain the following attributes:
| Attribute | Type | Description |
|---|---|---|
| start | int | The start offset of the symbol. |
| end | int | The end offset of the symbol. |
| name | str | The name of the symbol. |
| backing_file | str | The file where the symbol is defined (e.g., binary, libc, ld). |

Slow Symbol Resolution
Please keep in mind that symbol resolution can be an expensive operation on large binaries and shared libraries. If you are experiencing performance issues, you can set the symbol resolution level to a lower value.
## Save States

Save states are a powerful feature in libdebug to save the current state of the process.
There is no single way to define a save state. The state of a process in an operating system is not just its memory and register contents. The process interacts with shared external resources, such as files, sockets, and other processes. These resources cannot be restored in a reliable way. Still, there are many interesting use cases for saving and restoring all that can be saved.
So...what is a save state in libdebug? Although we plan on supporting multiple types of save states for different use cases in the near future, libdebug currently supports only snapshots.
## Snapshot Diffs

Snapshot diffs are objects that represent what changed between two snapshots. They are created through the diff() method of a snapshot.
The level of a diff is resolved as the lowest level of the two snapshots being compared. For example, if a diff is created between a full snapshot and a base snapshot, their diff will be of base level. For more information on the different levels of snapshots, see the Snapshots page.
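Resolving the diff level amounts to taking the minimum under the ordering base < writable < full. A small sketch of this rule (a hypothetical helper, not part of the libdebug API):

```python
LEVEL_ORDER = {"base": 0, "writable": 1, "full": 2}

def resolve_diff_level(level1: str, level2: str) -> str:
    """The diff can only carry what both snapshots have: the lower level wins."""
    return min(level1, level2, key=LEVEL_ORDER.__getitem__)

level = resolve_diff_level("full", "base")  # a full vs. base diff is base
```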
ASLR Mess
If Address Space Layout Randomization (ASLR) is enabled, the memory addresses in the diffs may appear inconsistent or messy. libdebug will remind you of this when you diff snapshots with ASLR enabled. See here for more information.
### API

Just like snapshots themselves, diffs try to mimic the API of the Debugger and ThreadContext objects. The main difference is that returned objects represent a change in state, rather than the state itself.
### Register Diffs

The regs attribute of a diff object (aliased as registers) is a RegisterDiffAccessor object that allows you to access the register values of the snapshot. The accessor will return a RegisterDiff object that represents the difference between the two snapshots.
You can access each diff with any of the architecture-specific register names. For a full list, refer to the Register Access page.
Example usage
```python
print(ts_diff.regs.rip)
```

Output:

```
RegisterDiff(old_value=0x56148d577130, new_value=0x56148d577148, has_changed=True)
```

Each register diff is an object with the following attributes:
| Attribute | Data Type | Description |
|---|---|---|
| old_value | int \| float | The value of the register in the first snapshot. |
| new_value | int \| float | The value of the register in the second snapshot. |
| has_changed | bool | Whether the register value has changed. |

### Memory Map Diffs

The maps attribute of a diff object is a MemoryMapDiffList object that contains the memory maps of the process in each of the snapshots.
Here is what a MemoryMapDiff object looks like:
Example usage
```python
print(ts_diff.maps[-2])
```

Output (indented for readability):

```
MemoryMapDiff(
    old_map_state=MemoryMap(
        start=0x7fff145ea000,
        end=0x7fff1460c000,
        permissions=rw-p,
        size=0x22000,
        offset=0x0,
        backing_file=[stack]
    ) [snapshot with content],
    new_map_state=MemoryMap(
        start=0x7fff145ea000,
        end=0x7fff1460c000,
        permissions=rw-p,
        size=0x22000,
        offset=0x0,
        backing_file=[stack]
    ) [snapshot with content],
    has_changed=True,
    _cached_diffs=None
)
```

The map diff contains the following attributes:
| Attribute | Data Type | Description |
|---|---|---|
| old_map_state | MemoryMap | The memory map in the first snapshot. |
| new_map_state | MemoryMap | The memory map in the second snapshot. |
| has_changed | bool | Whether the memory map has changed. |

Memory Map Diff Levels
If the diff is of base level, the has_changed attribute will only consider superficial changes in the memory map (e.g., permissions, end address). Under the writable and full levels, the diff will also consider the contents of the memory map.
If the diff is of full or writable level, the MemoryMapDiff object exposes a useful utility to track blocks of differing memory contents in a certain memory map: the content_diff attribute.
Example usage
```python
stack_page_diff = ts_diff.maps.filter("stack")[0]

for current_slice in stack_page_diff.content_diff:
    print(f"Memory diff slice: {hex(current_slice.start)}:{hex(current_slice.stop)}")
```

Output:

```
Memory diff slice: 0x20260:0x20266
Memory diff slice: 0x20268:0x2026e
```

The attribute will return a list of slice objects that represent the blocks of differing memory contents in the memory map. Each slice contains the start and end addresses of the differing memory block, relative to the memory map.
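Computing such slices boils down to grouping consecutive differing byte offsets. A self-contained sketch of the idea (not libdebug's implementation):

```python
def content_diff(old: bytes, new: bytes) -> list:
    """Return slices covering each maximal run of differing bytes."""
    slices, run_start = [], None
    for i, (a, b) in enumerate(zip(old, new)):
        if a != b and run_start is None:
            run_start = i            # a differing run begins here
        elif a == b and run_start is not None:
            slices.append(slice(run_start, i))
            run_start = None         # the run ended just before i
    if run_start is not None:
        slices.append(slice(run_start, len(old)))  # run reaches the end
    return slices

diffs = content_diff(b"aaaa bbbb", b"aaXa bbYY")  # differs at 2 and at 7..8
```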
### Attributes

| Attribute | Data Type | Level | Description | Aliases |
|---|---|---|---|---|
| **Common** | | | | |
| snapshot1 | Snapshot | All | The earliest snapshot being compared (recency is determined by id ordering). | |
| snapshot2 | Snapshot | All | The latest snapshot being compared (recency is determined by id ordering). | |
| level | str | All | The diff level. | |
| maps | MemoryMapDiffList | All | The memory maps of the process. Each map will also have the contents of the memory map under the appropriate snapshot level. | |
| **Thread Snapshot Diff** | | | | |
| regs | RegisterDiffAccessor | All | The register values of the thread. | registers |
| **Process Snapshot Diff** | | | | |
| born_threads | list[LightweightThreadSnapshot] | All | Snapshots of threads spawned between the two snapshots. | |
| dead_threads | list[LightweightThreadSnapshot] | All | Snapshots of threads that terminated between the two snapshots. | |
| threads | list[LightweightThreadSnapshotDiff] | All | Diffs of the threads present in both snapshots. | |
| regs | RegisterDiffAccessor | All | The register values of the main thread of the process. | registers |

### Pretty Printing

Pretty Printing is a feature of some libdebug objects that allows you to print the contents of a snapshot in a colorful and eye-catching format. This is useful when you want to inspect the state of the process at a glance.
Diff objects have the following pretty printing functions:
| Function | Description |
|---|---|
| pprint_registers() | Prints changed general-purpose register values. |
| pprint_registers_all() | Prints all changed register values (including special and vector registers). |
| pprint_maps() | Prints memory maps which have changed between snapshots (highlights whether only the content or the end address has changed). |
| pprint_memory() | Prints the memory content diffs of the snapshot. See the dedicated section for more information. |
| pprint_backtrace() | Prints the diff of the backtrace between the two snapshots. |

Here are some visual examples of the pretty printing functions:
### Register Diff Pretty Printing

The pprint_registers() function of a diff object will print the changed general-purpose register values.
Here is a visual example of the register diff pretty printing:
### Memory Map Diff Pretty Printing

The pprint_maps() function of a diff object will print the memory maps which have changed between snapshots. It also highlights whether only the content or the end address of a map has changed.
Here is a visual example of the memory map diff pretty printing:
### Memory Content Diff Pretty Printing

The pprint_memory() function of a diff object will print the content diffs within a certain range of memory addresses.
Function signature
```python
ts_diff.pprint_memory(
    start: int,
    end: int,
    file: str = "hybrid",
    override_word_size: int = None,
    integer_mode: bool = False,
) -> None
```

| Parameter | Data Type | Description |
|---|---|---|
| start | int | The start address of the memory range to print. |
| end | int | The end address of the memory range to print. |
| file | str (optional) | The file to use for the memory content. Defaults to hybrid mode (see memory access). |
| override_word_size | int (optional) | The word size used to align memory contents. By default, the ISA register size is used. |
| integer_mode | bool (optional) | Whether to print the memory content in integer mode. Defaults to False. |

Start after End
For your convenience, if the start address is greater than the end address, the function will swap the values.
Here is a visual example of the memory content diff pretty printing (with and without integer mode):
### Stack Trace Diff Pretty Printing

To pretty print the stack trace diff (backtrace) of a process, you can use the pprint_backtrace() function. Return addresses are printed from the most to the least recent, in three columns: the center column is the common part of the backtrace, while the left and right columns are the differing parts. The following image shows an example of a backtrace diff:
## Snapshots

Snapshots are a static type of save state in libdebug. They allow you to save the current state of the process in terms of registers, memory, and other process properties. Snapshots can be saved to disk as a file and loaded for future use. Finally, snapshots can be diffed to compare the state of the process at two different moments or across executions.
Snapshots are static
Snapshots are static in the sense that they capture the state of the process at a single moment in time. They can be loaded and inspected at any time and across different architectures. They do not, however, allow you to restore the saved state to the process.
There are three available levels of snapshots in libdebug, which differ in the amount of information they store:
| Level | Registers | Memory Pages | Memory Contents |
|---|---|---|---|
| base | ✓ | ✓ | — |
| writable | ✓ | ✓ | writable pages only |
| full | ✓ | ✓ | ✓ |

Since memory content snapshots can be large, the default level is base.
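These level rules can be modeled as a selection over the maps (a hypothetical sketch with simplified map records, not libdebug's implementation):

```python
def maps_with_content(level: str, maps: list) -> list:
    """Which maps get their contents stored, per snapshot level."""
    if level == "base":
        return []                                # registers and map metadata only
    if level == "writable":
        return [m for m in maps if "w" in m["permissions"]]
    if level == "full":
        return list(maps)                        # every mapped page
    raise ValueError(f"unknown snapshot level: {level}")

maps = [{"permissions": "r-xp"}, {"permissions": "rw-p"}]
saved = maps_with_content("writable", maps)      # only the rw-p map
```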
You can create snapshots of single threads or the entire process.
### API

Register Access
You can access a snapshot's registers using the regs attribute, just like you would when debugging the process.
API Reference
Memory Access
When the snapshot level is appropriate, you can access the memory of the process using the memory attribute.
API Reference
Memory Maps
Memory maps are always available. When the snapshot level is appropriate, you can access the contents as a bytes-like object.
API Reference
Stack Trace
When the snapshot level is appropriate, you can access the backtrace of the process or thread.
API Reference
The function used to create a snapshot is create_snapshot(). It behaves differently depending on the object it is called from.
The following is the signature of the function:
Function Signature
```python
d.create_snapshot(level: str = "base", name: str = None) -> ProcessSnapshot
```

or

```python
t.create_snapshot(level: str = "base", name: str = None) -> ThreadSnapshot
```

Where d is a Debugger object and t is a ThreadContext object. The following is an example usage of the function in both cases:
```python
d = debugger("program")

my_thread = d.threads[1]

# Thread Snapshot
ts = my_thread.create_snapshot(level="full", name="cool snapshot") #(1)!

# Process Snapshot
ps = d.create_snapshot(level="writable", name="very cool snapshot") #(2)!
```

Naming Snapshots
When creating a snapshot, you can optionally specify a name for it. The name will be useful when comparing snapshots in diffs or when saving them to disk.
### Saving and Loading Snapshots

You can save a snapshot to disk using the save() method of the Snapshot object. The method will create a serializable version of the snapshot and export a JSON file to the specified path.
Example usage
```python
ts = d.threads[1].create_snapshot(level="full")
ts.save("path/to/save/snapshot.json")
```

You can load a snapshot from disk using the load_snapshot() method of the Debugger object. The method will read the JSON file from the specified path and create a Snapshot object from it.
Example usage
```python
ts = d.load_snapshot("path/to/load/snapshot.json")
```

The snapshot type will be inferred from the JSON file, so you can easily load both thread and process snapshots with the same method.
### Resolving Diffs

Thanks to their static nature, snapshots can be easily compared to find differences in saved properties.
You can diff a snapshot against another using the diff() method. The method will return a Diff object that represents the differences between the two snapshots. The diff will be of the lowest level of the two snapshots being compared.
Example usage
```python
ts1 = d.threads[1].create_snapshot(level="full")

[...] # (1)!

ts2 = d.threads[1].create_snapshot(level="full")

ts_diff = ts1.diff(ts2) # (2)!
```

Diffs have a rich and detailed API that allows you to inspect the differences in registers, memory, and other properties. Read more in the dedicated section.
### Pretty Printing

Pretty Printing is a feature of some libdebug objects that allows you to print the contents of a snapshot in a colorful and eye-catching format. This is useful when you want to inspect the state of the process at a glance.
Pretty printing utilities of snapshots are "mirrors" of the pretty printing functions available for the Debugger and ThreadContext. Here is a list of the available pretty printing functions and their equivalents for the running process:
| Function | Description | Reference |
|---|---|---|
| pprint_registers() | Prints the general-purpose registers of the snapshot. | API Reference |
| pprint_registers_all() | Prints all registers of the snapshot. | API Reference |
| pprint_maps() | Prints the memory maps of the snapshot. | API Reference |
| pprint_backtrace() | Prints the backtrace of the snapshot. | API Reference |

### Attributes

| Attribute | Data Type | Level | Description | Aliases |
|---|---|---|---|---|
| **Common** | | | | |
| name | str (optional) | All | The name of the snapshot. | |
| arch | str | All | The ISA under which the snapshotted process was running. | |
| snapshot_id | int | All | Progressive id counted from 0. Process and thread snapshots have separate counters. | |
| level | str | All | The snapshot level. | |
| maps | MemoryMapSnapshotList | All | The memory maps of the process. Each map will also have the contents of the memory map under the appropriate snapshot level. | |
| memory | SnapshotMemoryView | writable / full | Interface to the memory of the process. | mem |
| aslr_enabled | bool | All | Whether ASLR was enabled at the time of the snapshot. | |
| **Thread Snapshot** | | | | |
| thread_id | int | All | The ID of the thread the snapshot was taken from. | tid |
| regs | SnapshotRegisters | All | The register values of the thread. | registers |
| **Process Snapshot** | | | | |
| process_id | int | All | The ID of the process the snapshot was taken from. | pid |
| threads | list[LightweightThreadSnapshot] | All | Snapshots of all threads of the process. | |
| regs | SnapshotRegisters | All | The register values of the main thread of the process. | registers |

## Breakpoints

Breakpoints are the killer feature of any debugger and the fundamental stopping event. They allow you to stop the execution of your code at a specific point and inspect the state of your program to find bugs or understand its design.
Multithreading and Breakpoints
libdebug breakpoints are shared across all threads. This means that any thread can hit the breakpoint and cause the process to stop. You can use the hit_on() method of a breakpoint object to determine which thread hit the breakpoint (provided that the stop was indeed caused by the breakpoint).
A breakpoint can be inserted at one of two levels: software or hardware.
### Software Breakpoints

Software breakpoints in the Linux kernel are implemented by patching the code in memory at runtime. The instruction at the chosen address is replaced with an interrupt instruction that is conventionally used for debugging. For example, in the i386 and AMD64 instruction sets, int3 (0xCC) is reserved for this purpose.
When the int3 instruction is executed, the CPU raises a SIGTRAP signal, which is caught by the debugger. The debugger then stops the process and restores the original instruction to its rightful place.
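The patch/restore mechanism can be sketched over a plain byte buffer (an illustration only — a real debugger patches the tracee's memory through ptrace, not a local buffer):

```python
INT3 = 0xCC  # x86 one-byte breakpoint opcode

class SoftwareBreakpoint:
    def __init__(self, code: bytearray, address: int):
        self.code, self.address = code, address
        self.saved_byte = code[address]   # remember the original opcode byte
        code[address] = INT3              # patch: the CPU will trap here

    def restore(self):
        self.code[self.address] = self.saved_byte  # undo the patch

code = bytearray(b"\x55\x48\x89\xe5\xc3")  # push rbp; mov rbp,rsp; ret
bp = SoftwareBreakpoint(code, 1)
patched = code[1]       # 0xCC while the breakpoint is armed
bp.restore()
restored = code[1]      # the original 0x48 after restoring
```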
Pros and Cons of Software Breakpoints
Software breakpoints are unlimited in number, but they can break when the program uses self-modifying code, since the patched bytes may be overwritten by the program itself. They are also slower than their hardware counterparts on most modern CPUs.
### Hardware Breakpoints

Hardware breakpoints are a more reliable way to set breakpoints. They are made possible by special registers in the CPU that can be used to monitor memory accesses. Unlike software breakpoints, their hardware counterparts allow the debugger to monitor read and write accesses in addition to code execution. This kind of hardware breakpoint is also called a watchpoint. More information on watchpoints can be found in the dedicated documentation.
Pros and Cons of Hardware Breakpoints
Hardware breakpoints are not affected by self-modifying code. They are also usually faster and more flexible. However, hardware breakpoints are limited in number and are hardware-dependent, so their support may vary across different systems.
Hardware Breakpoint Alignment in AArch64
Hardware breakpoints have to be aligned to 4 bytes (which is the size of an ARM instruction).
### libdebug API for Breakpoints

The breakpoint() function in the Debugger object sets a breakpoint at a specific address.
Function Signature
```python
d.breakpoint(address, hardware=False, condition='x', length=1, callback=None, file='hybrid')
```
| Argument | Type | Description |
|---|---|---|
| address | int \| str | The address or symbol where the breakpoint will be set. |
| hardware | bool | Set to True to set a hardware breakpoint. |
| condition | str | The type of access, in case of a hardware breakpoint. |
| length | int | The size of the word being watched, in case of a hardware breakpoint. |
| callback | Callable \| bool (see callback signature here) | Used to create asynchronous breakpoints (read more on the debugging flow of stopping events). |
| file | str | The backing file for relative addressing. Refer to the memory access section for more information on addressing modes. |

Returns:
| Return | Type | Description |
|---|---|---|
| Breakpoint | Breakpoint | The breakpoint object created. |

Limited Hardware Breakpoints
Hardware breakpoints are limited in number. If you exceed the number of hardware breakpoints available on your system, a RuntimeError will be raised.
Usage Example
```python
from libdebug import debugger

d = debugger("./test_program")

d.run()

bp = d.breakpoint(0x10ab, file="binary") # (1)!
bp1 = d.breakpoint("main", file="binary") # (3)!
bp2 = d.breakpoint("printf", file="libc") # (4)!

d.cont()

print(f"RAX: {d.regs.rax:#x} at the breakpoint") # (2)!
if bp.hit_on(d):
    print("Breakpoint at 0x10ab was hit")
elif bp1.hit_on(d):
    print("Breakpoint at main was hit")
elif bp2.hit_on(d):
    print("Breakpoint at printf was hit")
```

If you wish to create an asynchronous breakpoint, you will have to provide a callback function. If you want to leave the callback empty, you can set callback to True.
Callback Signature
```python
def callback(t: ThreadContext, bp: Breakpoint):
```
| Argument | Type | Description |
|---|---|---|
| t | ThreadContext | The thread that hit the breakpoint. |
| bp | Breakpoint | The breakpoint object that triggered the callback. |

Example usage of asynchronous breakpoints
def on_breakpoint_hit(t, bp):\n print(f\"RAX: {t.regs.rax:#x}\")\n\n if bp.hit_count == 100:\n print(\"Hit count reached 100\")\n bp.disable()\n\nd.breakpoint(0x11f0, callback=on_breakpoint_hit, file=\"binary\")\n","boost":4},{"location":"stopping_events/breakpoints/#the-breakpoints-dict","title":"The Breakpoints Dict","text":"The breakpoints attribute of the Debugger object is a dictionary that contains all the breakpoints set by the user. The keys are the addresses of the breakpoints, and the values are the corresponding Breakpoint objects. This is useful to retrieve breakpoints in \\(O(1)\\) time complexity.
Usage Example - Massive Breakpoint Insertion
from libdebug import debugger\n\ndef hook_callback(t, bp):\n [...]\n\nd = debugger(\"example_binary\")\nd.run()\n\n# Massive breakpoint insertion\nwith open(\"example_binary\", \"rb\") as f:\n binary_data = f.read()\n\ncursor = 0\nwhile cursor < len(binary_data):\n if binary_data[cursor:cursor+2] == b\"\\xD9\\xC9\":\n d.breakpoint(cursor, callback=hook_callback, file=\"binary\") # (1)!\n cursor += 1\n\nd.cont()\n\n[...]\n\nip = d.regs.rip\n\nif d.memory[0x10, 4, \"binary\"] == b\"\\x00\\xff\\x00\\xab\":\n d.breakpoints[ip].disable() # (2)!\n[...]\n FXCH instruction in the binary (at least ones found through static analysis)Before diving into each libdebug stopping event, it's crucial to understand the debugging flow that these events introduce, based on the mode selected by the user.
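The opcode scan in the example above is plain Python and can be tested standalone. Here is a self-contained version of that loop (the byte string is fabricated for illustration); each offset it returns would then be passed to d.breakpoint(..., file="binary"):

```python
# Standalone version of the scan from the example above: find every
# occurrence of the FXCH opcode (0xD9 0xC9) in a chunk of binary data.
# Advancing the cursor one byte at a time also catches overlapping matches.
def find_opcode_offsets(binary_data: bytes, opcode: bytes = b"\xD9\xC9") -> list[int]:
    offsets = []
    cursor = 0
    while cursor < len(binary_data):
        if binary_data[cursor:cursor + len(opcode)] == opcode:
            offsets.append(cursor)
        cursor += 1
    return offsets

# Tiny fabricated byte string, for illustration only
data = b"\x90\xD9\xC9\x90\x90\xD9\xC9"
print(find_opcode_offsets(data))  # [1, 5]
```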
The flow of all stopping events is similar and adheres to a mostly uniform API structure. Upon placing a stopping event, the user is allowed to specify a callback function for the stopping event. If a callback is passed, the event will trigger asynchronously. Otherwise, if the callback is not passed, the event will be synchronous. The following flowchart shows the difference between the two flows.
Flowchart of different handling modes for stopping eventsWhen a synchronous event is hit, the process will stop, awaiting further commands. When an asynchronous event is hit, libdebug temporarily stops the process and invokes the user callback. Process execution is automatically resumed right after.
Tip: Use cases of asynchronous stopping events
The asynchronous mode for stopping events is particularly useful for events being repeated as a result of a loop in the executed code.
When attempting side-channel reverse engineering, this mode can save a lot of your time.
","boost":4},{"location":"stopping_events/debugging_flow/#types-of-stopping-events","title":"Types of Stopping Events","text":"libdebug supports the following types of stopping events:
Event Type Description Notes Breakpoint Stops the process when a certain address is executed Can be a software or a hardware breakpoint Watchpoint Stops the process when a memory area is read or written Alias for a hardware breakpoint Syscall Stops the process when a syscall is made Two events are supported: syscall start and end Signal Stops the process when a signal is receivedMultiple callbacks or hijacks
Please note that there can be at most one user-defined callback or hijack for each instance of a stopping event (the same syscall, signal or breakpoint address). If a new stopping event is defined for the same thing, the new stopping event will replace the old one, and a warning will be printed.
Internally, hijacks are considered callbacks, so you cannot have a callback and hijack registered for the same event.
","boost":4},{"location":"stopping_events/debugging_flow/#common-apis-of-stopping-events","title":"Common APIs of Stopping Events","text":"All libdebug stopping events share some common attributes that can be employed in debugging scripts.
","boost":4},{"location":"stopping_events/debugging_flow/#enabledisable","title":"Enable/Disable","text":"All stopping events can be enabled or disabled at any time. You can read the enabled attribute to check the current state of the event. To enable or disable the event, you can call the enable() or disable() methods respectively.
The callback function of the event can be set, changed or removed (set to None) at any time. Be mindful that setting or removing the callback also changes the event mode (asynchronous or synchronous). Additionally, you can set the callback to True to register an empty callback.
Stopping events have attributes that can help you keep track of hits. For example, the hit_count attribute stores the number of times the event has been triggered.
The hit_on() function is used to check if the stopping event was the cause of the process stopping. It is particularly useful when debugging multithreaded applications, as it takes a ThreadContext as a parameter. Refer to multithreading for more information.
Hijacking is a powerful feature that allows you to change the flow of the process when a stopping event is hit. It is available for both syscalls and signals, but currently not for other stopping events. When registering a hijack for a compatible stopping event, that execution flow will be replaced with another.
Example hijacking of a SIGALRM to a SIGUSR1For example, in the case of a signal, you can specify that a received SIGALRM signal should be replaced with a SIGUSR1 signal. This can be useful when you want to prevent a process from executing a certain code path. In fact, you can even use the hijack feature to "NOP" the syscall or signal altogether, preventing it from being executed / forwarded to the process. More information on how to use this feature in each stopping event can be found in their respective documentation.
Mixing asynchronous callbacks and hijacking can become messy. Because of this, libdebug provides users with the choice of whether to execute the callback for an event that was triggered by a callback or hijack.
This behavior is enabled by the parameter recursive, available when instantiating a syscall handler, a signal catcher, or their respective hijackers. By default, recursion is disabled.
Recursion Loop Detection
When carelessly doing recursive callbacks and hijacking, it could happen that loops are created. libdebug automatically performs checks to avoid these situations and raises an exception if an infinite loop is detected.
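One way to detect such loops is to follow the chain of hijacks and fail on the first revisited node. The sketch below is an assumption for illustration, not libdebug's actual algorithm:

```python
# Pure-Python sketch (an assumption, not libdebug's internal implementation)
# of hijack loop detection: walk the hijack mapping from a starting event
# and raise if any event is ever revisited.
def check_for_loop(hijacks: dict[str, str], start: str) -> None:
    seen = {start}
    current = start
    while current in hijacks:
        current = hijacks[current]
        if current in seen:
            raise RuntimeError(f"Infinite hijack loop detected at {current!r}")
        seen.add(current)

check_for_loop({"read": "write"}, "read")  # fine: read -> write, chain ends

try:
    # read -> write -> read -> ... would recurse forever
    check_for_loop({"read": "write", "write": "read"}, "read")
except RuntimeError as e:
    print(e)
```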
For example, the following code raises a RuntimeError:
handler = d.hijack_syscall(\"read\", \"write\", recursive=True)\nhandler = d.hijack_syscall(\"write\", \"read\", recursive=True)\n","boost":4},{"location":"stopping_events/signals/","title":"Signals","text":"Signals are a feature of POSIX systems (e.g., the Linux kernel) that provide a mechanism for asynchronous communication between processes and the operating system. When certain events occur (e.g., hardware interrupts, illegal operations, or termination requests), the kernel can send a signal to a process to notify it of the event. Each signal is identified by a unique integer and corresponds to a specific type of event. For example, SIGINT (usually triggered by pressing Ctrl+C) is used to interrupt a process, while SIGKILL forcefully terminates a process without cleanup.
Processes can handle these signals in different ways: they may catch and define custom behavior for certain signals, ignore them, or allow the default action to occur.
Restrictions on Signal Catching
libdebug does not support catching SIGSTOP and SIGKILL, since kernel-level restrictions prevent these signals from being caught or ignored. While SIGTRAP can be caught, it is used internally by libdebug to implement stopping events and should be used with caution.
libdebug allows you to intercept signals sent to the tracee. Specifically, you can choose to catch or hijack a specific signal (read more on hijacking).
","boost":4},{"location":"stopping_events/signals/#signal-catchers","title":"Signal Catchers","text":"Signal catchers can be created to register stopping events for when a signal is received.
Multiple catchers for the same signal
Please note that there can be at most one user-defined catcher or hijack for each signal. If a new catcher is defined for a signal that is already caught or hijacked, the new catcher will replace the old one, and a warning will be printed.
","boost":4},{"location":"stopping_events/signals/#libdebug-api-for-signal-catching","title":"libdebug API for Signal Catching","text":"The catch_signal() function in the Debugger object registers a catcher for the specified signal.
Function Signature
d.catch_signal(signal, callback=None, recursive=False) \n Parameters:
Argument Type Descriptionsignal int | str The signal number or name to catch. If set to \"*\" or \"all\", all signals will be caught. callback Callable | bool (see callback signature here) The callback function to be executed when the signal is received. recursive bool If set to True, the catcher's callback will be executed even if the signal was triggered by a hijack. Returns:
Return Type DescriptionSignalCatcher SignalCatcher The catcher object created. Inside a callback or when the process stops on hitting your catcher, you can retrieve the signal number that triggered the catcher by accessing the signal_number attribute of the ThreadContext object. Alternatively, if one exists, the signal attribute of the ThreadContext will contain the signal mnemonic corresponding to the signal number. This is particularly useful when your catcher is registered for multiple signals (e.g., with the all option), since the catcher object alone cannot tell you which signal caused the stop.
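The mapping between signal numbers and mnemonics can be inspected with Python's standard library signal module, as in this sketch (libdebug performs its own resolution internally; the numeric values shown assume Linux on x86_64/AArch64):

```python
# Illustration of the int <-> mnemonic mapping that catch_signal() accepts.
# This uses the standard library's signal module, not libdebug internals.
import signal

# On Linux, SIGUSR1 is signal 10 and SIGINT is signal 2
print(signal.SIGUSR1.value)    # 10 on Linux
print(signal.Signals(2).name)  # 'SIGINT'

# Resolve a user-supplied "int | str" value to a signal number
def resolve_signal(sig):
    if isinstance(sig, str):
        return signal.Signals[sig].value
    return signal.Signals(sig).value

print(resolve_signal("SIGINT"))
print(resolve_signal(10))
```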
Callback Signature
def callback(t: ThreadContext, catcher: SignalCatcher):\n Parameters:
Argument Type Descriptiont ThreadContext The thread that received the signal. catcher SignalCatcher The SignalCatcher object that triggered the callback. Signals in multi-threaded applications
In the Linux kernel, an incoming signal could be delivered to any thread in the process. Please do not assume that the signal will be delivered to a specific thread in your scripts.
Example usage of asynchronous signal catchers
from libdebug import debugger\n\nd = debugger(\"./test_program\")\nd.run()\n\n# Define the callback function\ndef catcher_SIGUSR1(t, catcher):\n t.signal = 0x0 # (1)!\n print(\"Look mum, I'm catching a signal\")\n\ndef catcher_SIGINT(t, catcher):\n print(\"Look mum, I'm catching another signal\")\n\n# Register the signal catchers\ncatcher1 = d.catch_signal(10, callback=catcher_SIGUSR1)\ncatcher2 = d.catch_signal('SIGINT', callback=catcher_SIGINT)\n\nd.cont()\nd.wait()\n 0x0 to prevent the signal from being delivered to the process. (Equivalent to filtering the signal).Example of synchronous signal catching
from libdebug import debugger\n\nd = debugger(\"./test_program\")\nd.run()\n\ncatcher = d.catch_signal(10)\nd.cont()\n\nif catcher.hit_on(d):\n print(\"Signal 10 was caught\")\n The script above will print "Signal 10 was caught".
Example of all signal catching
from libdebug import debugger\n\ndef catcher(t, catcher):\n print(f\"Signal {t.signal_number} ({t.signal}) was caught\")\n\nd = debugger(\"./test_program\")\nd.run()\n\ncatcher = d.catch_signal(\"all\")\nd.cont()\nd.wait()\n The script above will print the number and mnemonic of the signal that was caught.
","boost":4},{"location":"stopping_events/signals/#hijacking","title":"Hijacking","text":"When hijacking a signal, the user can provide an alternative signal to be executed in place of the original one. Internally, the hijack is implemented by registering a catcher for the signal and replacing the signal number with the new one.
Function Signature
d.hijack_signal(original_signal, new_signal, recursive=False) \n Parameters:
Argument Type Descriptionoriginal_signal int | str The signal number or name to be hijacked. If set to \"*\" or \"all\", all signals except the restricted ones will be hijacked. new_signal int | str The signal number or name to be delivered instead. recursive bool If set to True, the catcher's callback will be executed even if the signal was dispatched by a hijack. Returns:
Return Type DescriptionSignalCatcher SignalCatcher The catcher object created. Example of hijacking a signal
#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <signal.h>\n\n// Handler for SIGALRM\nvoid handle_sigalrm(int sig) {\n printf(\"You failed. Better luck next time\\n\");\n exit(1);\n}\n\n// Handler for SIGUSR1\nvoid handle_sigusr1(int sig) {\n printf(\"Congrats: flag{pr1nt_pr0vol4_1s_th3_w4y}\\n\");\n exit(0);\n}\n\nint main() {\n // Set up the SIGALRM handler\n struct sigaction sa_alrm;\n sa_alrm.sa_handler = handle_sigalrm;\n sigemptyset(&sa_alrm.sa_mask);\n sa_alrm.sa_flags = 0;\n sigaction(SIGALRM, &sa_alrm, NULL);\n\n // Set up the SIGUSR1 handler\n struct sigaction sa_usr1;\n sa_usr1.sa_handler = handle_sigusr1;\n sigemptyset(&sa_usr1.sa_mask);\n sa_usr1.sa_flags = 0;\n sigaction(SIGUSR1, &sa_usr1, NULL);\n\n // Set an alarm to go off after 10 seconds\n alarm(10);\n\n printf(\"Waiting for a signal...\\n\");\n\n // Infinite loop, waiting for signals\n while (1) {\n pause(); // Suspend the program until a signal is caught\n }\n\n return 0;\n}\n from libdebug import debugger\n\nd = debugger(\"./test_program\")\nd.run()\n\nhandler = d.hijack_signal(\"SIGALRM\", \"SIGUSR1\")\n\nd.cont()\n\n# Will print \"Waiting for a signal...\"\nout = pipe.recvline()\nprint(out.decode())\n\nd.wait()\n\n# Will print the flag\nout = pipe.recvline()\nprint(out.decode())\n","boost":4},{"location":"stopping_events/signals/#signal-filtering","title":"Signal Filtering","text":"Instead of setting a catcher on signals, you might want to filter which signals are not to be forwarded to the debugged process during execution.
Example of signal filtering
d.signals_to_block = [10, 15, 'SIGINT', 3, 13]\n","boost":4},{"location":"stopping_events/signals/#arbitrary-signals","title":"Arbitrary Signals","text":"You can also send an arbitrary signal to the process. The signal will be forwarded upon resuming execution. As always, you can specify the signal number or name.
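Since signals_to_block accepts a mix of numbers and names, normalizing such a list can be sketched with the standard library (an illustration, not libdebug internals; the numeric values assume Linux on x86_64/AArch64):

```python
# Sketch: normalize a mixed list of signal numbers and names, like the one
# assigned to d.signals_to_block above, into plain signal numbers.
import signal

def normalize(signals):
    out = []
    for s in signals:
        if isinstance(s, str):
            out.append(signal.Signals[s].value)  # name -> number
        else:
            out.append(int(s))                   # already a number
    return out

print(normalize([10, 15, "SIGINT", 3, 13]))  # [10, 15, 2, 3, 13] on Linux
```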
Example of sending an arbitrary signal
d.signal = 10\nd.cont()\n In multithreaded applications, the same syntax applies when using a ThreadContext object instead of the Debugger object.
","boost":4},{"location":"stopping_events/stopping_events/","title":"Stopping Events","text":"Debugging a process involves stopping the execution at specific points to inspect the state of the program. libdebug provides several ways to stop the execution of a program, such as breakpoints, syscall handling and signal catching. This section covers the different stopping events available in libdebug.
","boost":4},{"location":"stopping_events/stopping_events/#is-the-process-running","title":"Is the process running?","text":"Before we dive into the different stopping events, it is important to understand how to check if the process is running. The running attribute of the Debugger object returns True if the process is running and False otherwise.
Example
from libdebug import debugger\n\nd = debugger(\"program\")\n\nd.run()\n\nif d.running:\n print(\"The process is running\")\nelse:\n print(\"The process is not running\")\n In this example, the script should print The process is not running, since the run() command gives you control over a stopped process, ready to be debugged.
To know more on how to wait for the process to stop or forcibly cause it to stop, please read about control flow commands.
","boost":4},{"location":"stopping_events/syscalls/","title":"Syscalls","text":"System calls (a.k.a. syscalls or software interrupts) are the interface between user space and kernel space. They are used to request services from the kernel, such as reading from a file or creating a new process. libdebug allows you to trace syscalls invoked by the debugged program. Specifically, you can choose to handle or hijack a specific syscall (read more on hijacking).
For extra convenience, the Debugger and the ThreadContext objects provide a system-agnostic interface to the arguments and return values of syscalls. Interacting directly with these parameters enables you to create scripts that are independent of the syscall calling convention specific to the target architecture.
Field Descriptionsyscall_number The number of the syscall. syscall_arg0 The first argument of the syscall. syscall_arg1 The second argument of the syscall. syscall_arg2 The third argument of the syscall. syscall_arg3 The fourth argument of the syscall. syscall_arg4 The fifth argument of the syscall. syscall_arg5 The sixth argument of the syscall. syscall_return The return value of the syscall. Example of Syscall Parameters
[...] # (1)!\n\nbinsh_str = d.memory.find(b\"/bin/sh\\x00\", file=\"libc\")[0]\n\nd.syscall_arg0 = binsh_str\nd.syscall_arg1 = 0x0\nd.syscall_arg2 = 0x0\nd.syscall_number = 0x3b\n\nd.step() # (2)!\n execve('/bin/sh', 0, 0) will be executed in place of the previous syscall.Syscall handlers can be created to register stopping events for when a syscall is entered and exited.
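Behind the system-agnostic fields, each architecture has its own syscall calling convention. The mapping below shows the standard x86_64 Linux ABI (shown here for illustration; libdebug resolves the registers for you, which is the point of the agnostic interface):

```python
# How the system-agnostic syscall fields map onto x86_64 Linux registers.
# This is the standard kernel calling convention for that architecture.
AMD64_SYSCALL_REGS = {
    "syscall_number": "rax",
    "syscall_arg0": "rdi",
    "syscall_arg1": "rsi",
    "syscall_arg2": "rdx",
    "syscall_arg3": "r10",  # note: r10, not rcx, in the kernel convention
    "syscall_arg4": "r8",
    "syscall_arg5": "r9",
    "syscall_return": "rax",
}

# execve is syscall 59 (0x3b) on x86_64 Linux, as used in the example above
print(hex(59))  # 0x3b
print(AMD64_SYSCALL_REGS["syscall_arg0"])  # rdi
```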
Do I have to handle both on enter and on exit?
When using asynchronous syscall handlers, you can choose to handle both or only one of the two events. However, when using synchronous handlers, both events will stop the process.
","boost":4},{"location":"stopping_events/syscalls/#libdebug-api-for-syscall-handlers","title":"libdebug API for Syscall Handlers","text":"The handle_syscall() function in the Debugger object registers a handler for the specified syscall.
Function Signature
d.handle_syscall(syscall, on_enter=None, on_exit=None, recursive=False) \n Parameters:
Argument Type Descriptionsyscall int | str The syscall number or name to be handled. If set to \"*\" or \"all\" or \"ALL\", all syscalls will be handled. on_enter Callable | bool (see callback signature here) The callback function to be executed when the syscall is entered. on_exit Callable | bool (see callback signature here) The callback function to be executed when the syscall is exited. recursive bool If set to True, the handler's callback will be executed even if the syscall was triggered by a hijack or caused by a callback. Returns:
Return Type DescriptionSyscallHandler SyscallHandler The handler object created.","boost":4},{"location":"stopping_events/syscalls/#callback-signature","title":"Callback Signature","text":"Callback Signature
def callback(t: ThreadContext, handler: HandledSyscall) -> None:\n Parameters:
Argument Type Descriptiont ThreadContext The thread that hit the syscall. handler SyscallHandler The SyscallHandler object that triggered the callback. Nuances of Syscall Handling
The syscall handler is the only stopping event that can be triggered by the same syscall twice in a row. This is because the handler is triggered both when the syscall is entered and when it is exited. As a result, the hit_on() method of the SyscallHandler object will return True in both instances.
You can also use the hit_on_enter() and hit_on_exit() functions to check if the cause of the process stop was the syscall entering or exiting, respectively.
As for the hit_count attribute, it only stores the number of times the syscall was exited.
Example usage of asynchronous syscall handlers
def on_enter_open(t, handler):\n print(\"entering open\")\n t.syscall_arg0 = 0x1\n\ndef on_exit_open(t, handler):\n print(\"exiting open\")\n t.syscall_return = 0x0\n\nhandler = d.handle_syscall(syscall=\"open\", on_enter=on_enter_open, on_exit=on_exit_open)\n Example of synchronous syscall handling
from libdebug import debugger\n\nd = debugger(\"./test_program\")\nd.run()\n\nhandler = d.handle_syscall(syscall=\"open\")\nd.cont()\n\nif handler.hit_on_enter(d):\n print(\"open syscall was entered\")\nelif handler.hit_on_exit(d):\n print(\"open syscall was exited\")\n The script above will print \"open syscall was entered\".
","boost":4},{"location":"stopping_events/syscalls/#resolution-of-syscall-numbers","title":"Resolution of Syscall Numbers","text":"Syscall handlers can be created with the identifier number of the syscall or by the syscall's common name. In the second case, syscall names are resolved from a definition list for Linux syscalls on the target architecture. The list is fetched from mebeim's syscall table. We thank him for hosting such a precious resource. Once downloaded, the list is cached internally.
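Name resolution against a cached table can be sketched as a simple lookup. The numbers below are a tiny excerpt of the real x86_64 Linux table; the "*"/"all" sentinel handling is an assumption for illustration, not libdebug's internal representation:

```python
# Sketch of syscall-name resolution against a cached table. libdebug fetches
# the full per-architecture list (from mebeim's syscall table) and caches it;
# this excerpt hardcodes a few well-known x86_64 Linux numbers.
X86_64_SYSCALLS = {"read": 0, "write": 1, "open": 2, "close": 3, "execve": 59}

def resolve_syscall(syscall):
    if isinstance(syscall, int):
        return syscall  # numbers pass through unchanged
    if syscall in ("*", "all", "ALL"):
        return -1  # hypothetical sentinel meaning "every syscall"
    return X86_64_SYSCALLS[syscall]

print(resolve_syscall("write"))  # 1
print(resolve_syscall(59))       # 59
print(resolve_syscall("ALL"))    # -1
```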
","boost":4},{"location":"stopping_events/syscalls/#hijacking","title":"Hijacking","text":"When hijacking a syscall, the user can provide an alternative syscall to be executed in place of the original one. Internally, the hijack is implemented by registering a handler for the syscall and replacing the syscall number with the new one.
Function Signature
d.hijack_syscall(original_syscall, new_syscall, recursive=False, **kwargs) \n Parameters:
Argument Type Descriptionoriginal_syscall int | str The syscall number or name to be hijacked. If set to \"*\" or \"all\" or \"ALL\", all syscalls will be hijacked. new_syscall int | str The syscall number or name to be executed instead. recursive bool If set to True, the handler's callback will be executed even if the syscall was triggered by a hijack or caused by a callback. **kwargs (int, optional) Additional arguments to be passed to the new syscall. Returns:
Return Type DescriptionSyscallHandler SyscallHandler The handler object created. Example of hijacking a syscall
#include <unistd.h>\n\nchar secretBuffer[32] = \"The password is 12345678\";\n\nint main(int argc, char** argv)\n{\n [...]\n\n read(0, secretBuffer, 31);\n\n [...]\n return 0;\n}\n from libdebug import debugger\n\nd = debugger(\"./test_program\")\nd.run()\n\nhandler = d.hijack_syscall(\"read\", \"write\")\n\nd.cont()\nd.wait()\n\nout = pipe.recvline()\nprint(out.decode())\n In this case, the secret will be leaked to the standard output instead of being overwritten with content from the standard input.
For your convenience, you can also easily provide the syscall parameters to be used when the hijacked syscall is executed:
Example of hijacking a syscall with parameters
#include <unistd.h>\n\nchar manufacturerName[32] = \"libdebug\";\nchar secretKey[32] = \"provola\";\n\nint main(int argc, char** argv)\n{\n [...]\n\n read(0, manufacturerName, 31);\n\n [...]\n return 0;\n}\n from libdebug import debugger\n\nd = debugger(\"./test_program\")\nd.run()\n\nmanufacturerBuffer = ...\n\nhandler = d.hijack_syscall(\"read\", \"write\",\n syscall_arg0=0x1,\n syscall_arg1=manufacturerBuffer,\n syscall_arg2=0x100\n)\n\nd.cont()\nd.wait()\n\nout = pipe.recvline()\nprint(out.decode())\n Again, the secret will be leaked to the standard output.
","boost":4},{"location":"stopping_events/watchpoints/","title":"Watchpoints","text":"Watchpoints are a special type of hardware breakpoint that triggers when a specific memory location is accessed. You can set a watchpoint to trigger on certain memory access conditions, or upon execution (equivalent to a hardware breakpoint).
Features of watchpoints are shared with breakpoints, so you can set asynchronous watchpoints and use properties in the same way.
","boost":4},{"location":"stopping_events/watchpoints/#libdebug-api-for-watchpoints","title":"libdebug API for Watchpoints","text":"The watchpoint() function in the Debugger object sets a watchpoint at a specific address. While you can also use the breakpoint API to set up a watchpoint, a specific API is provided for your convenience:
Function Signature
d.watchpoint(position, condition='w', length=1, callback=None, file='hybrid') \n Parameters:
Argument Type Descriptionposition int | str The address or symbol where the watchpoint will be set. condition str The type of access (see later section). length int The size of the word being watched (see later section). callback Callable | bool (see callback signature here) Used to create asynchronous watchpoints (read more on the debugging flow of stopping events). file str The backing file for relative addressing. Refer to the memory access section for more information on addressing modes. Returns:
Return Type DescriptionBreakpoint Breakpoint The breakpoint object created.","boost":4},{"location":"stopping_events/watchpoints/#valid-access-conditions","title":"Valid Access Conditions","text":"The condition parameter specifies the type of access that triggers the watchpoint. Default is write access.
\"r\" Read access AArch64 \"w\" Write access AMD64, AArch64 \"rw\" Read/write access AMD64, AArch64 \"x\" Execute access AMD64","boost":4},{"location":"stopping_events/watchpoints/#valid-word-lengths","title":"Valid Word Lengths","text":"The length parameter specifies the size of the word being watched. By default, the watchpoint is set to watch a single byte.
Watchpoint alignment in AArch64
The address of the watchpoint on AArch64-based CPUs needs to be aligned to 8 bytes. Instead, basic hardware breakpoints have to be aligned to 4 bytes (which is the size of an ARM instruction).
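The alignment rules above can be checked with simple modular arithmetic, as in this sketch (an illustration only; libdebug validates addresses itself):

```python
# Pure-Python sketch of the AArch64 alignment rules stated above:
# watchpoint addresses must be 8-byte aligned, while basic hardware
# breakpoints must be 4-byte aligned (the size of an ARM instruction).
def is_valid_aarch64_address(address: int, is_watchpoint: bool) -> bool:
    alignment = 8 if is_watchpoint else 4
    return address % alignment == 0

print(is_valid_aarch64_address(0x4000, is_watchpoint=True))   # True
print(is_valid_aarch64_address(0x4004, is_watchpoint=True))   # False: not 8-byte aligned
print(is_valid_aarch64_address(0x4004, is_watchpoint=False))  # True: 4-byte aligned
```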
","boost":4},{"location":"stopping_events/watchpoints/#callback-signature","title":"Callback Signature","text":"If you wish to create an asynchronous watchpoint, you will have to provide a callback function. Since internally watchpoints are implemented as hardware breakpoints, the callback signature is the same as for breakpoints. As for breakpoints, if you want to leave the callback empty, you can set callback to True.
Callback Signature
def callback(t: ThreadContext, bp: Breakpoint):\n Parameters:
Argument Type Descriptiont ThreadContext The thread that hit the breakpoint. bp Breakpoint The breakpoint object that triggered the callback. Example usage of asynchronous watchpoints
def on_watchpoint_hit(t, bp):\n print(f\"RAX: {t.regs.rax:#x}\")\n\n if bp.hit_count == 100:\n print(\"Hit count reached 100\")\n bp.disable()\n\nd.watchpoint(0x11f0, condition=\"rw\", length=8, callback=on_watchpoint_hit, file=\"binary\")\n","boost":4},{"location":"blog/archive/2025/","title":"2025","text":""},{"location":"blog/archive/2024/","title":"2024","text":""}]}
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":"","boost":2},{"location":"#quick-start","title":"Quick Start","text":"Welcome to libdebug! This powerful Python library can be used to debug your binary executables programmatically, providing a robust, user-friendly interface. Debugging multithreaded applications can be a nightmare, but libdebug has you covered. Hijack and manage signals and syscalls with a simple API.
Supported Systems
libdebug currently supports Linux under the x86_64, x86 and ARM64 architectures. Other operating systems and architectures are not supported at this time.
","boost":2},{"location":"#dependencies","title":"Dependencies","text":"To install libdebug, you first need to have some dependencies that will not be automatically resolved. These dependencies are libraries, utilities and development headers which are required by libdebug to compile its internals during installation.
Ubuntu Arch Linux Fedora Debiansudo apt install -y python3 python3-dev g++ libdwarf-dev libelf-dev libiberty-dev linux-headers-generic libc6-dbg\n sudo pacman -S python libelf libdwarf gcc make debuginfod\n sudo dnf install -y python3 python3-devel kernel-devel g++ binutils-devel libdwarf-devel\n sudo apt install -y python3 python3-dev g++ libdwarf-dev libelf-dev libiberty-dev linux-headers-generic libc6-dbg\n Is your distro missing?
If you are using a Linux distribution that is not included in this section, you can search for equivalent packages for your distro. Chances are the naming convention of your system's repository will only change a prefix or suffix.
","boost":2},{"location":"#installation","title":"Installation","text":"Installing libdebug once you have dependencies is as simple as running the following command:
stablepython3 -m pip install libdebug\n If you want to test your installation when installing from source, we provide a suite of tests that you can run:
Testing your installationgit clone https://github.com/libdebug/libdebug\ncd libdebug/test\npython run_suite.py\n For more advanced users, please refer to the Building libdebug from source page for more information on the build process.
","boost":2},{"location":"#your-first-script","title":"Your First Script","text":"Now that you have libdebug installed, you can start using it in your scripts. Here is a simple example of how to use libdebug to debug an executable:
libdebug's Hello World!from libdebug import debugger\n\nd = debugger(\"./test\") # (1)!\n\n# Start debugging from the entry point\nd.run() # (2)!\n\nmy_breakpoint = d.breakpoint(\"function\") # (3)!\n\n# Continue the execution until the breakpoint is hit\nd.cont() # (4)!\n\n# Print RAX\nprint(f\"RAX is {hex(d.regs.rax)}\") # (5)!\n test executable<function> in the binaryUsing pwntools alongside libdebug
The current version of libdebug is incompatible with pwntools.
While having both installed in your Python environment is not a problem, starting a process with pwntools in a libdebug script will cause unexpected behaviors as a result of some race conditions.
Examples of some known issues include:
ptrace not intercepting SIGTRAP signals when the process is run with pwntools. This behavior is described in Issue #48.shell=True will cause the process to attach to the shell process instead. This behavior is described in Issue #57.The documentation for versions of libdebug older than 0.7.0 has to be accessed manually at http://docs.libdebug.org/archive/VERSION, where VERSION is the version number you are looking for.
Need to cite libdebug as software used in your work? This is the way to cite us:
@software{libdebug_2024,\n title = {libdebug: {Build} {Your} {Own} {Debugger}},\n copyright = {MIT Licence},\n url = {https://libdebug.org},\n publisher = {libdebug.org},\n author = {Digregorio, Gabriele and Bertolini, Roberto Alessandro and Panebianco, Francesco and Polino, Mario},\n year = {2024},\n doi = {10.5281/zenodo.13151549},\n}\n We also have a poster on libdebug. If you use libdebug in your research, you can cite the associated short paper:
@inproceedings{10.1145/3658644.3691391,\nauthor = {Digregorio, Gabriele and Bertolini, Roberto Alessandro and Panebianco, Francesco and Polino, Mario},\ntitle = {Poster: libdebug, Build Your Own Debugger for a Better (Hello) World},\nyear = {2024},\nisbn = {9798400706363},\npublisher = {Association for Computing Machinery},\naddress = {New York, NY, USA},\nurl = {https://doi.org/10.1145/3658644.3691391},\ndoi = {10.1145/3658644.3691391},\nbooktitle = {Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security},\npages = {4976\u20134978},\nnumpages = {3},\nkeywords = {debugging, reverse engineering, software security},\nlocation = {Salt Lake City, UT, USA},\nseries = {CCS '24}\n}\n","boost":2},{"location":"basics/command_queue/","title":"Default VS ASAP Mode","text":"For most commands that can be issued in libdebug, it is necessary that the traced process stops running. When the traced process stops running as a result of a stopping event, libdebug can inspect the state and intervene in its control flow. When one of these commands is used in the script as the process is still running, libdebug will wait for the process to stop before executing the command.
In the following example, the content of the RAX register is printed after the program hits the breakpoint or stops for any other reason:
from libdebug import debugger\n\nd = debugger(\"program\")\nd.run()\n\nd.breakpoint(\"func\", file=\"binary\")\n\nd.cont()\n\nprint(f\"RAX: {hex(d.regs.rax)}\")\n Script execution
Please note that, after resuming execution of the tracee process, the script will continue to run. This means that the script will not wait for the process to stop before continuing with the rest of the script. If the next command is a libdebug command that requires the process to be stopped, the script will then wait for a stopping event before executing that command.
In the following example, we make a similar scenario, but show how you can inspect the state of the process by arbitrarily stopping it in the default mode.
d = debugger(\"program\")\n\nd.run()\n\nd.breakpoint(\"func\", file=\"binary\")\n\nd.cont()\n\nprint(f\"RAX: {hex(d.regs.rax)}\") # (1)!\n\nd.cont()\nd.interrupt() # (2)!\n\nprint(f\"RAX: {hex(d.regs.rax)}\") # (3)!\n\nd.cont()\n\n[...]\n If you want the command to be executed As Soon As Possible (ASAP) instead of waiting for a stopping event, you can specify it when creating the Debugger object. In this mode, the debugger will stop the process and issue the command as it runs your script without waiting. The following script has the same behavior as the previous one, using the corresponding option:
d = debugger(\"program\", auto_interrupt_on_command=True)\n\nd.run()\n\nd.breakpoint(\"func\", file=\"binary\")\n\nd.cont()\nd.wait()\n\nprint(f\"RAX: {hex(d.regs.rax)}\") # (1)!\n\nd.cont()\n\nprint(f\"RAX: {hex(d.regs.rax)}\") # (2)!\n\nd.cont()\n\n[...]\n For the sake of this example the wait() method is used to wait for the stopping event (in this case, a breakpoint). This enforces the synchronization of the execution to the stopping point that we want to reach. Read more about the wait() method in the section dedicated to control flow commands.
Pwning with libdebug
Respectable pwners in the field find that the ASAP polling mode is particularly useful when writing exploits.
","boost":4},{"location":"basics/control_flow_commands/","title":"Control Flow Commands","text":"Control flow commands allow you to step through the code, stop execution, and resume it at your pleasure.
","boost":4},{"location":"basics/control_flow_commands/#stepping","title":"Stepping","text":"A basic feature of any debugger is the ability to step through the code. libdebug provides several methods to step, some of which will be familiar to users of other debuggers.
","boost":4},{"location":"basics/control_flow_commands/#single-step","title":"Single Step","text":"The step() command executes the instruction at the instruction pointer and stops the process. When possible, it uses the hardware single-step feature of the CPU for better performance.
Function Signature
d.step()\n","boost":4},{"location":"basics/control_flow_commands/#next","title":"Next","text":"The next() command executes the current instruction at the instruction pointer and stops the process. If the instruction is a function call, it will execute the whole function and stop at the instruction following the call. In other debuggers, this command is known as \"step over\".
Please note that the next() command resumes the execution of the program if the instruction is a function call. This means that the debugger can encounter stopping events in the middle of the function, causing the command to return before the function finishes.
Function Signature
d.next()\n Damn heuristics!
The next() command uses heuristics to determine if the instruction is a function call and to find the stopping point. This means that the command may not work as expected in some cases (e.g. functions called with a jump, non-returning calls).
The step_until() command executes single steps until a specific address is reached. Optionally, you can also limit steps to a maximum count (default value is -1, meaning no limit).
Function Signature
d.step_until(position, max_steps=-1, file='hybrid') \n The file parameter can be used to specify the choice on relative addressing. Refer to the memory access section for more information on addressing modes.
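A minimal sketch of step_until() in use, assuming hybrid addressing (the target binary name and the relative address 0x1234 are hypothetical):

```python
from libdebug import debugger

d = debugger("program")  # hypothetical target binary
d.run()

# Single-step until the (hypothetical) address 0x1234, relative to the
# base of the binary, is reached, giving up after at most 1000 steps.
d.step_until(0x1234, max_steps=1000, file="binary")

print(f"Stopped at {hex(d.regs.rip)}")
```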
","boost":4},{"location":"basics/control_flow_commands/#continuing","title":"Continuing","text":"The cont() command continues the execution.
Function Signature
d.cont()\n For example, in the following script, libdebug will not wait for the process to stop before checking d.dead. To change this behavior, you can use the wait() command right after the cont().
from libdebug import debugger\n\nd = debugger(\"program_that_dies_tragically\")\n\nd.run()\n\nd.cont()\n\nif d.dead:\n print(\"The program is dead!\")\n","boost":4},{"location":"basics/control_flow_commands/#the-wait-method","title":"The wait() Method","text":"The wait() command is likely the most important in libdebug. Loved by most and hated by many, it instructs the debugger to wait for a stopping event before continuing with the execution of the script.
Example
In the following script, libdebug will wait for the process to stop before printing \"provola\".
from libdebug import debugger\n\nd = debugger(\"program_that_dies_tragically\")\n\nd.run()\n\nd.cont()\nd.wait()\n\nprint(\"provola\")\n","boost":4},{"location":"basics/control_flow_commands/#interrupt","title":"Interrupt","text":"You can manually issue a stopping signal to the program using the interrupt() command. Clearly, this command is issued as soon as it is executed within the script.
Function Signature
d.interrupt()\n","boost":4},{"location":"basics/control_flow_commands/#finish","title":"Finish","text":"The finish() command continues execution until the current function returns or a breakpoint is hit. In other debuggers, this command is known as \"step out\".
Function Signature
d.finish(heuristic='backtrace')\n Damn heuristics!
The finish() command uses heuristics to determine the end of a function. While libdebug allows you to choose the heuristic, it is possible that none of the available options works in some specific cases (e.g., tail calls, non-returning calls).
The finish() command allows you to choose the heuristic to use. If you don't specify any, the \"backtrace\" heuristic will be used. The following heuristics are available:
backtrace: uses the return address saved on the function's stack frame to determine the end of the function. This is the default heuristic, but it may fail in case of a broken stack, rare execution flows, or obscure compiler optimizations.\nstep-mode: uses repeated single steps to execute instructions until a ret instruction is reached. Nested calls are handled, as long as the calling convention is respected. This heuristic is slower and may fail in case of rare execution flows or obscure compiler optimizations.","boost":4},{"location":"basics/detach_and_gdb/","title":"Detach and GDB Migration","text":"In libdebug, you can detach from the debugged process and continue execution with the detach() method.
Function Signature
d.detach()\n Detaching from a running process
Remember that detaching from a process is meant to be used when the process is stopped. If the process is running, the command will wait for a stopping event. To forcibly stop the process, you can use the interrupt() method before migrating.
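A minimal sketch of forcibly stopping a running process before detaching (the target binary name is hypothetical):

```python
from libdebug import debugger

d = debugger("program")  # hypothetical target binary
d.run()
d.cont()

# The process may still be running: stop it explicitly, then detach
# so it continues execution without the debugger attached.
d.interrupt()
d.detach()
```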
If at any time during your script you want to take a more traditional approach to debugging, you can seamlessly switch to GDB. This will temporarily detach libdebug from the program and give you control over the program using GDB. Quitting GDB or using the goback command will return control to libdebug.
Function Signature
d.gdb(\n    migrate_breakpoints: bool = True,\n    open_in_new_process: bool = True,\n    blocking: bool = True,\n) -> GdbResumeEvent:\n Parameters:\nmigrate_breakpoints (bool): if set to True, libdebug will migrate the breakpoints to GDB.\nopen_in_new_process (bool): if set to True, libdebug will open GDB in a new process.\nblocking (bool): if set to True, libdebug will wait for the user to terminate the GDB session before continuing the script.\nSetting blocking to False is useful when you want to continue using the pipe interaction and other parts of your script while you take control of the debugging process.
When blocking is set to False, the gdb() method will return a GdbResumeEvent object. This object can be used to wait for the GDB session to finish before continuing the script.
Example of using non-blocking GDB migration
from libdebug import debugger\nd = debugger(\"program\")\npipe = d.run()\n\n# Reach interesting point in the program\n[...]\n\ngdb_event = d.gdb(blocking = False)\n\npipe.sendline(b\"dump interpret\")\n\nwith open(\"dump.bin\", \"rb\") as f:\n pipe.send(f.read())\n\ngdb_event.join() # (1)!\n Please consider a few requirements when opening GDB in a new process. For this mode to work, libdebug needs to know which terminal emulator you are using. If not set, libdebug will try to detect this automatically. In some cases, detection may fail. You can manually set the terminal command in libcontext. If instead of opening GDB in a new terminal window you want to use the current terminal, you can simply set the open_in_new_process parameter to False.
Example of setting the terminal with tmux
from libdebug import libcontext\n\nlibcontext.terminal = ['tmux', 'splitw', '-h']\n Migrating from a running process
Remember that GDB Migration is meant to be used when the process is stopped. If the process is running, the command will wait for a stopping event. To forcibly stop the process, you can use the interrupt() method before migrating.
If you are finished working with a Debugger object and wish to deallocate it, you can terminate it using the terminate() command.
Function Signature
d.terminate()\n What happens to the running process?
When you terminate a Debugger object, the process is forcibly killed. If you wish to detach from the process and let it continue executing before terminating the debugger, you should use the detach() command beforehand.
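That sequence can be sketched as follows (the target binary name is hypothetical):

```python
from libdebug import debugger

d = debugger("program")  # hypothetical target binary
d.run()

# Release the process so it keeps running on its own,
# then deallocate the Debugger object.
d.detach()
d.terminate()
```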
The default behavior in libdebug is to kill the debugged process when the script exits. This is done to prevent the process from running indefinitely if the debugging script terminates or you forget to kill it manually. When creating a Debugger object, you can set the kill_on_exit attribute to False to prevent this behavior:
from libdebug import debugger\n\nd = debugger(\"test\", kill_on_exit=False)\n You can also change this attribute in an existing Debugger object at runtime:
d.kill_on_exit = False\n Behavior when attaching to a process
When debugging is initiated by attaching to an existing process, the kill_on_exit policy is enforced in the same way as when starting a new process.
You can kill the process any time the process is stopped using the kill() method:
Function Signature
d.kill()\n The method sends a SIGKILL signal to the process, which terminates it immediately. If the process is already dead, libdebug will raise an exception. When multiple threads are running, the kill() method will kill all threads under the parent process.
Process Stop
The kill() method will not stop a running process, unless libdebug is operating in ASAP Mode. Just like other commands, in the default mode, the kill() method will wait for the process to stop before executing.
You can check if the process is dead using the dead property:
if not d.dead:\n print(\"The process is not dead\")\nelse:\n print(\"The process is dead\")\n The running property
The Debugger object also exposes the running property. This is not the opposite of dead. The running property is True when the process is not stopped and False otherwise. If execution was stopped by a stopping event, the running property will be equal to False. However, in this case the process can still be alive.
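The distinction can be sketched as follows (the target binary name is hypothetical):

```python
from libdebug import debugger

d = debugger("program")  # hypothetical target binary
d.run()
d.cont()
d.interrupt()  # stopping event: the process stops, but stays alive

# Stopped by a stopping event: not running, yet possibly not dead.
assert not d.running
print("alive" if not d.dead else "dead")
```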
Has your process passed away unexpectedly? We are sorry to hear that. If your process is indeed defunct, you can access the exit code and signal using exit_code and exit_signal. When there is no valid exit code or signal, these properties will return None.
if d.dead:\n print(f\"The process exited with code {d.exit_code}\")\n\nif d.dead:\n print(f\"The process exited with signal {d.exit_signal}\")\n","boost":4},{"location":"basics/kill_and_post_mortem/#zombie-processes-and-threads","title":"Zombie Processes and Threads","text":"When a process dies, it becomes a zombie process. This means that the process has terminated, but its parent process has not yet read its exit status. In libdebug, you can check if the process is a zombie using the zombie property of the Debugger object. This is particularly relevant in multi-threaded applications. To read more about this, check the dedicated section on zombie processes.
Example Code
if d.zombie:\n print(\"The process is a zombie\")\n","boost":4},{"location":"basics/libdebug101/","title":"libdebug 101","text":"Welcome to libdebug! When writing a script to debug a program, the first step is to create a Debugger object. This object will be your main interface for debugging commands.
from libdebug import debugger\n\nd = debugger(argv=[\"./program\", \"arg1\", \"arg2\"]) # (1)!\n argv can either be a string (the name/path of the executable) or a list corresponding to the argument vector of the execution.\nAm I already debugging?
Creating a Debugger object will not start the execution automatically. You can reuse the same debugger to iteratively run multiple instances of the program. This is particularly useful for smart bruteforcing or fuzzing scripts.
Re-initializing the debugger for each run is unnecessary and can be expensive.
To run the executable, refer to Running an Executable
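The reuse pattern mentioned above can be sketched as follows (the binary name, I/O protocol, and success marker are all hypothetical):

```python
from libdebug import debugger

d = debugger("./program")  # hypothetical target binary

# Reuse the same Debugger object across runs, e.g. to bruteforce
# a one-byte input until the program reports success.
for guess in range(256):
    pipe = d.run()
    d.cont()
    pipe.sendline(bytes([guess]))
    output = pipe.recvline()      # hypothetical output protocol
    if not d.dead:
        d.kill()
    if b"Correct" in output:      # hypothetical success marker
        print(f"Found: {guess}")
        break
```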
","boost":4},{"location":"basics/libdebug101/#environment","title":"Environment","text":"Just as you would expect, you can also pass environment variables to the program using the env parameter. Here, the variables are passed as a string-string dictionary.
from libdebug import debugger\n\nd = debugger(\"test\", env = {\"LD_PRELOAD\": \"musl_libc.so\"})\n","boost":4},{"location":"basics/libdebug101/#address-space-layout-randomization-aslr","title":"Address Space Layout Randomization (ASLR)","text":"Modern operating system kernels implement mitigations against predictable addresses in binary exploitation scenarios. One such feature is ASLR, which randomizes the base address of mapped virtual memory pages (e.g., binary, libraries, stack). When debugging, this feature can become a nuisance for the user.
By default, libdebug keeps ASLR enabled. The debugger aslr parameter can be used to change this behavior.
from libdebug import debugger\n\nd = debugger(\"test\", aslr=False)\n","boost":4},{"location":"basics/libdebug101/#binary-entry-point","title":"Binary Entry Point","text":"When a child process is spawned on the Linux kernel through the ptrace system call, it is possible to trace it as soon as the loader has set up your executable. Debugging these first instructions inside the loader library is generally uninteresting.
For this reason, the default behavior for libdebug is to continue until the binary entry point (1) is reached. When you need to start debugging from the very beginning, you can simply disable this behavior in the following way:
(1) The binary entry point is the _start / __rt_entry symbol in your binary executable. This function is the initial stub that calls the main() function in your executable, through a call to the standard library of your system (e.g., __libc_start_main, __rt_lib_init).\nfrom libdebug import debugger\n\nd = debugger(\"test\", continue_to_binary_entrypoint=False)\n What the hell are you debugging?
Please note that this feature assumes the binary is well-formed. If the ELF header is corrupt, the binary entrypoint will not be resolved correctly. As such, setting this parameter to False is a good practice when you don't want libdebug to rely on this information.
The Debugger object has many more parameters it can take.
Function Signature
debugger(\n    argv=[],\n    aslr=True,\n    env=None,\n    escape_antidebug=False,\n    continue_to_binary_entrypoint=True,\n    auto_interrupt_on_command=False,\n    fast_memory=False,\n    kill_on_exit=True,\n    follow_children=True\n) -> Debugger\n Parameters:\nargv (str | list[str]): path to the binary, or the argv list.\naslr (bool): whether to enable ASLR. Defaults to True.\nenv (dict[str, str]): the environment variables to use. Defaults to the same environment as the parent process.\nescape_antidebug (bool): whether to automatically attempt to patch ptrace-based anti-debugging checks.\ncontinue_to_binary_entrypoint (bool): whether to automatically continue to the binary entry point.\nauto_interrupt_on_command (bool): whether to run libdebug in ASAP Mode.\nfast_memory (bool): whether to use the faster memory access method. Defaults to False.\nkill_on_exit (bool): whether to kill the debugged process when the debugger exits. Defaults to True.\nfollow_children (bool): whether to automatically monitor child processes. Defaults to True.\nReturns:\nDebugger: the debugger object.","boost":4},{"location":"basics/memory_access/","title":"Memory Access","text":"In libdebug, memory access is performed via the memory attribute of the Debugger object or the Thread Context. When reading from memory, a bytes-like object is returned. The following methods are available:
Access a single byte of memory by providing the address as an integer.
d.memory[0x1000]\n Access a range of bytes by providing the start and end addresses as integers.
d.memory[0x1000:0x1010]\n Access a range of bytes by providing the base address and length as integers.
d.memory[0x1000, 0x10]\n Access memory using a symbol name.
d.memory[\"function\", 0x8]\n When specifying a symbol, you can also provide an offset. Contrary to what happens in GDB, the offset is always interpreted as hexadecimal.
d.memory[\"function+a8\"]\n Access a range of bytes using a symbol name.
d.memory[\"function\":\"function+0f\"]\n Accessing memory with symbols
Please note that, unless otherwise specified, symbols are resolved in the debugged binary only. To resolve symbols in shared libraries, you need to indicate it in the third parameter of the function.
d.memory[\"__libc_start_main\", 0x8, \"libc\"]\n Writing to memory works similarly. You can write a bytes-like object to memory using the same addressing methods:
d.memory[d.regs.rsp, 0x10] = b\"AAAAAAABC\"\nd.memory[\"main_arena\", 16, \"libc\"] = b\"12345678\"\n Length/Slice when writing
When writing to memory, slices and length are ignored in favor of the length of the specified bytes-like object.
In the following example, only 4 bytes are written:
d.memory[\"main_arena\", 50] = b\"\\x0a\\xeb\\x12\\xfc\"\n","boost":4},{"location":"basics/memory_access/#absolute-and-relative-addressing","title":"Absolute and Relative Addressing","text":"Just like with symbols, memory addresses can also be accessed relative to a certain file base. libdebug uses \"hybrid\" addressing by default. This means it first attempts to resolve addresses as absolute. If the address does not correspond to an absolute one, it considers it relative to the base of the binary.
You can use the third parameter of the memory access method to select the file you want to use as base (e.g., libc, ld, binary). If you want to force libdebug to use absolute addressing, you can specify \"absolute\" instead.
Examples of relative and absolute addressing
# Absolute addressing\nd.memory[0x7ffff7fcb200, 0x10, \"absolute\"]\n\n# Hybrid addressing\nd.memory[0x1000, 0x10, \"hybrid\"]\n\n# Relative addressing\nd.memory[0x1000, 0x10, \"binary\"]\nd.memory[0x1000, 0x10, \"libc\"]\n","boost":4},{"location":"basics/memory_access/#searching-inside-memory","title":"Searching inside Memory","text":"The memory attribute of the Debugger object also allows you to search for specific values in the memory of the process. You can search for integers, strings, or bytes-like objects.
Function Signature
d.memory.find(\n value: int | bytes | str,\n file: str = \"all\",\n start: int | None = None,\n end: int | None = None,\n) -> list[int]:\n Parameters:
value (int | bytes | str): the value to search for.\nfile (str): the backing file to search in (e.g., binary, libc, stack). Defaults to \"all\".\nstart (int, optional): the start address of the search (works with both relative and absolute addresses).\nend (int, optional): the end address of the search (works with both relative and absolute addresses).\nReturns:
Addresses (list[int]): the list of memory addresses where the value was found.\nUsage Example
binsh_string_addr = d.memory.find(\"/bin/sh\", file=\"libc\")\n\nvalue_address = d.memory.find(0x1234, file=\"stack\", start=d.regs.rsp)\n","boost":4},{"location":"basics/memory_access/#searching-pointers","title":"Searching Pointers","text":"The memory attribute of the Debugger object also allows you to search for values in a source memory map that are pointers to another memory map. One use case for this would be identifying potential leaks of memory addresses when libdebug is used for exploitation tasks.
Function Signature
d.memory.find_pointers(\n    where: int | str = \"*\",\n    target: int | str = \"*\",\n    step: int = 1,\n) -> list[tuple[int, int]]:\n Parameters:
where (int | str): the memory map in which to search for references. Defaults to \"*\", which means all memory maps.\ntarget (int | str): the memory map whose pointers we want to find. Defaults to \"*\", which means all memory maps.\nstep (int): the interval step size while iterating over the memory buffer. Defaults to 1.\nReturns:
Pointers (list[tuple[int, int]]): a list of tuples containing the address where the pointer was found and the pointer itself.\nUsage Example
pointers = d.memory.find_pointers(\"stack\", \"heap\")\n\nfor src, dst in pointers:\n print(f\"Heap leak to {hex(dst)} found at {hex(src)}\")\n","boost":4},{"location":"basics/memory_access/#fast-and-slow-memory-access","title":"Fast and Slow Memory Access","text":"libdebug supports two different methods to access memory on Linux, controlled by the fast_memory parameter of the Debugger object. The two methods are:
fast_memory=False uses the ptrace system call interface, requiring a context switch from user space to kernel space for each architectural word-size read.\nfast_memory=True reduces the access latency by relying on Linux's procfs, which exposes a virtual file as an interface to the process memory.\nAs of version 0.8 (Chutoro Nigiri), fast_memory=True is the default. The following examples show how to change the memory access method when creating the Debugger object or at runtime.
d = debugger(\"test\", fast_memory=False)\n d.fast_memory = False\n","boost":4},{"location":"basics/register_access/","title":"Register Access","text":"libdebug offers a simple register access interface for supported architectures. Registers are accessible through the regs attribute of the Debugger object or the Thread Context.
Multithreading
In multi-threaded debugging, the regs attribute of the Debugger object will return the registers of the main thread.
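To read the registers of the other threads, you can go through each Thread Context instead. A sketch, assuming a stopped multi-threaded target (the binary name is hypothetical):

```python
from libdebug import debugger

d = debugger("program")  # hypothetical multi-threaded target
d.run()
d.cont()
d.interrupt()

# d.regs mirrors the main thread; per-thread registers live on each
# ThreadContext in d.threads.
for thread in d.threads:
    print(f"TID {thread.tid}: RIP = {hex(thread.regs.rip)}")
```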
The following is an example of how to interact with the RAX register in a debugger object on AMD64:
Reading\nread_value = d.regs.rax\nWriting\nd.regs.rax = read_value + 1\nNote that the register values are read and written as Python integers. This is true for all registers except for the floating-point ones, which are coherent with their type. To avoid confusion, we list the available registers and their types below. The related registers listed in the tables are accessible as well.
AMD64i386AArch64 Register Type Related Description General Purpose RAX Integer EAX, AX, AH, AL Accumulator register RBX Integer EBX, BX, BH, BL Base register RCX Integer ECX, CX, CH, CL Counter register RDX Integer EDX, DX, DH, DL Data register RSI Integer ESI, SI Source index for string operations RDI Integer EDI, DI Destination index for string operations RBP Integer EBP, BP Base pointer (frame pointer) RSP Integer ESP, SP Stack pointer R8 Integer R8D, R8W, R8B General-purpose register R9 Integer R9D, R9W, R9B General-purpose register R10 Integer R10D, R10W, R10B General-purpose register R11 Integer R11D, R11W, R11B General-purpose register R12 Integer R12D, R12W, R12B General-purpose register R13 Integer R13D, R13W, R13B General-purpose register R14 Integer R14D, R14W, R14B General-purpose register R15 Integer R15D, R15W, R15B General-purpose register RIP Integer EIP Instruction pointer Flags EFLAGS Integer Flags register Segment Registers CS Integer Code segment DS Integer Data segment ES Integer Extra segment FS Integer Additional segment GS Integer Additional segment SS Integer Stack segment FS_BASE Integer FS segment base address GS_BASE Integer GS segment base address Vector Registers XMM0 Integer Lower 128 bits of YMM0/ZMM0 XMM1 Integer Lower 128 bits of YMM1/ZMM1 XMM2 Integer Lower 128 bits of YMM2/ZMM2 XMM3 Integer Lower 128 bits of YMM3/ZMM3 XMM4 Integer Lower 128 bits of YMM4/ZMM4 XMM5 Integer Lower 128 bits of YMM5/ZMM5 XMM6 Integer Lower 128 bits of YMM6/ZMM6 XMM7 Integer Lower 128 bits of YMM7/ZMM7 XMM8 Integer Lower 128 bits of YMM8/ZMM8 XMM9 Integer Lower 128 bits of YMM9/ZMM9 XMM10 Integer Lower 128 bits of YMM10/ZMM10 XMM11 Integer Lower 128 bits of YMM11/ZMM11 XMM12 Integer Lower 128 bits of YMM12/ZMM12 XMM13 Integer Lower 128 bits of YMM13/ZMM13 XMM14 Integer Lower 128 bits of YMM14/ZMM14 XMM15 Integer Lower 128 bits of YMM15/ZMM15 YMM0 Integer 256-bit AVX extension of XMM0 YMM1 Integer 256-bit AVX extension of XMM1 YMM2 Integer 256-bit AVX 
extension of XMM2 YMM3 Integer 256-bit AVX extension of XMM3 YMM4 Integer 256-bit AVX extension of XMM4 YMM5 Integer 256-bit AVX extension of XMM5 YMM6 Integer 256-bit AVX extension of XMM6 YMM7 Integer 256-bit AVX extension of XMM7 YMM8 Integer 256-bit AVX extension of XMM8 YMM9 Integer 256-bit AVX extension of XMM9 YMM10 Integer 256-bit AVX extension of XMM10 YMM11 Integer 256-bit AVX extension of XMM11 YMM12 Integer 256-bit AVX extension of XMM12 YMM13 Integer 256-bit AVX extension of XMM13 YMM14 Integer 256-bit AVX extension of XMM14 YMM15 Integer 256-bit AVX extension of XMM15 ZMM0 Integer 512-bit AVX-512 extension of XMM0 ZMM1 Integer 512-bit AVX-512 extension of XMM1 ZMM2 Integer 512-bit AVX-512 extension of XMM2 ZMM3 Integer 512-bit AVX-512 extension of XMM3 ZMM4 Integer 512-bit AVX-512 extension of XMM4 ZMM5 Integer 512-bit AVX-512 extension of XMM5 ZMM6 Integer 512-bit AVX-512 extension of XMM6 ZMM7 Integer 512-bit AVX-512 extension of XMM7 ZMM8 Integer 512-bit AVX-512 extension of XMM8 ZMM9 Integer 512-bit AVX-512 extension of XMM9 ZMM10 Integer 512-bit AVX-512 extension of XMM10 ZMM11 Integer 512-bit AVX-512 extension of XMM11 ZMM12 Integer 512-bit AVX-512 extension of XMM12 ZMM13 Integer 512-bit AVX-512 extension of XMM13 ZMM14 Integer 512-bit AVX-512 extension of XMM14 ZMM15 Integer 512-bit AVX-512 extension of XMM15 Floating Point (Legacy x87) ST(0)-ST(7) Floating Point x87 FPU data registers MM0-MM7 Integer MMX registers Register Type Related Description General Purpose EAX Integer AX, AH, AL Accumulator register EBX Integer BX, BH, BL Base register ECX Integer CX, CH, CL Counter register EDX Integer DX, DH, DL Data register ESI Integer SI Source index for string operations EDI Integer DI Destination index for string operations EBP Integer BP Base pointer (frame pointer) ESP Integer SP Stack pointer EIP Integer IP Instruction pointer Flags EFLAGS Integer Flags register Segment Registers CS Integer Code segment DS Integer Data segment ES Integer 
Extra segment FS Integer Additional segment GS Integer Additional segment SS Integer Stack segment Floating Point Registers ST(0)-ST(7) Floating Point x87 FPU data registers Vector Registers XMM0 Integer Lower 128 bits of YMM0/ZMM0 XMM1 Integer Lower 128 bits of YMM1/ZMM1 XMM2 Integer Lower 128 bits of YMM2/ZMM2 XMM3 Integer Lower 128 bits of YMM3/ZMM3 XMM4 Integer Lower 128 bits of YMM4/ZMM4 XMM5 Integer Lower 128 bits of YMM5/ZMM5 XMM6 Integer Lower 128 bits of YMM6/ZMM6 XMM7 Integer Lower 128 bits of YMM7/ZMM7 YMM0 Integer 256-bit AVX extension of XMM0 YMM1 Integer 256-bit AVX extension of XMM1 YMM2 Integer 256-bit AVX extension of XMM2 YMM3 Integer 256-bit AVX extension of XMM3 YMM4 Integer 256-bit AVX extension of XMM4 YMM5 Integer 256-bit AVX extension of XMM5 YMM6 Integer 256-bit AVX extension of XMM6 YMM7 Integer 256-bit AVX extension of XMM7 Register Type Alias(es) Description General Purpose X0 Integer W0 Function result or argument X1 Integer W1 Function result or argument X2 Integer W2 Function result or argument X3 Integer W3 Function result or argument X4 Integer W4 Function result or argument X5 Integer W5 Function result or argument X6 Integer W6 Function result or argument X7 Integer W7 Function result or argument X8 Integer W8 Indirect result location (also called \"IP0\") X9 Integer W9 Temporary register X10 Integer W10 Temporary register X11 Integer W11 Temporary register X12 Integer W12 Temporary register X13 Integer W13 Temporary register X14 Integer W14 Temporary register X15 Integer W15 Temporary register (also called \"IP1\") X16 Integer W16 Platform Register (often used as scratch) X17 Integer W17 Platform Register (often used as scratch) X18 Integer W18 Platform Register X19 Integer W19 Callee-saved register X20 Integer W20 Callee-saved register X21 Integer W21 Callee-saved register X22 Integer W22 Callee-saved register X23 Integer W23 Callee-saved register X24 Integer W24 Callee-saved register X25 Integer W25 Callee-saved register X26 
Integer W26 Callee-saved register X27 Integer W27 Callee-saved register X28 Integer W28 Callee-saved register X29 Integer W29, FP Frame pointer X30 Integer W30, LR Link register (return address) XZR Integer WZR, ZR Zero register (always reads as zero) SP Integer Stack pointer PC Integer Program counter Flags PSTATE Integer Processor state in exception handling Vector Registers (SIMD/FP) V0 Integer Vector or scalar register V1 Integer Vector or scalar register V2 Integer Vector or scalar register V3 Integer Vector or scalar register V4 Integer Vector or scalar register V5 Integer Vector or scalar register V6 Integer Vector or scalar register V7 Integer Vector or scalar register V8 Integer Vector or scalar register V9 Integer Vector or scalar register V10 Integer Vector or scalar register V11 Integer Vector or scalar register V12 Integer Vector or scalar register V13 Integer Vector or scalar register V14 Integer Vector or scalar register V15 Integer Vector or scalar register V16 Integer Vector or scalar register V17 Integer Vector or scalar register V18 Integer Vector or scalar register V19 Integer Vector or scalar register V20 Integer Vector or scalar register V21 Integer Vector or scalar register V22 Integer Vector or scalar register V23 Integer Vector or scalar register V24 Integer Vector or scalar register V25 Integer Vector or scalar register V26 Integer Vector or scalar register V27 Integer Vector or scalar register V28 Integer Vector or scalar register V29 Integer Vector or scalar register V30 Integer Vector or scalar register V31 Integer Vector or scalar register Q0 Integer Vector or scalar register Q1 Integer Vector or scalar register Q2 Integer Vector or scalar register Q3 Integer Vector or scalar register Q4 Integer Vector or scalar register Q5 Integer Vector or scalar register Q6 Integer Vector or scalar register Q7 Integer Vector or scalar register Q8 Integer Vector or scalar register Q9 Integer Vector or scalar register Q10 Integer Vector or scalar 
register Q11 Integer Vector or scalar register Q12 Integer Vector or scalar register Q13 Integer Vector or scalar register Q14 Integer Vector or scalar register Q15 Integer Vector or scalar register Q16 Integer Vector or scalar register Q17 Integer Vector or scalar register Q18 Integer Vector or scalar register Q19 Integer Vector or scalar register Q20 Integer Vector or scalar register Q21 Integer Vector or scalar register Q22 Integer Vector or scalar register Q23 Integer Vector or scalar register Q24 Integer Vector or scalar register Q25 Integer Vector or scalar register Q26 Integer Vector or scalar register Q27 Integer Vector or scalar register Q28 Integer Vector or scalar register Q29 Integer Vector or scalar register Q30 Integer Vector or scalar register Q31 Integer Vector or scalar register D0 Integer Vector or scalar register D1 Integer Vector or scalar register D2 Integer Vector or scalar register D3 Integer Vector or scalar register D4 Integer Vector or scalar register D5 Integer Vector or scalar register D6 Integer Vector or scalar register D7 Integer Vector or scalar register D8 Integer Vector or scalar register D9 Integer Vector or scalar register D10 Integer Vector or scalar register D11 Integer Vector or scalar register D12 Integer Vector or scalar register D13 Integer Vector or scalar register D14 Integer Vector or scalar register D15 Integer Vector or scalar register D16 Integer Vector or scalar register D17 Integer Vector or scalar register D18 Integer Vector or scalar register D19 Integer Vector or scalar register D20 Integer Vector or scalar register D21 Integer Vector or scalar register D22 Integer Vector or scalar register D23 Integer Vector or scalar register D24 Integer Vector or scalar register D25 Integer Vector or scalar register D26 Integer Vector or scalar register D27 Integer Vector or scalar register D28 Integer Vector or scalar register D29 Integer Vector or scalar register D30 Integer Vector or scalar register D31 Integer Vector or 
scalar register S0 Integer Vector or scalar register S1 Integer Vector or scalar register S2 Integer Vector or scalar register S3 Integer Vector or scalar register S4 Integer Vector or scalar register S5 Integer Vector or scalar register S6 Integer Vector or scalar register S7 Integer Vector or scalar register S8 Integer Vector or scalar register S9 Integer Vector or scalar register S10 Integer Vector or scalar register S11 Integer Vector or scalar register S12 Integer Vector or scalar register S13 Integer Vector or scalar register S14 Integer Vector or scalar register S15 Integer Vector or scalar register S16 Integer Vector or scalar register S17 Integer Vector or scalar register S18 Integer Vector or scalar register S19 Integer Vector or scalar register S20 Integer Vector or scalar register S21 Integer Vector or scalar register S22 Integer Vector or scalar register S23 Integer Vector or scalar register S24 Integer Vector or scalar register S25 Integer Vector or scalar register S26 Integer Vector or scalar register S27 Integer Vector or scalar register S28 Integer Vector or scalar register S29 Integer Vector or scalar register S30 Integer Vector or scalar register S31 Integer Vector or scalar register H0 Integer Vector or scalar register H1 Integer Vector or scalar register H2 Integer Vector or scalar register H3 Integer Vector or scalar register H4 Integer Vector or scalar register H5 Integer Vector or scalar register H6 Integer Vector or scalar register H7 Integer Vector or scalar register H8 Integer Vector or scalar register H9 Integer Vector or scalar register H10 Integer Vector or scalar register H11 Integer Vector or scalar register H12 Integer Vector or scalar register H13 Integer Vector or scalar register H14 Integer Vector or scalar register H15 Integer Vector or scalar register H16 Integer Vector or scalar register H17 Integer Vector or scalar register H18 Integer Vector or scalar register H19 Integer Vector or scalar register H20 Integer Vector or 
scalar register H21 Integer Vector or scalar register H22 Integer Vector or scalar register H23 Integer Vector or scalar register H24 Integer Vector or scalar register H25 Integer Vector or scalar register H26 Integer Vector or scalar register H27 Integer Vector or scalar register H28 Integer Vector or scalar register H29 Integer Vector or scalar register H30 Integer Vector or scalar register H31 Integer Vector or scalar register B0 Integer Vector or scalar register B1 Integer Vector or scalar register B2 Integer Vector or scalar register B3 Integer Vector or scalar register B4 Integer Vector or scalar register B5 Integer Vector or scalar register B6 Integer Vector or scalar register B7 Integer Vector or scalar register B8 Integer Vector or scalar register B9 Integer Vector or scalar register B10 Integer Vector or scalar register B11 Integer Vector or scalar register B12 Integer Vector or scalar register B13 Integer Vector or scalar register B14 Integer Vector or scalar register B15 Integer Vector or scalar register B16 Integer Vector or scalar register B17 Integer Vector or scalar register B18 Integer Vector or scalar register B19 Integer Vector or scalar register B20 Integer Vector or scalar register B21 Integer Vector or scalar register B22 Integer Vector or scalar register B23 Integer Vector or scalar register B24 Integer Vector or scalar register B25 Integer Vector or scalar register B26 Integer Vector or scalar register B27 Integer Vector or scalar register B28 Integer Vector or scalar register B29 Integer Vector or scalar register B30 Integer Vector or scalar register B31 Integer Vector or scalar register Hardware Support
libdebug only exposes registers which are available on your CPU model. For AMD64, the list of available AVX registers is determined by checking the CPU capabilities. If you believe your CPU supports AVX registers but they are not available, we encourage you to open an Issue with your hardware details.
","boost":4},{"location":"basics/register_access/#filtering-registers","title":"Filtering Registers","text":"The regs field of the Debugger object or the Thread Context can also be used to filter registers with specific values.
Function Signature
d.regs.filter(value: float) -> list[str]:\n The filtering routine will look for the given value in both integer and floating point registers.
Example of Filtering Registers
d.regs.rax = 0x1337\n\n# Filter the value 0x1337 in the registers\nresults = d.regs.filter(0x1337)\nprint(f\"Found in: {results}\")\n","boost":4},{"location":"basics/running_an_executable/","title":"Running an Executable","text":"You have created your first debugger object, and now you want to run the executable. Calling the run() method will spawn a new child process and prepare it for the execution of your binary.
from libdebug import debugger\n\nd = debugger(\"program\")\nd.run()\n At this point, the process execution is stopped, waiting for your commands. A few things to keep in mind
You cannot set breakpoints before calling d.run(). When execution is resumed, chances are that your process will need to take input and produce output. To interact with the standard input and output of the process, you can use the PipeManager returned by the run() function.
from libdebug import debugger\n\nd = debugger(\"program\")\npipe = d.run()\n\nd.cont()\nprint(pipe.recvline().decode())\nd.wait()\n All pipe receive-like methods have a timeout parameter that you can set. The default value, timeout_default, can be set globally as a parameter of the PipeManager object. By default, this value is set to 2 seconds.
Changing the global timeout
pipe = d.run()\n\npipe.timeout_default = 10 # (1)!\n You can interact with the process's pipe manager using the following methods:
Method Descriptionrecv Receives at most numb bytes from the target's stdout.Parameters:- numb (int) \u00a0\u00a0\u00a0 [default = 4096]- timeout (int) \u00a0\u00a0\u00a0 [default = timeout_default] recverr Receives at most numb bytes from the target's stderr.Parameters:- numb (int) \u00a0\u00a0\u00a0 [default = 4096]- timeout (int) \u00a0\u00a0\u00a0 [default = timeout_default] recvuntil Receives data from stdout until a specified delimiter is encountered for a certain number of occurrences.Parameters:- delims (bytes)- occurrences (int) \u00a0\u00a0\u00a0 [default = 1]- drop (bool) \u00a0\u00a0\u00a0 [default = False]- timeout (int) \u00a0\u00a0\u00a0 [default = timeout_default]- optional (bool) \u00a0\u00a0\u00a0 [default = False] recverruntil Receives data from stderr until a specified delimiter is encountered for a certain number of occurrences.Parameters:- delims (bytes)- occurrences (int) \u00a0\u00a0\u00a0 [default = 1]- drop (bool) \u00a0\u00a0\u00a0 [default = False]- timeout (int) \u00a0\u00a0\u00a0 [default = timeout_default]- optional (bool) \u00a0\u00a0\u00a0 [default = False] recvline Receives numlines lines from the target's stdout.Parameters:- numlines (int) \u00a0\u00a0\u00a0 [default = 1]- drop (bool) \u00a0\u00a0\u00a0 [default = True]- timeout (int) \u00a0\u00a0\u00a0 [default = timeout_default]- optional (bool) \u00a0\u00a0\u00a0 [default = False] recverrline Receives numlines lines from the target's stderr.Parameters:- numlines (int) \u00a0\u00a0\u00a0 [default = 1]- drop (bool) \u00a0\u00a0\u00a0 [default = True]- timeout (int) \u00a0\u00a0\u00a0 [default = timeout_default]- optional (bool) \u00a0\u00a0\u00a0 [default = False] send Sends data to the target's stdin.Parameters:- data (bytes) sendafter Sends data after receiving a specified number of occurrences of a delimiter from stdout.Parameters:- delims (bytes)- data (bytes)- occurrences (int) \u00a0\u00a0\u00a0 [default = 1]- drop (bool) \u00a0\u00a0\u00a0 [default = False]- timeout (int) 
\u00a0\u00a0\u00a0 [default = timeout_default]- optional (bool) \u00a0\u00a0\u00a0 [default = False] sendline Sends data followed by a newline to the target's stdin.Parameters:- data (bytes) sendlineafter Sends a line of data after receiving a specified number of occurrences of a delimiter from stdout.Parameters:- delims (bytes)- data (bytes)- occurrences (int) \u00a0\u00a0\u00a0 [default = 1]- drop (bool) \u00a0\u00a0\u00a0 [default = False]- timeout (int) \u00a0\u00a0\u00a0 [default = timeout_default]- optional (bool) \u00a0\u00a0\u00a0 [default = False] close Closes the connection to the target. interactive Enters interactive mode, allowing manual send/receive operations with the target. Read more in the dedicated section.Parameters:- prompt (str) \u00a0\u00a0\u00a0 [default = \"$ \"]- auto_quit (bool) \u00a0\u00a0\u00a0 [default = False] When process is stopped
When the process is stopped, the PipeManager will not be able to receive new (unbuffered) data from the target. For this reason, the API includes a parameter called optional.
When set to True, libdebug will not necessarily expect to receive data from the process when it is stopped. When set to False, any recv-like instruction (including sendafter and sendlineafter) will fail with an exception when the process is not running.
Operations on stdin like send and sendline are not affected by this limitation, since the kernel will buffer the data until the process is resumed.
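The semantics of the optional parameter can be sketched as follows (a plain-Python model of the behavior described above, not libdebug's actual code): a recv-like call on a stopped process returns whatever is already buffered, and when nothing is buffered it either degrades gracefully or raises, depending on optional.

```python
# Conceptual model only: the real PipeManager reads from the target's pipes.

def recv_from_stopped_process(buffered: bytes, optional: bool) -> bytes:
    """Model a recv-like call issued while the target is stopped."""
    if buffered:
        return buffered  # already-buffered data is always returned
    if optional:
        return b""       # optional=True: tolerate the absence of new data
    raise RuntimeError("no data available: the process is not running")

assert recv_from_stopped_process(b"hello\n", optional=False) == b"hello\n"
assert recv_from_stopped_process(b"", optional=True) == b""
```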
The PipeManager contains a method called interactive() that allows you to directly interact with the process's standard I/O. This method will print characters from standard output and error and read your inputs, letting you interact naturally with the process. The interactive() method is blocking, so the execution of the script will wait for the user to terminate the interactive session. To quit an interactive session, you can press Ctrl+C or Ctrl+D.
Function Signature
pipe.interactive(prompt: str = prompt_default, auto_quit: bool = False):\n The prompt parameter sets the line prefix in the terminal (e.g. \"$ \" and \"> \" will produce $ cat flag and > cat flag respectively). By default, it is set to \"$ \". The auto_quit parameter, when set to True, will automatically quit the interactive session when the process is stopped.
If any of the file descriptors of standard input, output, or error are closed, a warning will be printed.
","boost":4},{"location":"basics/running_an_executable/#attaching-to-a-running-process","title":"Attaching to a Running Process","text":"If you want to attach to a running process instead of spawning a child, you can use the attach() method in the Debugger object. This method will attach to the process with the specified PID.
from libdebug import debugger\n\nd = debugger(\"test\")\n\npid = 1234\n\nd.attach(pid)\n The process will stop upon attachment, waiting for your commands.
Ptrace Scope
libdebug uses the ptrace system call to interact with the process. For security reasons, this system call is limited by the kernel according to a ptrace_scope parameter. Different systems have different default values for this parameter. If the ptrace system call is not allowed, the attach() method will raise an exception notifying you of this issue.
By default, libdebug redirects the standard input, output, and error of the process to pipes. This is how you can interact with these file descriptors using I/O commands. If you want to disable this behavior, you can set the redirect_pipes parameter of the run() method to False.
Usage
d.run(redirect_pipes=False)\n When set to False, the standard input, output, and error of the process will not be redirected to pipes. This means that you will not be able to interact with the process using the PipeManager object, and libdebug will act as a transparent proxy between the executable and its standard I/O.
Currently, libdebug only supports the GNU/Linux Operating System.
","boost":4},{"location":"basics/supported_systems/#architectures","title":"Architectures","text":"Architecture Alias Support x86_64 AMD64 Stable i386 over AMD64 32-bit compatibility mode Alpha i386 IA-32 Alpha ARM 64-bit AArch64 Beta ARM 32-bit ARM32 Not Supported Forcing a specific architecture
If for any reason you need to force libdebug to use a specific architecture (e.g., corrupted ELF), you can do so by setting the arch parameter in the Debugger object. For example, to force the debugger to use the x86_64 architecture, you can use the following code:
from libdebug import debugger\n\nd = debugger(\"program\", ...)\n\nd.arch = \"amd64\"\n","boost":4},{"location":"blog/","title":"Blogposts","text":""},{"location":"blog/2024/10/13/a-new-documentation/","title":"A New Documentation","text":"Hello, World! Thank you for using libdebug. We are proud to roll out our new documentation along with version 0.7.0. This new documentation is powered by MkDocs and Material for MkDocs. We hope you find it more intuitive and easier to navigate.
We have expanded the documentation to cover more topics and provide more examples. We also tried to highlight some common difficulties that have been reported. Also, thanks to the mkdocs search plugin, you can more easily find what you are looking for, both in the documentation and pages generated from Pydoc.
We hope you enjoy the new documentation. If you find any mistakes or would like to suggest improvements, please let us know by opening an issue on our GitHub repository.
"},{"location":"blog/2024/10/14/see-you-at-acm-ccs-2024/","title":"See you at ACM CCS 2024!","text":"We are excited to announce that we will be presenting a poster on libdebug at the 2024 ACM Conference on Computer and Communications Security (ACM CCS 2024). The conference will be held in Salt Lake City, Utah. The poster session is October 16th at 16:30. We will be presenting the rationale behind libdebug and demonstrating how it can be used in some cool use cases.
If you are attending the conference, please stop by our poster and say hello. We would love to meet you and hear about your ideas. We are also looking forward to hearing about your research and how libdebug can help you in your work. Come by and grab some swag!
Link to the conference: ACM CCS 2024 Link to the poster information: libdebug Poster Link to the proceedings: ACM Digital Library
"},{"location":"blog/2025/03/26/release-08---chutoro-nigiri/","title":"Release 0.8 - Chutoro Nigiri","text":"Hello, debuggers! It's been a while since our last release, but we are excited to announce libdebug version 0.8, codename Chutoro Nigiri . This release brings several new features, improvements, and bug fixes. Here is a summary of the changes:
"},{"location":"blog/2025/03/26/release-08---chutoro-nigiri/#features","title":"Features","text":"fork(), attaching new debuggers to them. This behavior can be customized with the Debugger parameter follow_children.d.memory.find_pointers to identify all pointers in a memory region that reference another region, useful for detecting memory leaks in cybersecurity applications.fast_memory=True): Improves performance of memory access. Can be disabled using the fast_memory parameter in Debugger.d.gdb(open_in_new_process=True): Ensures GDB opens correctly in a newly detected terminal without user-defined commands. zombie attribute in ThreadContext: Allows users to check if a thread is a zombie.SymbolList Slicing: Properly supports slice operations.debuginfod Handling: Enhanced caching logic when a file is not available on debuginfod, improving compatibility with other binaries that use debuginfod on your system.SyscallHandler, SignalCatcher).d.gdb for Edge Cases: Fixed several inconsistencies in execution.step, finish, and next Operations in Callbacks: Now executed correctly.This script was used to showcase the power of libdebug during the Workshop at the CyberChallenge.IT 2024 Finals. An explanation of the script, along with a brief introduction to libdebug, is available in the official stream of the event, starting from timestamp 2:17:00.
from libdebug import debugger\nfrom string import ascii_letters, digits\n\n# Enable the escape_antidebug option to bypass the ptrace call\nd = debugger(\"main\", escape_antidebug=True)\n\ndef callback(_, __):\n # This will automatically issue a continue when the breakpoint is hit\n pass\n\ndef on_enter_nanosleep(t, _):\n # This sets every argument to NULL to make the syscall fail\n t.syscall_arg0 = 0\n t.syscall_arg1 = 0\n t.syscall_arg2 = 0\n t.syscall_arg3 = 0\n\nalphabet = ascii_letters + digits + \"_{}\"\n\nflag = b\"\"\nbest_hit_count = 0\n\nwhile True:\n for c in alphabet:\n r = d.run()\n\n # Any time we call run() we have to reset the breakpoint and syscall handler\n bp = d.breakpoint(0x13e1, hardware=True, callback=callback, file=\"binary\")\n d.handle_syscall(\"clock_nanosleep\", on_enter=on_enter_nanosleep)\n\n d.cont()\n\n r.sendline(flag + c.encode())\n\n # This makes the debugger wait for the process to terminate\n d.wait()\n\n response = r.recvline()\n\n # `run()` will automatically kill any still-running process, but it's good practice to do it manually\n d.kill()\n\n if b\"Yeah\" in response:\n # The flag is correct\n flag += c.encode()\n print(flag)\n break\n\n if bp.hit_count > best_hit_count:\n # We have found a new flag character\n best_hit_count = bp.hit_count\n flag += c.encode()\n print(flag)\n break\n\n if c == \"}\":\n break\n\nprint(flag)\n","boost":0.8},{"location":"code_examples/example_nlinks/","title":"DEF CON Quals 2023 - nlinks","text":"This is a script that solves the challenge nlinks from DEF CON Quals 2023. Please find the binary executables here.
def get_passsphrase_from_class_1_binaries(previous_flag):\n flag = b\"\"\n\n d = debugger(\"CTF/1\")\n r = d.run()\n\n bp = d.breakpoint(0x7EF1, hardware=True, file=\"binary\")\n\n d.cont()\n\n r.recvuntil(b\"Passphrase:\\n\")\n\n # We send a fake flag after the valid password\n r.send(previous_flag + b\"a\" * 8)\n\n for _ in range(8):\n # Here we reached the breakpoint\n if not bp.hit_on(d):\n print(\"Here we should have hit the breakpoint\")\n\n offset = ord(\"a\") ^ d.regs.rbp\n d.regs.rbp = d.regs.r13\n\n # We calculate the correct character value and append it to the flag\n flag += (offset ^ d.regs.r13).to_bytes(1, \"little\")\n\n d.cont()\n\n r.recvline()\n\n d.kill()\n\n # Here the value of flag is b\"\\x00\\x006\\x00\\x00\\x00(\\x00\"\n return flag\n\ndef get_passsphrase_from_class_2_binaries(previous_flag):\n bitmap = {}\n lastpos = 0\n flag = b\"\"\n\n d = debugger(\"CTF/2\")\n r = d.run()\n\n bp1 = d.breakpoint(0xD8C1, hardware=True, file=\"binary\")\n bp2 = d.breakpoint(0x1858, hardware=True, file=\"binary\")\n bp3 = d.breakpoint(0xDBA1, hardware=True, file=\"binary\")\n\n d.cont()\n\n r.recvuntil(b\"Passphrase:\\n\")\n r.send(previous_flag + b\"a\" * 8)\n\n while True:\n if d.regs.rip == bp1.address:\n # Prepare for the next element in the bitmap\n lastpos = d.regs.rbp\n d.regs.rbp = d.regs.r13 + 1\n elif d.regs.rip == bp2.address:\n # Update the bitmap\n bitmap[d.regs.r12 & 0xFF] = lastpos & 0xFF\n elif d.regs.rip == bp3.address:\n # Use the bitmap to calculate the expected character\n d.regs.rbp = d.regs.r13\n wanted = d.regs.rbp\n needed = 0\n for i in range(8):\n if wanted & (2**i):\n needed |= bitmap[2**i]\n flag += chr(needed).encode()\n\n if bp3.hit_count == 8:\n # We have found all the characters\n d.cont()\n break\n\n d.cont()\n\n d.kill()\n\n # Here the value of flag is b\"\\x00\\x00\\x00\\x01\\x00\\x00a\\x00\"\n return flag\n\ndef get_passsphrase_from_class_3_binaries():\n flag = b\"\"\n\n d = debugger(\"CTF/0\")\n r = d.run()\n\n bp = 
d.breakpoint(0x91A1, hardware=True, file=\"binary\")\n\n d.cont()\n\n r.send(b\"a\" * 8)\n\n for _ in range(8):\n\n # Here we reached the breakpoint\n if not bp.hit_on(d):\n print(\"Here we should have hit the breakpoint\")\n\n offset = ord(\"a\") - d.regs.rbp\n d.regs.rbp = d.regs.r13\n\n # We calculate the correct character value and append it to the flag\n flag += chr((d.regs.r13 + offset) % 256).encode(\"latin-1\")\n\n d.cont()\n\n r.recvline()\n\n d.kill()\n\n # Here the value of flag is b\"BM8\\xd3\\x02\\x00\\x00\\x00\"\n return flag\n\ndef run_nlinks():\n flag0 = get_passsphrase_from_class_3_binaries()\n flag1 = get_passsphrase_from_class_1_binaries(flag0)\n flag2 = get_passsphrase_from_class_2_binaries(flag1)\n\n print(flag0, flag1, flag2)\n","boost":0.8},{"location":"code_examples/examples_index/","title":"Examples Index","text":"This chapter contains a collection of examples showcasing the power of libdebug in various scenarios. Each example is a script that uses the library to solve a specific challenge or demonstrate a particular feature.
","boost":1},{"location":"code_examples/examples_sudo_kurl/","title":"Execution Hijacking Example - TRX CTF 2025","text":"This code example shows how to hijack the execution flow of the program to retrieve the state of a Sudoku game and solve it with Z3. This is a challenge from the TRX CTF 2025. The full writeup, written by Luca Padalino (padawan), can be found here.
","boost":1},{"location":"code_examples/examples_sudo_kurl/#context-of-the-challenge","title":"Context of the challenge","text":"The attachment is an AMD64 ELF binary that simulates a futuristic scenario where the New Roman Empire faces alien invaders. Upon execution, the program prompts users to deploy legions by specifying row and column indices, along with troop values, within a 25x25 grid. The goal is to determine the correct deployment strategy to secure victory against the alien threat. The constraints for the deployment are actually those of a Sudoku game. The challenge is to solve the Sudoku puzzle to deploy the legions correctly.
The following table summarizes the main functions and their roles within the binary:
Function Description main() Prints the initial welcome message and then calls the game loop by invokingplay(). play() Implements the main game loop: it repeatedly validates the board state via isValid(), collects user input using askInput(), and upon receiving the win-check signal (-1), verifies the board via checkWin(). Depending on the result, it either displays a defeat message or computes and prints the flag via getFlag(). isValid(board) Checks the board\u2019s validity (a 25\u00d725 grid) by verifying that each row, column, and 5\u00d75 sub-grid has correct values without duplicates\u2014akin to a Sudoku verification. askInput(board) Prompts the user to input a row, column, and number of troops (values between 1 and 25). It updates the board if the target cell is empty or shows an error if the cell is already occupied. Using -1 for the row index signals that the user wants to check for a win. checkWin(board) Scans the board to ensure that no cell contains a 0 and that the board remains valid. It returns a status indicating whether the win condition has been met. getFlag(board) Processes the board along with an internal vector (named A) by splitting it into segments, performing matrix\u2013vector multiplications (via matrixVectorMultiply()), and converting the resulting numbers into characters to form the flag string. matrixVectorMultiply(matrix, vector) Multiplies a matrix with a vector and returns the result. This operation is used within getFlag() to transform part of the internal vector into a sequence that contributes to the flag. This table provides an at-a-glance reference to the main functions and their roles within the binary.
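The matrix-vector multiplication performed by matrixVectorMultiply() is the standard one; as a quick plain-Python reference (an illustration of the operation, not the binary's code):

```python
def matrix_vector_multiply(matrix: list[list[int]], vector: list[int]) -> list[int]:
    """Standard matrix-vector product: result[i] = sum_j matrix[i][j] * vector[j]."""
    return [sum(row[j] * vector[j] for j in range(len(vector))) for row in matrix]

# Small example: a 2x3 matrix times a length-3 vector
m = [[1, 2, 3],
     [4, 5, 6]]
v = [7, 8, 9]
print(matrix_vector_multiply(m, v))  # [50, 122]
```

Inside getFlag(), products of this kind transform segments of the board and the internal vector A into the numbers that are finally converted into the flag's characters.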
","boost":1},{"location":"code_examples/examples_sudo_kurl/#the-solution","title":"The solution","text":"The following is the initial state of the Sudoku board retrieved by the script:
initial_board = [\n 0,0,0,21,0,11,0,0,3,24,9,20,23,0,7,22,0,5,18,0,15,2,16,13,0,\n 24,4,0,20,15,0,0,5,0,16,2,25,22,0,17,6,21,0,14,0,8,10,1,19,18,\n 0,0,10,0,5,0,21,19,22,0,3,13,1,16,0,15,4,7,23,24,12,0,14,0,0,\n 0,0,13,6,12,14,4,1,0,0,24,18,19,5,0,0,17,0,0,0,7,22,0,9,21,\n 0,23,19,7,0,0,6,0,0,20,15,4,0,21,0,0,0,0,16,10,24,3,0,17,5,\n 12,15,21,0,0,0,16,6,18,5,7,0,17,3,9,14,0,4,24,22,13,0,0,0,0,\n 14,10,11,2,24,1,25,22,20,0,0,23,6,19,0,13,5,8,12,0,17,0,7,15,9,\n 0,0,0,0,1,24,0,3,15,10,20,8,5,0,25,9,16,19,21,0,2,6,0,12,14,\n 0,0,5,0,3,0,23,14,8,0,0,2,15,0,12,0,7,1,17,6,22,21,4,0,19,\n 13,0,0,4,20,0,0,0,17,0,11,16,0,0,22,0,10,18,15,23,0,25,8,1,3,\n 20,25,7,22,0,23,0,10,1,0,0,0,0,13,4,21,0,6,19,0,3,9,15,8,0,\n 1,24,0,0,0,4,0,20,13,0,8,0,3,0,19,16,2,12,9,5,0,14,10,25,22,\n 0,0,0,0,0,0,0,9,24,0,25,6,0,2,16,4,8,10,0,17,18,7,21,0,1,\n 0,8,0,10,14,16,3,25,6,0,0,7,18,9,11,0,13,0,20,0,19,24,5,0,17,\n 17,3,0,15,9,5,0,0,11,0,0,21,0,0,23,7,0,22,0,0,20,13,12,4,6,\n 15,0,20,11,21,10,0,0,5,22,16,0,0,8,3,24,0,13,2,19,0,0,0,0,0,\n 0,13,8,0,19,17,0,0,0,0,0,12,7,24,6,0,15,23,22,4,14,5,9,0,0,\n 9,1,23,14,4,0,24,0,7,8,19,0,2,0,13,17,3,20,5,0,0,15,0,16,10,\n 10,0,2,12,0,13,18,15,0,0,17,5,0,20,21,8,1,16,0,7,0,19,0,11,0,\n 7,5,17,24,16,20,2,11,19,3,23,0,4,15,1,18,14,0,10,0,0,8,13,21,12,\n 0,20,9,0,7,15,22,17,10,0,12,19,0,0,24,25,0,14,4,8,16,18,2,0,0,\n 19,2,24,8,0,0,20,7,4,0,0,0,9,0,15,5,0,21,11,16,1,0,0,14,25,\n 0,0,25,1,0,8,5,23,14,6,4,17,16,0,2,0,20,0,13,9,10,12,24,7,15,\n 0,0,14,0,0,0,0,0,0,2,6,10,13,0,5,12,0,24,0,0,9,11,0,3,8,\n 6,0,15,0,13,0,0,24,0,9,1,0,8,25,0,10,18,17,0,2,0,4,19,0,23\n]\n The solution script uses libdebug to force the binary to print the state of the board. This state is then parsed and used to create a Z3 model that solves the Sudoku. The solution is then sent back to the binary to solve the game.
from z3 import *\nfrom libdebug import debugger\n\nd = debugger(\"./chall\")\npipe = d.run()\n\n# 0) Hijack the instruction pointer to the displayBoard function\n# Yes...the parenteses are part of the symbol name\nbp = d.breakpoint(\"play()+26\", file=\"binary\", hardware=True)\nwhile not d.dead:\n d.cont()\n d.wait()\n\n if bp.hit_on(d.threads[0]):\n d.step()\n print(\"Hit on play()+0x26\")\n d.regs.rip = d.maps[0].base + 0x2469\n\n# 1) Get information from the board\npipe.recvline(numlines=4)\ninitial_board = pipe.recvline(25).decode().strip().split(\" \")\ninitial_board = [int(x) if x != \".\" else 0 for x in initial_board]\n\nBOARD_SIZE = 25\nBOARD_STEP = 5\n\n# 2) Solve using Z3\ns = Solver()\n\n# 2.1) Create board\nboard = [[Int(f\"board_{i}_{j}\") for i in range(25)] for j in range(25)]\n# 2.2) Add constraints\nfor i in range(BOARD_SIZE):\n for j in range(25):\n # 2.2.1) All the numbers must be between 1 and 25\n s.add(board[i][j] >= 1, board[i][j] <= 25)\n # 2.2.2) If the number is already given, it must be the same \n if initial_board[i*25+j] != 0:\n s.add(board[i][j] == initial_board[i*25+j])\n # 2.2.3) All the numbers in the row must be different\n s.add(Distinct(board[i]))\n # 2.2.4) All the numbers in the column must be different\n s.add(Distinct([board[j][i] for j in range(BOARD_SIZE)]))\n\n# 2.2.5) All the numbers in the 5x5 blocks must be different\nfor i in range(0, BOARD_SIZE, BOARD_STEP):\n for j in range(0, BOARD_SIZE, BOARD_STEP):\n block = [board[i+k][j+l] for k in range(BOARD_STEP) for l in range(BOARD_STEP)]\n s.add(Distinct(block))\n\n# 2.3) Check if the board is solvable\nif s.check() == sat:\n m = s.model()\n\n # 3) Solve the game\n pipe = d.run()\n d.cont()\n pipe.recvuntil(\"deploy.\\n\")\n\n # Send found solution\n for i in range(BOARD_SIZE):\n for j in range(BOARD_SIZE):\n if initial_board[i*25+j] == 0:\n pipe.recvuntil(\": \")\n pipe.sendline(f\"{i+1}\")\n pipe.recvuntil(\": \")\n pipe.sendline(f\"{j+1}\")\n pipe.recvuntil(\": \")\n 
pipe.sendline(str(m[board[i][j]]))\n print(f\"Row {i+1} - Col {j+1}: {m[board[i][j]]}\")\n\n pipe.recvuntil(\": \")\n pipe.sendline(f\"0\")\n\n # Receive final messages and the flag\n print(pipe.recvline().decode())\n print(pipe.recvline().decode())\n print(pipe.recvline().decode())\n print(pipe.recvline().decode())\n print(pipe.recvline().decode())\nelse:\n print(\"No solution found\")\n\nd.terminate()\n","boost":1},{"location":"development/building_libdebug/","title":"Building libdebug from source","text":"Building libdebug from source is a straightforward process. This guide will walk you through the steps required to compile and install libdebug on your system.
","boost":4},{"location":"development/building_libdebug/#resolving-dependencies","title":"Resolving Dependencies","text":"To install libdebug, you first need to have some dependencies that will not be automatically resolved. These dependencies are libraries, utilities and development headers which are required by libdebug to compile its internals during installation.
Ubuntu Arch Linux Fedora Debian openSUSE Alpine Linux sudo apt install -y python3 python3-dev g++ libdwarf-dev libelf-dev libiberty-dev\n sudo pacman -S base-devel python3 elfutils libdwarf binutils\n sudo dnf install -y python3 python3-devel g++ elfutils-devel libdwarf-devel binutils-devel\n sudo apt install -y python3 python3-dev g++ libdwarf-dev libelf-dev libiberty-dev\n sudo zypper install -y gcc-c++ make python3 python3-devel libelf-devel libdwarf-devel binutils-devel\n sudo apk add python3 python3-dev py3-pip linux-headers elfutils-dev libdwarf-dev binutils-dev\n Is your distro missing?
If you are using a Linux distribution that is not included in this section, you can search for equivalent packages for your distro. Chances are the package names in your distribution's repositories differ only by a prefix or suffix.
","boost":4},{"location":"development/building_libdebug/#building","title":"Building","text":"To build libdebug from source, from the root directory of the repository, simply run the following command:
python3 -m pip install .\n Alternatively, without cloning the repository, you can directly install libdebug from the GitHub repository using the following command:
python3 -m pip install git+https://github.com/libdebug/libdebug.git@<branch_or_commit>\n Replace <branch_or_commit> with the desired branch or commit hash you want to install. If not specified, the default branch will be used. Editable Install
If you want to install libdebug in editable mode, allowing you to modify the source code and have those changes reflected immediately, you can use the following command, exclusively from a local clone of the repository:
python3 -m pip install --no-build-isolation -Ceditable.rebuild=true -ve .\n This will ensure that every time you make changes to the source code, they will be immediately available without needing to reinstall the package, even for the compiled C++ extensions.
","boost":4},{"location":"development/building_libdebug/#build-options","title":"Build Options","text":"There are some configurable build options that can be set during the installation process, to avoid linking against certain libraries or to enable/disable specific features. These options can be set using environment variables before running the installation command.
Option Description Default ValueUSE_LIBDWARF Include libdwarf, which is used for symbol resolution and debugging information. True USE_LIBELF Include libelf, which is used for reading ELF files. True USE_LIBIBERTY Include libiberty, which is used for demangling C++ symbols. True Changing these options can be done by setting the environment variable before running the installation command. For example, to disable libdwarf, you can run:
CMAKE_ARGS=-DUSE_LIBDWARF=OFF python3 -m pip install .\n","boost":4},{"location":"development/building_libdebug/#testing-your-installation","title":"Testing Your Installation","text":"We provide a comprehensive suite of tests to ensure that your installation is working correctly. Here's how you can run the tests:
cd test\npython3 run_suite.py <suite>\n We have different test suites available. By default, we run the fast suite, which skips some tests that require a lot of time to run. You can specify which test suite to run using the suite option. The available test suites are:
fast Runs all but a few tests to verify full functionality of the library. slow Runs the complete set of tests, including those that may take longer to execute. stress Runs a set of tests designed to detect issues in multithreaded processes. memory Runs a set of tests designed to detect memory leaks in libdebug.","boost":4},{"location":"from_pydoc/generated/libdebug/","title":"libdebug.libdebug","text":""},{"location":"from_pydoc/generated/libdebug/#libdebug.libdebug.debugger","title":"debugger(argv=[], aslr=True, env=None, escape_antidebug=False, continue_to_binary_entrypoint=True, auto_interrupt_on_command=False, fast_memory=True, kill_on_exit=True, follow_children=True)","text":"This function is used to create a new Debugger object. It returns a Debugger object.
Parameters:
Name Type Description Defaultargv str | list[str] The location of the binary to debug and any arguments to pass to it.
[] aslr bool Whether to enable ASLR. Defaults to True.
True env dict[str, str] The environment variables to use. Defaults to the same environment of the debugging script.
None escape_antidebug bool Whether to automatically attempt to patch antidebugger detectors based on the ptrace syscall.
False continue_to_binary_entrypoint bool Whether to automatically continue to the binary entrypoint. Defaults to True.
True auto_interrupt_on_command bool Whether to automatically interrupt the process when a command is issued. Defaults to False.
False fast_memory bool Whether to use a faster memory reading method. Defaults to True.
True kill_on_exit bool Whether to kill the debugged process when the debugger exits. Defaults to True.
True follow_children bool Whether to follow child processes. Defaults to True, which means that a new debugger will be created for each child process automatically.
True Returns:
Name Type DescriptionDebugger Debugger The Debugger object.
libdebug/libdebug.py def debugger(\n argv: str | list[str] = [],\n aslr: bool = True,\n env: dict[str, str] | None = None,\n escape_antidebug: bool = False,\n continue_to_binary_entrypoint: bool = True,\n auto_interrupt_on_command: bool = False,\n fast_memory: bool = True,\n kill_on_exit: bool = True,\n follow_children: bool = True,\n) -> Debugger:\n \"\"\"This function is used to create a new `Debugger` object. It returns a `Debugger` object.\n\n Args:\n argv (str | list[str], optional): The location of the binary to debug and any arguments to pass to it.\n aslr (bool, optional): Whether to enable ASLR. Defaults to True.\n env (dict[str, str], optional): The environment variables to use. Defaults to the same environment of the debugging script.\n escape_antidebug (bool): Whether to automatically attempt to patch antidebugger detectors based on the ptrace syscall.\n continue_to_binary_entrypoint (bool, optional): Whether to automatically continue to the binary entrypoint. Defaults to True.\n auto_interrupt_on_command (bool, optional): Whether to automatically interrupt the process when a command is issued. Defaults to False.\n fast_memory (bool, optional): Whether to use a faster memory reading method. Defaults to True.\n kill_on_exit (bool, optional): Whether to kill the debugged process when the debugger exits. Defaults to True.\n follow_children (bool, optional): Whether to follow child processes. 
Defaults to True, which means that a new debugger will be created for each child process automatically.\n\n Returns:\n Debugger: The `Debugger` object.\n \"\"\"\n if isinstance(argv, str):\n argv = [resolve_argv_path(argv)]\n elif argv:\n argv[0] = resolve_argv_path(argv[0])\n\n internal_debugger = InternalDebugger()\n internal_debugger.argv = argv\n internal_debugger.env = env\n internal_debugger.aslr_enabled = aslr\n internal_debugger.autoreach_entrypoint = continue_to_binary_entrypoint\n internal_debugger.auto_interrupt_on_command = auto_interrupt_on_command\n internal_debugger.escape_antidebug = escape_antidebug\n internal_debugger.fast_memory = fast_memory\n internal_debugger.kill_on_exit = kill_on_exit\n internal_debugger.follow_children = follow_children\n\n debugger = Debugger()\n debugger.post_init_(internal_debugger)\n\n internal_debugger.debugger = debugger\n\n # If we are attaching, we assume the architecture is the same as the current platform\n if argv:\n debugger.arch = elf_architecture(argv[0])\n\n return debugger\n"},{"location":"from_pydoc/generated/liblog/","title":"libdebug.liblog","text":""},{"location":"from_pydoc/generated/liblog/#libdebug.liblog.LibLog","title":"LibLog","text":"Custom logger singleton class that can be used to log messages to the console.
Source code inlibdebug/liblog.py class LibLog:\n \"\"\"Custom logger singleton class that can be used to log messages to the console.\"\"\"\n\n _instance = None\n\n def __new__(cls: type):\n \"\"\"Create a new instance of the class if it does not exist yet.\n\n Returns:\n LibLog: the instance of the class.\n \"\"\"\n if cls._instance is None:\n cls._instance = super().__new__(cls)\n cls._instance._initialized = False\n return cls._instance\n\n def __init__(self: LibLog) -> None:\n \"\"\"Initializes the logger.\"\"\"\n if self._initialized:\n return\n\n # Add custom log levels\n logging.addLevelName(60, \"SILENT\")\n logging.SILENT = 60\n\n # General logger\n self.general_logger = self._setup_logger(\"libdebug\", logging.INFO)\n\n # Component-specific loggers\n self.debugger_logger = self._setup_logger(\"debugger\", logging.SILENT)\n self.pipe_logger = self._setup_logger(\"pipe\", logging.SILENT)\n\n self._initialized = True\n\n def _setup_logger(self: LibLog, name: str, level: int) -> logging.Logger:\n \"\"\"Setup a logger with the given name and level.\n\n Args:\n name (str): name of the logger.\n level (int): logging level.\n\n Returns:\n logging.Logger: the logger object.\n \"\"\"\n logger = logging.getLogger(name)\n logger.setLevel(level)\n handler = logging.StreamHandler()\n formatter = logging.Formatter(\"%(message)s\")\n handler.setFormatter(formatter)\n logger.addHandler(handler)\n\n return logger\n\n def debugger(self: LibLog, message: str, *args: str, **kwargs: str) -> None:\n \"\"\"Log a message to the debugger logger.\n\n Args:\n message (str): the message to log.\n *args: positional arguments to pass to the logger.\n **kwargs: keyword arguments to pass to the logger.\n \"\"\"\n header = f\"[{ANSIColors.RED}DEBUGGER{ANSIColors.DEFAULT_COLOR}]\"\n self.debugger_logger.debug(f\"{header} {message}\", *args, **kwargs)\n\n def pipe(self: LibLog, message: str, *args: str, **kwargs: str) -> None:\n \"\"\"Log a message to the pipe logger.\n\n Args:\n message 
(str): the message to log.\n *args: positional arguments to pass to the logger.\n **kwargs: keyword arguments to pass to the logger.\n \"\"\"\n header = f\"[{ANSIColors.BLUE}PIPE{ANSIColors.DEFAULT_COLOR}]\"\n self.pipe_logger.debug(f\"{header} {message}\", *args, **kwargs)\n\n def info(self: LibLog, message: str, *args: str, **kwargs: str) -> None:\n \"\"\"Log a info message to the general logger.\n\n Args:\n message (str): the message to log.\n *args: positional arguments to pass to the logger.\n **kwargs: keyword arguments to pass to the logger.\n \"\"\"\n header = f\"[{ANSIColors.GREEN}INFO{ANSIColors.DEFAULT_COLOR}]\"\n self.general_logger.info(f\"{header} {message}\", *args, **kwargs)\n\n def warning(self: LibLog, message: str, *args: str, **kwargs: str) -> None:\n \"\"\"Log a warning message to the general logger.\n\n Args:\n message (str): the message to log.\n *args: positional arguments to pass to the logger.\n **kwargs: keyword arguments to pass to the logger.\n \"\"\"\n header = f\"[{ANSIColors.BRIGHT_YELLOW}WARNING{ANSIColors.DEFAULT_COLOR}]\"\n self.general_logger.warning(f\"{header} {message}\", *args, **kwargs)\n\n def error(self: LibLog, message: str, *args: str, **kwargs: str) -> None:\n \"\"\"Log an error message to the general logger.\n\n Args:\n message (str): the message to log.\n *args: positional arguments to pass to the logger.\n **kwargs: keyword arguments to pass to the logger.\n \"\"\"\n header = f\"[{ANSIColors.RED}ERROR{ANSIColors.DEFAULT_COLOR}]\"\n self.general_logger.error(f\"{header} {message}\", *args, **kwargs)\n"},{"location":"from_pydoc/generated/liblog/#libdebug.liblog.LibLog.__init__","title":"__init__()","text":"Initializes the logger.
Source code inlibdebug/liblog.py def __init__(self: LibLog) -> None:\n \"\"\"Initializes the logger.\"\"\"\n if self._initialized:\n return\n\n # Add custom log levels\n logging.addLevelName(60, \"SILENT\")\n logging.SILENT = 60\n\n # General logger\n self.general_logger = self._setup_logger(\"libdebug\", logging.INFO)\n\n # Component-specific loggers\n self.debugger_logger = self._setup_logger(\"debugger\", logging.SILENT)\n self.pipe_logger = self._setup_logger(\"pipe\", logging.SILENT)\n\n self._initialized = True\n"},{"location":"from_pydoc/generated/liblog/#libdebug.liblog.LibLog.__new__","title":"__new__()","text":"Create a new instance of the class if it does not exist yet.
Returns:
Name Type DescriptionLibLog the instance of the class.
Source code inlibdebug/liblog.py def __new__(cls: type):\n \"\"\"Create a new instance of the class if it does not exist yet.\n\n Returns:\n LibLog: the instance of the class.\n \"\"\"\n if cls._instance is None:\n cls._instance = super().__new__(cls)\n cls._instance._initialized = False\n return cls._instance\n"},{"location":"from_pydoc/generated/liblog/#libdebug.liblog.LibLog._setup_logger","title":"_setup_logger(name, level)","text":"Setup a logger with the given name and level.
Parameters:
Name Type Description Defaultname str name of the logger.
requiredlevel int logging level.
requiredReturns:
Type DescriptionLogger logging.Logger: the logger object.
Source code inlibdebug/liblog.py def _setup_logger(self: LibLog, name: str, level: int) -> logging.Logger:\n \"\"\"Setup a logger with the given name and level.\n\n Args:\n name (str): name of the logger.\n level (int): logging level.\n\n Returns:\n logging.Logger: the logger object.\n \"\"\"\n logger = logging.getLogger(name)\n logger.setLevel(level)\n handler = logging.StreamHandler()\n formatter = logging.Formatter(\"%(message)s\")\n handler.setFormatter(formatter)\n logger.addHandler(handler)\n\n return logger\n"},{"location":"from_pydoc/generated/liblog/#libdebug.liblog.LibLog.debugger","title":"debugger(message, *args, **kwargs)","text":"Log a message to the debugger logger.
Parameters:
Name Type Description Defaultmessage str the message to log.
required*args str positional arguments to pass to the logger.
() **kwargs str keyword arguments to pass to the logger.
{} Source code in libdebug/liblog.py def debugger(self: LibLog, message: str, *args: str, **kwargs: str) -> None:\n \"\"\"Log a message to the debugger logger.\n\n Args:\n message (str): the message to log.\n *args: positional arguments to pass to the logger.\n **kwargs: keyword arguments to pass to the logger.\n \"\"\"\n header = f\"[{ANSIColors.RED}DEBUGGER{ANSIColors.DEFAULT_COLOR}]\"\n self.debugger_logger.debug(f\"{header} {message}\", *args, **kwargs)\n"},{"location":"from_pydoc/generated/liblog/#libdebug.liblog.LibLog.error","title":"error(message, *args, **kwargs)","text":"Log an error message to the general logger.
Parameters:
Name Type Description Defaultmessage str the message to log.
required*args str positional arguments to pass to the logger.
() **kwargs str keyword arguments to pass to the logger.
{} Source code in libdebug/liblog.py def error(self: LibLog, message: str, *args: str, **kwargs: str) -> None:\n \"\"\"Log an error message to the general logger.\n\n Args:\n message (str): the message to log.\n *args: positional arguments to pass to the logger.\n **kwargs: keyword arguments to pass to the logger.\n \"\"\"\n header = f\"[{ANSIColors.RED}ERROR{ANSIColors.DEFAULT_COLOR}]\"\n self.general_logger.error(f\"{header} {message}\", *args, **kwargs)\n"},{"location":"from_pydoc/generated/liblog/#libdebug.liblog.LibLog.info","title":"info(message, *args, **kwargs)","text":"Log an info message to the general logger.
Parameters:
Name Type Description Defaultmessage str the message to log.
required*args str positional arguments to pass to the logger.
() **kwargs str keyword arguments to pass to the logger.
{} Source code in libdebug/liblog.py def info(self: LibLog, message: str, *args: str, **kwargs: str) -> None:\n \"\"\"Log a info message to the general logger.\n\n Args:\n message (str): the message to log.\n *args: positional arguments to pass to the logger.\n **kwargs: keyword arguments to pass to the logger.\n \"\"\"\n header = f\"[{ANSIColors.GREEN}INFO{ANSIColors.DEFAULT_COLOR}]\"\n self.general_logger.info(f\"{header} {message}\", *args, **kwargs)\n"},{"location":"from_pydoc/generated/liblog/#libdebug.liblog.LibLog.pipe","title":"pipe(message, *args, **kwargs)","text":"Log a message to the pipe logger.
Parameters:
Name Type Description Defaultmessage str the message to log.
required*args str positional arguments to pass to the logger.
() **kwargs str keyword arguments to pass to the logger.
{} Source code in libdebug/liblog.py def pipe(self: LibLog, message: str, *args: str, **kwargs: str) -> None:\n \"\"\"Log a message to the pipe logger.\n\n Args:\n message (str): the message to log.\n *args: positional arguments to pass to the logger.\n **kwargs: keyword arguments to pass to the logger.\n \"\"\"\n header = f\"[{ANSIColors.BLUE}PIPE{ANSIColors.DEFAULT_COLOR}]\"\n self.pipe_logger.debug(f\"{header} {message}\", *args, **kwargs)\n"},{"location":"from_pydoc/generated/liblog/#libdebug.liblog.LibLog.warning","title":"warning(message, *args, **kwargs)","text":"Log a warning message to the general logger.
Parameters:
Name Type Description Defaultmessage str the message to log.
required*args str positional arguments to pass to the logger.
() **kwargs str keyword arguments to pass to the logger.
{} Source code in libdebug/liblog.py def warning(self: LibLog, message: str, *args: str, **kwargs: str) -> None:\n \"\"\"Log a warning message to the general logger.\n\n Args:\n message (str): the message to log.\n *args: positional arguments to pass to the logger.\n **kwargs: keyword arguments to pass to the logger.\n \"\"\"\n header = f\"[{ANSIColors.BRIGHT_YELLOW}WARNING{ANSIColors.DEFAULT_COLOR}]\"\n self.general_logger.warning(f\"{header} {message}\", *args, **kwargs)\n"},{"location":"from_pydoc/generated/architectures/breakpoint_validator/","title":"libdebug.architectures.breakpoint_validator","text":""},{"location":"from_pydoc/generated/architectures/breakpoint_validator/#libdebug.architectures.breakpoint_validator.validate_hardware_breakpoint","title":"validate_hardware_breakpoint(arch, bp)","text":"Validate a hardware breakpoint for the specified architecture.
Source code inlibdebug/architectures/breakpoint_validator.py def validate_hardware_breakpoint(arch: str, bp: Breakpoint) -> None:\n \"\"\"Validate a hardware breakpoint for the specified architecture.\"\"\"\n if arch == \"aarch64\":\n validate_breakpoint_aarch64(bp)\n elif arch == \"amd64\":\n validate_breakpoint_amd64(bp)\n elif arch == \"i386\":\n validate_breakpoint_i386(bp)\n else:\n raise ValueError(f\"Architecture {arch} not supported\")\n"},{"location":"from_pydoc/generated/architectures/call_utilities_manager/","title":"libdebug.architectures.call_utilities_manager","text":""},{"location":"from_pydoc/generated/architectures/call_utilities_manager/#libdebug.architectures.call_utilities_manager.CallUtilitiesManager","title":"CallUtilitiesManager","text":" Bases: ABC
An architecture-independent interface for call instruction utilities.
Source code inlibdebug/architectures/call_utilities_manager.py class CallUtilitiesManager(ABC):\n \"\"\"An architecture-independent interface for call instruction utilities.\"\"\"\n\n @abstractmethod\n def is_call(self: CallUtilitiesManager, opcode_window: bytes) -> bool:\n \"\"\"Check if the current instruction is a call instruction.\"\"\"\n\n @abstractmethod\n def compute_call_skip(self: CallUtilitiesManager, opcode_window: bytes) -> int:\n \"\"\"Compute the address where to skip after the current call instruction.\"\"\"\n\n @abstractmethod\n def get_call_and_skip_amount(self, opcode_window: bytes) -> tuple[bool, int]:\n \"\"\"Check if the current instruction is a call instruction and compute the instruction size.\"\"\"\n"},{"location":"from_pydoc/generated/architectures/call_utilities_manager/#libdebug.architectures.call_utilities_manager.CallUtilitiesManager.compute_call_skip","title":"compute_call_skip(opcode_window) abstractmethod","text":"Compute the address where to skip after the current call instruction.
Source code inlibdebug/architectures/call_utilities_manager.py @abstractmethod\ndef compute_call_skip(self: CallUtilitiesManager, opcode_window: bytes) -> int:\n \"\"\"Compute the address where to skip after the current call instruction.\"\"\"\n"},{"location":"from_pydoc/generated/architectures/call_utilities_manager/#libdebug.architectures.call_utilities_manager.CallUtilitiesManager.get_call_and_skip_amount","title":"get_call_and_skip_amount(opcode_window) abstractmethod","text":"Check if the current instruction is a call instruction and compute the instruction size.
Source code inlibdebug/architectures/call_utilities_manager.py @abstractmethod\ndef get_call_and_skip_amount(self, opcode_window: bytes) -> tuple[bool, int]:\n \"\"\"Check if the current instruction is a call instruction and compute the instruction size.\"\"\"\n"},{"location":"from_pydoc/generated/architectures/call_utilities_manager/#libdebug.architectures.call_utilities_manager.CallUtilitiesManager.is_call","title":"is_call(opcode_window) abstractmethod","text":"Check if the current instruction is a call instruction.
Source code inlibdebug/architectures/call_utilities_manager.py @abstractmethod\ndef is_call(self: CallUtilitiesManager, opcode_window: bytes) -> bool:\n \"\"\"Check if the current instruction is a call instruction.\"\"\"\n"},{"location":"from_pydoc/generated/architectures/call_utilities_provider/","title":"libdebug.architectures.call_utilities_provider","text":""},{"location":"from_pydoc/generated/architectures/call_utilities_provider/#libdebug.architectures.call_utilities_provider.call_utilities_provider","title":"call_utilities_provider(architecture)","text":"Returns an instance of the call utilities provider to be used by the _InternalDebugger class.
libdebug/architectures/call_utilities_provider.py def call_utilities_provider(architecture: str) -> CallUtilitiesManager:\n \"\"\"Returns an instance of the call utilities provider to be used by the `_InternalDebugger` class.\"\"\"\n match architecture:\n case \"amd64\":\n return _amd64_call_utilities\n case \"aarch64\":\n return _aarch64_call_utilities\n case \"i386\":\n return _i386_call_utilities\n case _:\n raise NotImplementedError(f\"Architecture {architecture} not available.\")\n"},{"location":"from_pydoc/generated/architectures/ptrace_software_breakpoint_patcher/","title":"libdebug.architectures.ptrace_software_breakpoint_patcher","text":""},{"location":"from_pydoc/generated/architectures/ptrace_software_breakpoint_patcher/#libdebug.architectures.ptrace_software_breakpoint_patcher.software_breakpoint_byte_size","title":"software_breakpoint_byte_size(architecture)","text":"Return the size of a software breakpoint instruction.
Source code inlibdebug/architectures/ptrace_software_breakpoint_patcher.py def software_breakpoint_byte_size(architecture: str) -> int:\n \"\"\"Return the size of a software breakpoint instruction.\"\"\"\n match architecture:\n case \"amd64\" | \"i386\":\n return 1\n case \"aarch64\":\n return 4\n case _:\n raise ValueError(f\"Unsupported architecture: {architecture}\")\n"},{"location":"from_pydoc/generated/architectures/register_helper/","title":"libdebug.architectures.register_helper","text":""},{"location":"from_pydoc/generated/architectures/register_helper/#libdebug.architectures.register_helper.register_holder_provider","title":"register_holder_provider(architecture, register_file, fp_register_file)","text":"Returns an instance of the register holder to be used by the _InternalDebugger class.
libdebug/architectures/register_helper.py def register_holder_provider(\n architecture: str,\n register_file: object,\n fp_register_file: object,\n) -> RegisterHolder:\n \"\"\"Returns an instance of the register holder to be used by the `_InternalDebugger` class.\"\"\"\n match architecture:\n case \"amd64\":\n return Amd64PtraceRegisterHolder(register_file, fp_register_file)\n case \"aarch64\":\n return Aarch64PtraceRegisterHolder(register_file, fp_register_file)\n case \"i386\":\n if libcontext.platform == \"amd64\":\n return I386OverAMD64PtraceRegisterHolder(register_file, fp_register_file)\n else:\n return I386PtraceRegisterHolder(register_file, fp_register_file)\n case _:\n raise NotImplementedError(f\"Architecture {architecture} not available.\")\n"},{"location":"from_pydoc/generated/architectures/stack_unwinding_manager/","title":"libdebug.architectures.stack_unwinding_manager","text":""},{"location":"from_pydoc/generated/architectures/stack_unwinding_manager/#libdebug.architectures.stack_unwinding_manager.StackUnwindingManager","title":"StackUnwindingManager","text":" Bases: ABC
An architecture-independent interface for stack unwinding.
Source code inlibdebug/architectures/stack_unwinding_manager.py class StackUnwindingManager(ABC):\n \"\"\"An architecture-independent interface for stack unwinding.\"\"\"\n\n @abstractmethod\n def unwind(self: StackUnwindingManager, target: ThreadContext | Snapshot) -> list:\n \"\"\"Unwind the stack of the target process.\"\"\"\n\n @abstractmethod\n def get_return_address(self: StackUnwindingManager, target: ThreadContext | Snapshot, vmaps: list[MemoryMap]) -> int:\n \"\"\"Get the return address of the current function.\"\"\"\n"},{"location":"from_pydoc/generated/architectures/stack_unwinding_manager/#libdebug.architectures.stack_unwinding_manager.StackUnwindingManager.get_return_address","title":"get_return_address(target, vmaps) abstractmethod","text":"Get the return address of the current function.
Source code inlibdebug/architectures/stack_unwinding_manager.py @abstractmethod\ndef get_return_address(self: StackUnwindingManager, target: ThreadContext | Snapshot, vmaps: list[MemoryMap]) -> int:\n \"\"\"Get the return address of the current function.\"\"\"\n"},{"location":"from_pydoc/generated/architectures/stack_unwinding_manager/#libdebug.architectures.stack_unwinding_manager.StackUnwindingManager.unwind","title":"unwind(target) abstractmethod","text":"Unwind the stack of the target process.
Source code inlibdebug/architectures/stack_unwinding_manager.py @abstractmethod\ndef unwind(self: StackUnwindingManager, target: ThreadContext | Snapshot) -> list:\n \"\"\"Unwind the stack of the target process.\"\"\"\n"},{"location":"from_pydoc/generated/architectures/stack_unwinding_provider/","title":"libdebug.architectures.stack_unwinding_provider","text":""},{"location":"from_pydoc/generated/architectures/stack_unwinding_provider/#libdebug.architectures.stack_unwinding_provider.stack_unwinding_provider","title":"stack_unwinding_provider(architecture)","text":"Returns an instance of the stack unwinding provider to be used by the _InternalDebugger class.
libdebug/architectures/stack_unwinding_provider.py def stack_unwinding_provider(architecture: str) -> StackUnwindingManager:\n \"\"\"Returns an instance of the stack unwinding provider to be used by the `_InternalDebugger` class.\"\"\"\n match architecture:\n case \"amd64\":\n return _amd64_stack_unwinder\n case \"aarch64\":\n return _aarch64_stack_unwinder\n case \"i386\":\n return _i386_stack_unwinder\n case _:\n raise NotImplementedError(f\"Architecture {architecture} not available.\")\n"},{"location":"from_pydoc/generated/architectures/syscall_hijacker/","title":"libdebug.architectures.syscall_hijacker","text":""},{"location":"from_pydoc/generated/architectures/syscall_hijacker/#libdebug.architectures.syscall_hijacker.SyscallHijacker","title":"SyscallHijacker","text":"Class that provides syscall hijacking for the x86_64 architecture.
Source code inlibdebug/architectures/syscall_hijacker.py class SyscallHijacker:\n \"\"\"Class that provides syscall hijacking for the x86_64 architecture.\"\"\"\n\n # Allowed arguments for the hijacker\n allowed_args: set[str] = frozenset(\n {\n \"syscall_number\",\n \"syscall_arg0\",\n \"syscall_arg1\",\n \"syscall_arg2\",\n \"syscall_arg3\",\n \"syscall_arg4\",\n \"syscall_arg5\",\n },\n )\n\n def create_hijacker(\n self: SyscallHijacker,\n new_syscall: int,\n **kwargs: int,\n ) -> Callable[[ThreadContext, int], None]:\n \"\"\"Create a new hijacker for the given syscall.\n\n Args:\n new_syscall (int): The new syscall number.\n **kwargs: The keyword arguments.\n \"\"\"\n\n def hijack_on_enter_wrapper(d: ThreadContext, _: int) -> None:\n \"\"\"Wrapper for the hijack_on_enter method.\"\"\"\n self._hijack_on_enter(d, new_syscall, **kwargs)\n\n return hijack_on_enter_wrapper\n\n def _hijack_on_enter(\n self: SyscallHijacker,\n d: ThreadContext,\n new_syscall: int,\n **kwargs: int,\n ) -> None:\n \"\"\"Hijack the syscall on enter.\n\n Args:\n d (ThreadContext): The target ThreadContext.\n new_syscall (int): The new syscall number.\n **kwargs: The keyword arguments.\n \"\"\"\n d.syscall_number = new_syscall\n if \"syscall_arg0\" in kwargs:\n d.syscall_arg0 = kwargs.get(\"syscall_arg0\", False)\n if \"syscall_arg1\" in kwargs:\n d.syscall_arg1 = kwargs.get(\"syscall_arg1\", False)\n if \"syscall_arg2\" in kwargs:\n d.syscall_arg2 = kwargs.get(\"syscall_arg2\", False)\n if \"syscall_arg3\" in kwargs:\n d.syscall_arg3 = kwargs.get(\"syscall_arg3\", False)\n if \"syscall_arg4\" in kwargs:\n d.syscall_arg4 = kwargs.get(\"syscall_arg4\", False)\n if \"syscall_arg5\" in kwargs:\n d.syscall_arg5 = kwargs.get(\"syscall_arg5\", False)\n"},{"location":"from_pydoc/generated/architectures/syscall_hijacker/#libdebug.architectures.syscall_hijacker.SyscallHijacker._hijack_on_enter","title":"_hijack_on_enter(d, new_syscall, **kwargs)","text":"Hijack the syscall on enter.
Parameters:
Name Type Description Defaultd ThreadContext The target ThreadContext.
requirednew_syscall int The new syscall number.
required**kwargs int The keyword arguments.
{} Source code in libdebug/architectures/syscall_hijacker.py def _hijack_on_enter(\n self: SyscallHijacker,\n d: ThreadContext,\n new_syscall: int,\n **kwargs: int,\n) -> None:\n \"\"\"Hijack the syscall on enter.\n\n Args:\n d (ThreadContext): The target ThreadContext.\n new_syscall (int): The new syscall number.\n **kwargs: The keyword arguments.\n \"\"\"\n d.syscall_number = new_syscall\n if \"syscall_arg0\" in kwargs:\n d.syscall_arg0 = kwargs.get(\"syscall_arg0\", False)\n if \"syscall_arg1\" in kwargs:\n d.syscall_arg1 = kwargs.get(\"syscall_arg1\", False)\n if \"syscall_arg2\" in kwargs:\n d.syscall_arg2 = kwargs.get(\"syscall_arg2\", False)\n if \"syscall_arg3\" in kwargs:\n d.syscall_arg3 = kwargs.get(\"syscall_arg3\", False)\n if \"syscall_arg4\" in kwargs:\n d.syscall_arg4 = kwargs.get(\"syscall_arg4\", False)\n if \"syscall_arg5\" in kwargs:\n d.syscall_arg5 = kwargs.get(\"syscall_arg5\", False)\n"},{"location":"from_pydoc/generated/architectures/syscall_hijacker/#libdebug.architectures.syscall_hijacker.SyscallHijacker.create_hijacker","title":"create_hijacker(new_syscall, **kwargs)","text":"Create a new hijacker for the given syscall.
Parameters:
Name Type Description Defaultnew_syscall int The new syscall number.
required**kwargs int The keyword arguments.
{} Source code in libdebug/architectures/syscall_hijacker.py def create_hijacker(\n self: SyscallHijacker,\n new_syscall: int,\n **kwargs: int,\n) -> Callable[[ThreadContext, int], None]:\n \"\"\"Create a new hijacker for the given syscall.\n\n Args:\n new_syscall (int): The new syscall number.\n **kwargs: The keyword arguments.\n \"\"\"\n\n def hijack_on_enter_wrapper(d: ThreadContext, _: int) -> None:\n \"\"\"Wrapper for the hijack_on_enter method.\"\"\"\n self._hijack_on_enter(d, new_syscall, **kwargs)\n\n return hijack_on_enter_wrapper\n"},{"location":"from_pydoc/generated/architectures/thread_context_helper/","title":"libdebug.architectures.thread_context_helper","text":""},{"location":"from_pydoc/generated/architectures/thread_context_helper/#libdebug.architectures.thread_context_helper.thread_context_class_provider","title":"thread_context_class_provider(architecture)","text":"Returns the class of the thread context to be used by the _InternalDebugger class.
libdebug/architectures/thread_context_helper.py def thread_context_class_provider(\n architecture: str,\n) -> type[ThreadContext]:\n \"\"\"Returns the class of the thread context to be used by the `_InternalDebugger` class.\"\"\"\n match architecture:\n case \"amd64\":\n return Amd64ThreadContext\n case \"aarch64\":\n return Aarch64ThreadContext\n case \"i386\":\n if libcontext.platform == \"amd64\":\n return I386OverAMD64ThreadContext\n else:\n return I386ThreadContext\n case _:\n raise NotImplementedError(f\"Architecture {architecture} not available.\")\n"},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_breakpoint_validator/","title":"libdebug.architectures.aarch64.aarch64_breakpoint_validator","text":""},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_breakpoint_validator/#libdebug.architectures.aarch64.aarch64_breakpoint_validator.validate_breakpoint_aarch64","title":"validate_breakpoint_aarch64(bp)","text":"Validate a hardware breakpoint for the AARCH64 architecture.
Source code inlibdebug/architectures/aarch64/aarch64_breakpoint_validator.py def validate_breakpoint_aarch64(bp: Breakpoint) -> None:\n \"\"\"Validate a hardware breakpoint for the AARCH64 architecture.\"\"\"\n if bp.condition not in [\"r\", \"w\", \"rw\", \"x\"]:\n raise ValueError(\"Invalid condition for watchpoints. Supported conditions are 'r', 'w', 'rw', 'x'.\")\n\n if not (1 <= bp.length <= 8):\n raise ValueError(\"Invalid length for watchpoints. Supported lengths are between 1 and 8.\")\n\n if bp.condition != \"x\" and bp.address & 0x7:\n raise ValueError(\"Watchpoint address must be aligned to 8 bytes on aarch64. This is a kernel limitation.\")\n"},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_call_utilities/","title":"libdebug.architectures.aarch64.aarch64_call_utilities","text":""},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_call_utilities/#libdebug.architectures.aarch64.aarch64_call_utilities.Aarch64CallUtilities","title":"Aarch64CallUtilities","text":" Bases: CallUtilitiesManager
Class that provides call utilities for the AArch64 architecture.
Source code inlibdebug/architectures/aarch64/aarch64_call_utilities.py class Aarch64CallUtilities(CallUtilitiesManager):\n \"\"\"Class that provides call utilities for the AArch64 architecture.\"\"\"\n\n def is_call(self: Aarch64CallUtilities, opcode_window: bytes) -> bool:\n \"\"\"Check if the current instruction is a call instruction.\"\"\"\n # Check for BL instruction\n if (opcode_window[3] & 0xFC) == 0x94:\n return True\n\n # Check for BLR instruction\n return bool(opcode_window[3] == 214 and opcode_window[2] & 63 == 63)\n\n def compute_call_skip(self: Aarch64CallUtilities, opcode_window: bytes) -> int:\n \"\"\"Compute the instruction size of the current call instruction.\"\"\"\n # Check for BL instruction\n if self.is_call(opcode_window):\n return 4\n\n return 0\n\n def get_call_and_skip_amount(self: Aarch64CallUtilities, opcode_window: bytes) -> tuple[bool, int]:\n \"\"\"Check if the current instruction is a call instruction and compute the instruction size.\"\"\"\n skip = self.compute_call_skip(opcode_window)\n return skip != 0, skip\n"},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_call_utilities/#libdebug.architectures.aarch64.aarch64_call_utilities.Aarch64CallUtilities.compute_call_skip","title":"compute_call_skip(opcode_window)","text":"Compute the instruction size of the current call instruction.
Source code inlibdebug/architectures/aarch64/aarch64_call_utilities.py def compute_call_skip(self: Aarch64CallUtilities, opcode_window: bytes) -> int:\n \"\"\"Compute the instruction size of the current call instruction.\"\"\"\n # Check for BL instruction\n if self.is_call(opcode_window):\n return 4\n\n return 0\n"},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_call_utilities/#libdebug.architectures.aarch64.aarch64_call_utilities.Aarch64CallUtilities.get_call_and_skip_amount","title":"get_call_and_skip_amount(opcode_window)","text":"Check if the current instruction is a call instruction and compute the instruction size.
Source code inlibdebug/architectures/aarch64/aarch64_call_utilities.py def get_call_and_skip_amount(self: Aarch64CallUtilities, opcode_window: bytes) -> tuple[bool, int]:\n \"\"\"Check if the current instruction is a call instruction and compute the instruction size.\"\"\"\n skip = self.compute_call_skip(opcode_window)\n return skip != 0, skip\n"},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_call_utilities/#libdebug.architectures.aarch64.aarch64_call_utilities.Aarch64CallUtilities.is_call","title":"is_call(opcode_window)","text":"Check if the current instruction is a call instruction.
Source code in libdebug/architectures/aarch64/aarch64_call_utilities.py def is_call(self: Aarch64CallUtilities, opcode_window: bytes) -> bool:\n \"\"\"Check if the current instruction is a call instruction.\"\"\"\n # Check for BL instruction\n if (opcode_window[3] & 0xFC) == 0x94:\n return True\n\n # Check for BLR instruction\n return bool(opcode_window[3] == 214 and opcode_window[2] & 63 == 63)\n"},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_ptrace_register_holder/","title":"libdebug.architectures.aarch64.aarch64_ptrace_register_holder","text":""},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_ptrace_register_holder/#libdebug.architectures.aarch64.aarch64_ptrace_register_holder.Aarch64PtraceRegisterHolder","title":"Aarch64PtraceRegisterHolder dataclass","text":" Bases: PtraceRegisterHolder
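On AArch64 every instruction is a little-endian 32-bit word, so `is_call` only needs the top bytes of the window: BL has `100101` in its top six bits (hence masking byte 3 with `0xFC` and comparing against `0x94`), while BLR is the fixed pattern `0xD63F0000 | (Rn << 5)` (hence `214`, i.e. `0xD6`, in byte 3 and the `& 63` check on byte 2). A standalone sketch of the same checks; the helper name and sample encodings below are illustrative, not part of libdebug's API:

```python
import struct

def is_aarch64_call(opcode_window: bytes) -> bool:
    """Mirror of the BL/BLR checks on a little-endian 4-byte instruction."""
    # BL <imm26>: top six bits are 0b100101, so byte 3 is 0x94..0x97
    if (opcode_window[3] & 0xFC) == 0x94:
        return True
    # BLR <Xn>: fixed encoding 0xD63F0000 | (n << 5)
    return opcode_window[3] == 0xD6 and (opcode_window[2] & 0x3F) == 0x3F

# Sample encodings: BL #0 is 0x94000000, BLR x8 is 0xD63F0100, NOP is 0xD503201F
bl = struct.pack("<I", 0x94000000)
blr_x8 = struct.pack("<I", 0xD63F0000 | (8 << 5))
nop = struct.pack("<I", 0xD503201F)
```

Because every AArch64 instruction is exactly four bytes, `compute_call_skip` can simply return 4 whenever the check succeeds.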
A class that provides views and setters for the registers of an aarch64 process.
Source code inlibdebug/architectures/aarch64/aarch64_ptrace_register_holder.py @dataclass\nclass Aarch64PtraceRegisterHolder(PtraceRegisterHolder):\n \"\"\"A class that provides views and setters for the register of an aarch64 process.\"\"\"\n\n def provide_regs_class(self: Aarch64PtraceRegisterHolder) -> type:\n \"\"\"Provide a class to hold the register accessors.\"\"\"\n return Aarch64Registers\n\n def provide_regs(self: Aarch64PtraceRegisterHolder) -> list[str]:\n \"\"\"Provide the list of registers, excluding the vector and fp registers.\"\"\"\n return AARCH64_REGS\n\n def provide_vector_fp_regs(self: Aarch64PtraceRegisterHolder) -> list[tuple[str]]:\n \"\"\"Provide the list of vector and floating point registers.\"\"\"\n return self._vector_fp_registers\n\n def provide_special_regs(self: Aarch64PtraceRegisterHolder) -> list[str]:\n \"\"\"Provide the list of special registers, which are not intended for general-purpose use.\"\"\"\n return AARCH64_SPECIAL_REGS\n\n def apply_on_regs(self: Aarch64PtraceRegisterHolder, target: Aarch64Registers, target_class: type) -> None:\n \"\"\"Apply the register accessors to the Aarch64Registers class.\"\"\"\n target.register_file = self.register_file\n target._fp_register_file = self.fp_register_file\n\n if hasattr(target_class, \"w0\"):\n return\n\n self._vector_fp_registers = []\n\n for i in range(31):\n name_64 = f\"x{i}\"\n name_32 = f\"w{i}\"\n\n setattr(target_class, name_64, _get_property_64(name_64))\n setattr(target_class, name_32, _get_property_32(name_64))\n\n for reg in AARCH64_SPECIAL_REGS:\n setattr(target_class, reg, _get_property_64(reg))\n\n # setup the floating point registers\n for i in range(32):\n name_v = f\"v{i}\"\n name_128 = f\"q{i}\"\n name_64 = f\"d{i}\"\n name_32 = f\"s{i}\"\n name_16 = f\"h{i}\"\n name_8 = f\"b{i}\"\n setattr(target_class, name_v, _get_property_fp_128(name_v, i))\n setattr(target_class, name_128, _get_property_fp_128(name_128, i))\n setattr(target_class, name_64, 
_get_property_fp_64(name_64, i))\n setattr(target_class, name_32, _get_property_fp_32(name_32, i))\n setattr(target_class, name_16, _get_property_fp_16(name_16, i))\n setattr(target_class, name_8, _get_property_fp_8(name_8, i))\n self._vector_fp_registers.append((name_v, name_128, name_64, name_32, name_16, name_8))\n\n # setup special aarch64 registers\n target_class.pc = _get_property_64(\"pc\")\n target_class.sp = _get_property_64(\"sp\")\n target_class.lr = _get_property_64(\"x30\")\n target_class.fp = _get_property_64(\"x29\")\n target_class.xzr = _get_property_zr(\"xzr\")\n target_class.wzr = _get_property_zr(\"wzr\")\n\n Aarch64PtraceRegisterHolder._vector_fp_registers = self._vector_fp_registers\n\n def apply_on_thread(self: Aarch64PtraceRegisterHolder, target: ThreadContext, target_class: type) -> None:\n \"\"\"Apply the register accessors to the thread class.\"\"\"\n target.register_file = self.register_file\n\n # If the accessors are already defined, we don't need to redefine them\n if hasattr(target_class, \"instruction_pointer\"):\n return\n\n # setup generic \"instruction_pointer\" property\n target_class.instruction_pointer = _get_property_64(\"pc\")\n\n # setup generic syscall properties\n target_class.syscall_return = _get_property_64(\"x0\")\n target_class.syscall_arg0 = _get_property_64(\"x0\")\n target_class.syscall_arg1 = _get_property_64(\"x1\")\n target_class.syscall_arg2 = _get_property_64(\"x2\")\n target_class.syscall_arg3 = _get_property_64(\"x3\")\n target_class.syscall_arg4 = _get_property_64(\"x4\")\n target_class.syscall_arg5 = _get_property_64(\"x5\")\n\n # syscall number handling is special on aarch64, as the original number is stored in x8\n # but writing to x8 isn't enough to change the actual called syscall\n target_class.syscall_number = _get_property_syscall_num()\n\n def cleanup(self: Aarch64PtraceRegisterHolder) -> None:\n \"\"\"Clean up the register accessors from the Aarch64Registers class.\"\"\"\n for attr_name, attr_value 
in list(Aarch64Registers.__dict__.items()):\n if isinstance(attr_value, property):\n delattr(Aarch64Registers, attr_name)\n"},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_ptrace_register_holder/#libdebug.architectures.aarch64.aarch64_ptrace_register_holder.Aarch64PtraceRegisterHolder.apply_on_regs","title":"apply_on_regs(target, target_class)","text":"Apply the register accessors to the Aarch64Registers class.
Source code inlibdebug/architectures/aarch64/aarch64_ptrace_register_holder.py def apply_on_regs(self: Aarch64PtraceRegisterHolder, target: Aarch64Registers, target_class: type) -> None:\n \"\"\"Apply the register accessors to the Aarch64Registers class.\"\"\"\n target.register_file = self.register_file\n target._fp_register_file = self.fp_register_file\n\n if hasattr(target_class, \"w0\"):\n return\n\n self._vector_fp_registers = []\n\n for i in range(31):\n name_64 = f\"x{i}\"\n name_32 = f\"w{i}\"\n\n setattr(target_class, name_64, _get_property_64(name_64))\n setattr(target_class, name_32, _get_property_32(name_64))\n\n for reg in AARCH64_SPECIAL_REGS:\n setattr(target_class, reg, _get_property_64(reg))\n\n # setup the floating point registers\n for i in range(32):\n name_v = f\"v{i}\"\n name_128 = f\"q{i}\"\n name_64 = f\"d{i}\"\n name_32 = f\"s{i}\"\n name_16 = f\"h{i}\"\n name_8 = f\"b{i}\"\n setattr(target_class, name_v, _get_property_fp_128(name_v, i))\n setattr(target_class, name_128, _get_property_fp_128(name_128, i))\n setattr(target_class, name_64, _get_property_fp_64(name_64, i))\n setattr(target_class, name_32, _get_property_fp_32(name_32, i))\n setattr(target_class, name_16, _get_property_fp_16(name_16, i))\n setattr(target_class, name_8, _get_property_fp_8(name_8, i))\n self._vector_fp_registers.append((name_v, name_128, name_64, name_32, name_16, name_8))\n\n # setup special aarch64 registers\n target_class.pc = _get_property_64(\"pc\")\n target_class.sp = _get_property_64(\"sp\")\n target_class.lr = _get_property_64(\"x30\")\n target_class.fp = _get_property_64(\"x29\")\n target_class.xzr = _get_property_zr(\"xzr\")\n target_class.wzr = _get_property_zr(\"wzr\")\n\n Aarch64PtraceRegisterHolder._vector_fp_registers = 
self._vector_fp_registers\n"},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_ptrace_register_holder/#libdebug.architectures.aarch64.aarch64_ptrace_register_holder.Aarch64PtraceRegisterHolder.apply_on_thread","title":"apply_on_thread(target, target_class)","text":"Apply the register accessors to the thread class.
Source code inlibdebug/architectures/aarch64/aarch64_ptrace_register_holder.py def apply_on_thread(self: Aarch64PtraceRegisterHolder, target: ThreadContext, target_class: type) -> None:\n \"\"\"Apply the register accessors to the thread class.\"\"\"\n target.register_file = self.register_file\n\n # If the accessors are already defined, we don't need to redefine them\n if hasattr(target_class, \"instruction_pointer\"):\n return\n\n # setup generic \"instruction_pointer\" property\n target_class.instruction_pointer = _get_property_64(\"pc\")\n\n # setup generic syscall properties\n target_class.syscall_return = _get_property_64(\"x0\")\n target_class.syscall_arg0 = _get_property_64(\"x0\")\n target_class.syscall_arg1 = _get_property_64(\"x1\")\n target_class.syscall_arg2 = _get_property_64(\"x2\")\n target_class.syscall_arg3 = _get_property_64(\"x3\")\n target_class.syscall_arg4 = _get_property_64(\"x4\")\n target_class.syscall_arg5 = _get_property_64(\"x5\")\n\n # syscall number handling is special on aarch64, as the original number is stored in x8\n # but writing to x8 isn't enough to change the actual called syscall\n target_class.syscall_number = _get_property_syscall_num()\n"},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_ptrace_register_holder/#libdebug.architectures.aarch64.aarch64_ptrace_register_holder.Aarch64PtraceRegisterHolder.cleanup","title":"cleanup()","text":"Clean up the register accessors from the Aarch64Registers class.
Source code in libdebug/architectures/aarch64/aarch64_ptrace_register_holder.py def cleanup(self: Aarch64PtraceRegisterHolder) -> None:\n \"\"\"Clean up the register accessors from the Aarch64Registers class.\"\"\"\n for attr_name, attr_value in list(Aarch64Registers.__dict__.items()):\n if isinstance(attr_value, property):\n delattr(Aarch64Registers, attr_name)\n"},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_ptrace_register_holder/#libdebug.architectures.aarch64.aarch64_ptrace_register_holder.Aarch64PtraceRegisterHolder.provide_regs","title":"provide_regs()","text":"Provide the list of registers, excluding the vector and fp registers.
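The holder attaches `property` objects to the registers class at runtime (closures over each register name) and `cleanup` later strips them with `delattr`. The pattern can be sketched with a stand-in class; `Regs` and `_make_property` are hypothetical names used here for illustration, not libdebug's:

```python
class Regs:
    """Stand-in for Aarch64Registers: just wraps a raw register file."""
    def __init__(self) -> None:
        self.register_file = {"x0": 0}

def _make_property(name: str) -> property:
    # Same closure pattern as the _get_property_64 helpers
    def getter(self):
        return self.register_file[name]
    def setter(self, value):
        self.register_file[name] = value
    return property(getter, setter)

# apply_on_regs: accessors go on the class, shared by every instance
Regs.x0 = _make_property("x0")
r = Regs()
r.x0 = 0xDEADBEEF

# cleanup: strip every property again, exactly as cleanup() does
for attr_name, attr_value in list(Regs.__dict__.items()):
    if isinstance(attr_value, property):
        delattr(Regs, attr_name)
```

Defining the accessors on the class rather than the instance is why `apply_on_regs` can return early once `w0` already exists: the work only needs to happen once per process-wide class.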
Source code in libdebug/architectures/aarch64/aarch64_ptrace_register_holder.py def provide_regs(self: Aarch64PtraceRegisterHolder) -> list[str]:\n \"\"\"Provide the list of registers, excluding the vector and fp registers.\"\"\"\n return AARCH64_REGS\n"},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_ptrace_register_holder/#libdebug.architectures.aarch64.aarch64_ptrace_register_holder.Aarch64PtraceRegisterHolder.provide_regs_class","title":"provide_regs_class()","text":"Provide a class to hold the register accessors.
Source code in libdebug/architectures/aarch64/aarch64_ptrace_register_holder.py def provide_regs_class(self: Aarch64PtraceRegisterHolder) -> type:\n \"\"\"Provide a class to hold the register accessors.\"\"\"\n return Aarch64Registers\n"},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_ptrace_register_holder/#libdebug.architectures.aarch64.aarch64_ptrace_register_holder.Aarch64PtraceRegisterHolder.provide_special_regs","title":"provide_special_regs()","text":"Provide the list of special registers, which are not intended for general-purpose use.
Source code in libdebug/architectures/aarch64/aarch64_ptrace_register_holder.py def provide_special_regs(self: Aarch64PtraceRegisterHolder) -> list[str]:\n \"\"\"Provide the list of special registers, which are not intended for general-purpose use.\"\"\"\n return AARCH64_SPECIAL_REGS\n"},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_ptrace_register_holder/#libdebug.architectures.aarch64.aarch64_ptrace_register_holder.provide_vector_fp_regs","title":"provide_vector_fp_regs()","text":"Provide the list of vector and floating point registers.
Source code in libdebug/architectures/aarch64/aarch64_ptrace_register_holder.py def provide_vector_fp_regs(self: Aarch64PtraceRegisterHolder) -> list[tuple[str]]:\n \"\"\"Provide the list of vector and floating point registers.\"\"\"\n return self._vector_fp_registers\n"},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_registers/","title":"libdebug.architectures.aarch64.aarch64_registers","text":""},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_registers/#libdebug.architectures.aarch64.aarch64_registers.Aarch64Registers","title":"Aarch64Registers","text":" Bases: Registers
This class holds the state of the architecture-dependent registers of a process.
Source code in libdebug/architectures/aarch64/aarch64_registers.py class Aarch64Registers(Registers):\n \"\"\"This class holds the state of the architectural-dependent registers of a process.\"\"\"\n"},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_stack_unwinder/","title":"libdebug.architectures.aarch64.aarch64_stack_unwinder","text":""},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_stack_unwinder/#libdebug.architectures.aarch64.aarch64_stack_unwinder.Aarch64StackUnwinder","title":"Aarch64StackUnwinder","text":" Bases: StackUnwindingManager
Class that provides stack unwinding for the AArch64 architecture.
Source code inlibdebug/architectures/aarch64/aarch64_stack_unwinder.py class Aarch64StackUnwinder(StackUnwindingManager):\n \"\"\"Class that provides stack unwinding for the AArch64 architecture.\"\"\"\n\n def unwind(self: Aarch64StackUnwinder, target: ThreadContext | Snapshot) -> list:\n \"\"\"Unwind the stack of a process.\n\n Args:\n target (ThreadContext): The target ThreadContext.\n\n Returns:\n list: A list of return addresses.\n \"\"\"\n assert hasattr(target.regs, \"pc\")\n\n frame_pointer = target.regs.x29\n\n # Instead of isinstance, we check if the target has the maps attribute to avoid circular imports\n vmaps = target.maps if hasattr(target, \"maps\") else target._internal_debugger.debugging_interface.get_maps()\n\n initial_link_register = None\n\n try:\n initial_link_register = self.get_return_address(target, vmaps)\n except ValueError:\n liblog.warning(\n \"Failed to get the return address. Check stack frame registers (e.g., base pointer). The stack trace may be incomplete.\",\n )\n\n stack_trace = [target.regs.pc, initial_link_register] if initial_link_register else [target.regs.pc]\n\n # Follow the frame chain\n while frame_pointer:\n try:\n link_register = int.from_bytes(target.memory[frame_pointer + 8, 8, \"absolute\"], sys.byteorder)\n frame_pointer = int.from_bytes(target.memory[frame_pointer, 8, \"absolute\"], sys.byteorder)\n\n if not vmaps.filter(link_register):\n break\n\n # Leaf functions don't set the previous stack frame pointer\n # But they set the link register to the return address\n # Non-leaf functions set both\n if initial_link_register and link_register == initial_link_register:\n initial_link_register = None\n continue\n\n stack_trace.append(link_register)\n except (OSError, ValueError):\n break\n\n return stack_trace\n\n def get_return_address(\n self: Aarch64StackUnwinder,\n target: ThreadContext | Snapshot,\n vmaps: MemoryMapList[MemoryMap],\n ) -> int:\n \"\"\"Get the return address of the current function.\n\n Args:\n target 
(ThreadContext): The target ThreadContext.\n vmaps (MemoryMapList[MemoryMap]): The memory maps of the process.\n\n Returns:\n int: The return address.\n \"\"\"\n return_address = target.regs.x30\n\n if not vmaps.filter(return_address):\n raise ValueError(\"Return address not in any valid memory map\")\n\n return return_address\n"},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_stack_unwinder/#libdebug.architectures.aarch64.aarch64_stack_unwinder.Aarch64StackUnwinder.get_return_address","title":"get_return_address(target, vmaps)","text":"Get the return address of the current function.
Parameters:
target (ThreadContext): The target ThreadContext. Required.
vmaps (MemoryMapList[MemoryMap]): The memory maps of the process. Required.
Returns:
int: The return address.
Source code in libdebug/architectures/aarch64/aarch64_stack_unwinder.py def get_return_address(\n self: Aarch64StackUnwinder,\n target: ThreadContext | Snapshot,\n vmaps: MemoryMapList[MemoryMap],\n) -> int:\n \"\"\"Get the return address of the current function.\n\n Args:\n target (ThreadContext): The target ThreadContext.\n vmaps (MemoryMapList[MemoryMap]): The memory maps of the process.\n\n Returns:\n int: The return address.\n \"\"\"\n return_address = target.regs.x30\n\n if not vmaps.filter(return_address):\n raise ValueError(\"Return address not in any valid memory map\")\n\n return return_address\n"},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_stack_unwinder/#libdebug.architectures.aarch64.aarch64_stack_unwinder.Aarch64StackUnwinder.unwind","title":"unwind(target)","text":"Unwind the stack of a process.
Parameters:
target (ThreadContext): The target ThreadContext. Required.
Returns:
list: A list of return addresses.
Source code inlibdebug/architectures/aarch64/aarch64_stack_unwinder.py def unwind(self: Aarch64StackUnwinder, target: ThreadContext | Snapshot) -> list:\n \"\"\"Unwind the stack of a process.\n\n Args:\n target (ThreadContext): The target ThreadContext.\n\n Returns:\n list: A list of return addresses.\n \"\"\"\n assert hasattr(target.regs, \"pc\")\n\n frame_pointer = target.regs.x29\n\n # Instead of isinstance, we check if the target has the maps attribute to avoid circular imports\n vmaps = target.maps if hasattr(target, \"maps\") else target._internal_debugger.debugging_interface.get_maps()\n\n initial_link_register = None\n\n try:\n initial_link_register = self.get_return_address(target, vmaps)\n except ValueError:\n liblog.warning(\n \"Failed to get the return address. Check stack frame registers (e.g., base pointer). The stack trace may be incomplete.\",\n )\n\n stack_trace = [target.regs.pc, initial_link_register] if initial_link_register else [target.regs.pc]\n\n # Follow the frame chain\n while frame_pointer:\n try:\n link_register = int.from_bytes(target.memory[frame_pointer + 8, 8, \"absolute\"], sys.byteorder)\n frame_pointer = int.from_bytes(target.memory[frame_pointer, 8, \"absolute\"], sys.byteorder)\n\n if not vmaps.filter(link_register):\n break\n\n # Leaf functions don't set the previous stack frame pointer\n # But they set the link register to the return address\n # Non-leaf functions set both\n if initial_link_register and link_register == initial_link_register:\n initial_link_register = None\n continue\n\n stack_trace.append(link_register)\n except (OSError, ValueError):\n break\n\n return 
stack_trace\n"},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_thread_context/","title":"libdebug.architectures.aarch64.aarch64_thread_context","text":""},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_thread_context/#libdebug.architectures.aarch64.aarch64_thread_context.Aarch64ThreadContext","title":"Aarch64ThreadContext","text":" Bases: ThreadContext
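The loop above follows the AArch64 frame-record chain: each frame pointer points at a pair of saved 64-bit values, the caller's `x29` at `fp` and the saved `x30` (return address) at `fp + 8`. That walk can be sketched over a hypothetical memory layout; this sketch omits the memory-map filtering and the leaf-function deduplication the real unwinder performs, and the addresses below are made up for illustration:

```python
def walk_frames(memory: dict[int, int], fp: int, pc: int) -> list[int]:
    """Follow AArch64 frame records: *fp = caller's fp, *(fp + 8) = saved lr."""
    trace = [pc]
    while fp:
        try:
            trace.append(memory[fp + 8])  # saved link register
            fp = memory[fp]               # caller's frame pointer
        except KeyError:                  # ran off the simulated stack
            break
    return trace

# Two frame records; the outermost stores a zero fp, terminating the chain
memory = {
    0x7FFF0040: 0x7FFF0080, 0x7FFF0048: 0x400123,  # inner frame
    0x7FFF0080: 0x0,        0x7FFF0088: 0x400456,  # outer frame
}
trace = walk_frames(memory, fp=0x7FFF0040, pc=0x4000AA)
```

The real implementation reads the two words via `target.memory[..., 8, "absolute"]` and stops as soon as a recovered link register falls outside every mapped region, which is what keeps garbage stack data from producing bogus trace entries.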
This object represents a thread in the context of the target aarch64 process. It holds information about the thread's state, registers and stack.
Source code in libdebug/architectures/aarch64/aarch64_thread_context.py class Aarch64ThreadContext(ThreadContext):\n \"\"\"This object represents a thread in the context of the target aarch64 process. It holds information about the thread's state, registers and stack.\"\"\"\n\n def __init__(self: Aarch64ThreadContext, thread_id: int, registers: Aarch64PtraceRegisterHolder) -> None:\n \"\"\"Initialize the thread context with the given thread id.\"\"\"\n super().__init__(thread_id, registers)\n\n # Register the thread properties\n self._register_holder.apply_on_thread(self, Aarch64ThreadContext)\n"},{"location":"from_pydoc/generated/architectures/aarch64/aarch64_thread_context/#libdebug.architectures.aarch64.aarch64_thread_context.Aarch64ThreadContext.__init__","title":"__init__(thread_id, registers)","text":"Initialize the thread context with the given thread id.
Source code in libdebug/architectures/aarch64/aarch64_thread_context.py def __init__(self: Aarch64ThreadContext, thread_id: int, registers: Aarch64PtraceRegisterHolder) -> None:\n \"\"\"Initialize the thread context with the given thread id.\"\"\"\n super().__init__(thread_id, registers)\n\n # Register the thread properties\n self._register_holder.apply_on_thread(self, Aarch64ThreadContext)\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_breakpoint_validator/","title":"libdebug.architectures.amd64.amd64_breakpoint_validator","text":""},{"location":"from_pydoc/generated/architectures/amd64/amd64_breakpoint_validator/#libdebug.architectures.amd64.amd64_breakpoint_validator.validate_breakpoint_amd64","title":"validate_breakpoint_amd64(bp)","text":"Validate a hardware breakpoint for the AMD64 architecture.
Source code in libdebug/architectures/amd64/amd64_breakpoint_validator.py def validate_breakpoint_amd64(bp: Breakpoint) -> None:\n \"\"\"Validate a hardware breakpoint for the AMD64 architecture.\"\"\"\n if bp.condition not in [\"w\", \"rw\", \"x\"]:\n raise ValueError(\"Invalid condition for watchpoints. Supported conditions are 'w', 'rw', 'x'.\")\n\n if bp.length not in [1, 2, 4, 8]:\n raise ValueError(\"Invalid length for watchpoints. Supported lengths are 1, 2, 4, 8.\")\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_call_utilities/","title":"libdebug.architectures.amd64.amd64_call_utilities","text":""},{"location":"from_pydoc/generated/architectures/amd64/amd64_call_utilities/#libdebug.architectures.amd64.amd64_call_utilities.Amd64CallUtilities","title":"Amd64CallUtilities","text":" Bases: CallUtilitiesManager
Class that provides call utilities for the x86_64 architecture.
Source code inlibdebug/architectures/amd64/amd64_call_utilities.py class Amd64CallUtilities(CallUtilitiesManager):\n \"\"\"Class that provides call utilities for the x86_64 architecture.\"\"\"\n\n def is_call(self, opcode_window: bytes) -> bool:\n \"\"\"Check if the current instruction is a call instruction.\"\"\"\n # Check for direct CALL (E8 xx xx xx xx)\n if opcode_window[0] == 0xE8:\n return True\n\n # Check for indirect CALL using ModR/M (FF /2)\n if opcode_window[0] == 0xFF:\n # Extract ModR/M byte\n modRM = opcode_window[1]\n reg = (modRM >> 3) & 0x07 # Middle three bits\n\n if reg == 2:\n return True\n\n return False\n\n def compute_call_skip(self, opcode_window: bytes) -> int:\n \"\"\"Compute the instruction size of the current call instruction.\"\"\"\n # Check for direct CALL (E8 xx xx xx xx)\n if opcode_window[0] == 0xE8:\n return 5 # Direct CALL\n\n # Check for indirect CALL using ModR/M (FF /2)\n if opcode_window[0] == 0xFF:\n # Extract ModR/M byte\n modRM = opcode_window[1]\n mod = (modRM >> 6) & 0x03 # First two bits\n reg = (modRM >> 3) & 0x07 # Next three bits\n\n # Check if reg field is 010 (indirect CALL)\n if reg == 2:\n if mod == 0:\n if (modRM & 0x07) == 4:\n return 3 + (4 if opcode_window[2] == 0x25 else 0) # SIB byte + optional disp32\n elif (modRM & 0x07) == 5:\n return 6 # disp32\n return 2 # No displacement\n elif mod == 1:\n return 3 # disp8\n elif mod == 2:\n return 6 # disp32\n elif mod == 3:\n return 2 # Register direct\n\n return 0 # Not a CALL\n\n def get_call_and_skip_amount(self, opcode_window: bytes) -> tuple[bool, int]:\n \"\"\"Check if the current instruction is a call instruction and compute the instruction size.\"\"\"\n skip = self.compute_call_skip(opcode_window)\n return skip != 0, skip\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_call_utilities/#libdebug.architectures.amd64.amd64_call_utilities.Amd64CallUtilities.compute_call_skip","title":"compute_call_skip(opcode_window)","text":"Compute the 
instruction size of the current call instruction.
Source code inlibdebug/architectures/amd64/amd64_call_utilities.py def compute_call_skip(self, opcode_window: bytes) -> int:\n \"\"\"Compute the instruction size of the current call instruction.\"\"\"\n # Check for direct CALL (E8 xx xx xx xx)\n if opcode_window[0] == 0xE8:\n return 5 # Direct CALL\n\n # Check for indirect CALL using ModR/M (FF /2)\n if opcode_window[0] == 0xFF:\n # Extract ModR/M byte\n modRM = opcode_window[1]\n mod = (modRM >> 6) & 0x03 # First two bits\n reg = (modRM >> 3) & 0x07 # Next three bits\n\n # Check if reg field is 010 (indirect CALL)\n if reg == 2:\n if mod == 0:\n if (modRM & 0x07) == 4:\n return 3 + (4 if opcode_window[2] == 0x25 else 0) # SIB byte + optional disp32\n elif (modRM & 0x07) == 5:\n return 6 # disp32\n return 2 # No displacement\n elif mod == 1:\n return 3 # disp8\n elif mod == 2:\n return 6 # disp32\n elif mod == 3:\n return 2 # Register direct\n\n return 0 # Not a CALL\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_call_utilities/#libdebug.architectures.amd64.amd64_call_utilities.Amd64CallUtilities.get_call_and_skip_amount","title":"get_call_and_skip_amount(opcode_window)","text":"Check if the current instruction is a call instruction and compute the instruction size.
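The length computation decodes just enough of the ModR/M byte: `mod` selects the displacement size, and `rm == 4` with `mod == 0` signals a SIB byte (plus a disp32 when that SIB byte is `0x25`, the absolute-address form). A standalone sketch mirroring those cases; the function name is illustrative, not libdebug's:

```python
def amd64_call_skip(window: bytes) -> int:
    """Instruction length of a CALL at window[0], or 0 if it is not a CALL."""
    if window[0] == 0xE8:
        return 5                         # CALL rel32: opcode + disp32
    if window[0] != 0xFF:
        return 0
    mod = (window[1] >> 6) & 0x03
    reg = (window[1] >> 3) & 0x07
    if reg != 2:                         # FF /2 is CALL; other /digit values are not
        return 0
    if mod == 0:
        rm = window[1] & 0x07
        if rm == 4:                      # SIB byte follows
            return 3 + (4 if window[2] == 0x25 else 0)
        if rm == 5:                      # RIP-relative disp32
            return 6
        return 2                         # plain [reg], no displacement
    if mod == 1:
        return 3                         # disp8
    if mod == 2:
        return 6                         # disp32
    return 2                             # mod == 3: register direct, e.g. call rax
```

For example, `e8 10 00 00 00` (`call +0x10`) is 5 bytes, `ff d0` (`call rax`) is 2, `ff 55 08` (`call [rbp+8]`) is 3, and `ff 14 25 xx xx xx xx` (`call [disp32]`) is 7.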
Source code in libdebug/architectures/amd64/amd64_call_utilities.py def get_call_and_skip_amount(self, opcode_window: bytes) -> tuple[bool, int]:\n \"\"\"Check if the current instruction is a call instruction and compute the instruction size.\"\"\"\n skip = self.compute_call_skip(opcode_window)\n return skip != 0, skip\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_call_utilities/#libdebug.architectures.amd64.amd64_call_utilities.Amd64CallUtilities.is_call","title":"is_call(opcode_window)","text":"Check if the current instruction is a call instruction.
Source code in libdebug/architectures/amd64/amd64_call_utilities.py def is_call(self, opcode_window: bytes) -> bool:\n \"\"\"Check if the current instruction is a call instruction.\"\"\"\n # Check for direct CALL (E8 xx xx xx xx)\n if opcode_window[0] == 0xE8:\n return True\n\n # Check for indirect CALL using ModR/M (FF /2)\n if opcode_window[0] == 0xFF:\n # Extract ModR/M byte\n modRM = opcode_window[1]\n reg = (modRM >> 3) & 0x07 # Middle three bits\n\n if reg == 2:\n return True\n\n return False\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_ptrace_register_holder/","title":"libdebug.architectures.amd64.amd64_ptrace_register_holder","text":""},{"location":"from_pydoc/generated/architectures/amd64/amd64_ptrace_register_holder/#libdebug.architectures.amd64.amd64_ptrace_register_holder.Amd64PtraceRegisterHolder","title":"Amd64PtraceRegisterHolder dataclass","text":" Bases: PtraceRegisterHolder
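On x86_64 the direct form is opcode `E8` followed by a 32-bit relative displacement, while every indirect form shares opcode `FF` with the ModR/M `reg` field equal to 2 (the `/2` in `FF /2`). A standalone sketch of the same classification; like the original it ignores prefix bytes such as REX, and the sample encodings are illustrative:

```python
def is_amd64_call(opcode_window: bytes) -> bool:
    """Mirror of the direct/indirect CALL checks."""
    if opcode_window[0] == 0xE8:          # CALL rel32
        return True
    if opcode_window[0] == 0xFF:          # CALL r/m64 when ModR/M.reg == 2
        return ((opcode_window[1] >> 3) & 0x07) == 2
    return False

call_rel32 = bytes([0xE8, 0x10, 0x00, 0x00, 0x00])  # call +0x10
call_rax = bytes([0xFF, 0xD0])                      # ff /2: mod=11, rm=rax
inc_rax = bytes([0xFF, 0xC0])                       # ff /0 is INC, not CALL
```

The `inc_rax` case shows why the `reg` field matters: `FF` alone is ambiguous, and only the `/2` digit selects CALL.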
A class that provides views and setters for the registers of an x86_64 process.
Source code inlibdebug/architectures/amd64/amd64_ptrace_register_holder.py @dataclass\nclass Amd64PtraceRegisterHolder(PtraceRegisterHolder):\n \"\"\"A class that provides views and setters for the registers of an x86_64 process.\"\"\"\n\n def provide_regs_class(self: Amd64PtraceRegisterHolder) -> type:\n \"\"\"Provide a class to hold the register accessors.\"\"\"\n return Amd64Registers\n\n def provide_regs(self: Amd64PtraceRegisterHolder) -> list[str]:\n \"\"\"Provide the list of registers, excluding the vector and fp registers.\"\"\"\n return AMD64_REGS\n\n def provide_vector_fp_regs(self: Amd64PtraceRegisterHolder) -> list[tuple[str]]:\n \"\"\"Provide the list of vector and floating point registers.\"\"\"\n return self._vector_fp_registers\n\n def provide_special_regs(self: Amd64PtraceRegisterHolder) -> list[str]:\n \"\"\"Provide the list of special registers, which are not intended for general-purpose use.\"\"\"\n return AMD64_SPECIAL_REGS\n\n def apply_on_regs(self: Amd64PtraceRegisterHolder, target: Amd64Registers, target_class: type) -> None:\n \"\"\"Apply the register accessors to the Amd64Registers class.\"\"\"\n target.register_file = self.register_file\n target._fp_register_file = self.fp_register_file\n\n # If the accessors are already defined, we don't need to redefine them\n if hasattr(target_class, \"rip\"):\n return\n\n self._vector_fp_registers = []\n\n # setup accessors\n for name in AMD64_GP_REGS:\n name_64 = \"r\" + name + \"x\"\n name_32 = \"e\" + name + \"x\"\n name_16 = name + \"x\"\n name_8l = name + \"l\"\n name_8h = name + \"h\"\n\n setattr(target_class, name_64, _get_property_64(name_64))\n setattr(target_class, name_32, _get_property_32(name_64))\n setattr(target_class, name_16, _get_property_16(name_64))\n setattr(target_class, name_8l, _get_property_8l(name_64))\n setattr(target_class, name_8h, _get_property_8h(name_64))\n\n for name in AMD64_BASE_REGS:\n name_64 = \"r\" + name\n name_32 = \"e\" + name\n name_16 = name\n name_8l = 
name + \"l\"\n\n setattr(target_class, name_64, _get_property_64(name_64))\n setattr(target_class, name_32, _get_property_32(name_64))\n setattr(target_class, name_16, _get_property_16(name_64))\n setattr(target_class, name_8l, _get_property_8l(name_64))\n\n for name in AMD64_EXT_REGS:\n name_64 = name\n name_32 = name + \"d\"\n name_16 = name + \"w\"\n name_8l = name + \"b\"\n\n setattr(target_class, name_64, _get_property_64(name_64))\n setattr(target_class, name_32, _get_property_32(name_64))\n setattr(target_class, name_16, _get_property_16(name_64))\n setattr(target_class, name_8l, _get_property_8l(name_64))\n\n for name in AMD64_SPECIAL_REGS:\n setattr(target_class, name, _get_property_64(name))\n\n # setup special registers\n target_class.rip = _get_property_64(\"rip\")\n\n # setup floating-point registers\n # see libdebug/cffi/ptrace_cffi_build.py for the possible values of fp_register_file.type\n self._handle_fp_legacy(target_class)\n\n match self.fp_register_file.type:\n case 0:\n self._handle_vector_512(target_class)\n case 1:\n self._handle_vector_896(target_class)\n case 2:\n self._handle_vector_2696(target_class)\n case _:\n raise NotImplementedError(\n f\"Floating-point register file type {self.fp_register_file.type} not available.\",\n )\n\n Amd64PtraceRegisterHolder._vector_fp_registers = self._vector_fp_registers\n\n def apply_on_thread(self: Amd64PtraceRegisterHolder, target: ThreadContext, target_class: type) -> None:\n \"\"\"Apply the register accessors to the thread class.\"\"\"\n target.register_file = self.register_file\n\n # If the accessors are already defined, we don't need to redefine them\n if hasattr(target_class, \"instruction_pointer\"):\n return\n\n # setup generic \"instruction_pointer\" property\n target_class.instruction_pointer = _get_property_64(\"rip\")\n\n # setup generic syscall properties\n target_class.syscall_number = _get_property_64(\"orig_rax\")\n target_class.syscall_return = _get_property_64(\"rax\")\n 
target_class.syscall_arg0 = _get_property_64(\"rdi\")\n target_class.syscall_arg1 = _get_property_64(\"rsi\")\n target_class.syscall_arg2 = _get_property_64(\"rdx\")\n target_class.syscall_arg3 = _get_property_64(\"r10\")\n target_class.syscall_arg4 = _get_property_64(\"r8\")\n target_class.syscall_arg5 = _get_property_64(\"r9\")\n\n def _handle_fp_legacy(self: Amd64PtraceRegisterHolder, target_class: type) -> None:\n \"\"\"Handle legacy mmx and st registers.\"\"\"\n for index in range(8):\n name_mm = f\"mm{index}\"\n setattr(target_class, name_mm, _get_property_fp_mmx(name_mm, index))\n\n name_st = f\"st{index}\"\n setattr(target_class, name_st, _get_property_fp_st(name_st, index))\n\n self._vector_fp_registers.append((name_mm, name_st))\n\n def _handle_vector_512(self: Amd64PtraceRegisterHolder, target_class: type) -> None:\n \"\"\"Handle the case where the xsave area is 512 bytes long, which means we just have the xmm registers.\"\"\"\n for index in range(16):\n name_xmm = f\"xmm{index}\"\n setattr(target_class, name_xmm, _get_property_fp_xmm0(name_xmm, index))\n self._vector_fp_registers.append((name_xmm,))\n\n def _handle_vector_896(self: Amd64PtraceRegisterHolder, target_class: type) -> None:\n \"\"\"Handle the case where the xsave area is 896 bytes long, which means we have the xmm and ymm registers.\"\"\"\n for index in range(16):\n name_xmm = f\"xmm{index}\"\n setattr(target_class, name_xmm, _get_property_fp_xmm0(name_xmm, index))\n\n name_ymm = f\"ymm{index}\"\n setattr(target_class, name_ymm, _get_property_fp_ymm0(name_ymm, index))\n\n self._vector_fp_registers.append((name_xmm, name_ymm))\n\n def _handle_vector_2696(self: Amd64PtraceRegisterHolder, target_class: type) -> None:\n \"\"\"Handle the case where the xsave area is 2696 bytes long, which means we have 32 zmm registers.\"\"\"\n for index in range(16):\n name_xmm = f\"xmm{index}\"\n setattr(target_class, name_xmm, _get_property_fp_xmm0(name_xmm, index))\n\n name_ymm = f\"ymm{index}\"\n 
setattr(target_class, name_ymm, _get_property_fp_ymm0(name_ymm, index))\n\n name_zmm = f\"zmm{index}\"\n setattr(target_class, name_zmm, _get_property_fp_zmm0(name_zmm, index))\n\n self._vector_fp_registers.append((name_xmm, name_ymm, name_zmm))\n\n for index in range(16):\n name_xmm = f\"xmm{index + 16}\"\n setattr(target_class, name_xmm, _get_property_fp_xmm1(name_xmm, index))\n\n name_ymm = f\"ymm{index + 16}\"\n setattr(target_class, name_ymm, _get_property_fp_ymm1(name_ymm, index))\n\n name_zmm = f\"zmm{index + 16}\"\n setattr(target_class, name_zmm, _get_property_fp_zmm1(name_zmm, index))\n\n self._vector_fp_registers.append((name_xmm, name_ymm, name_zmm))\n\n def cleanup(self: Amd64PtraceRegisterHolder) -> None:\n \"\"\"Clean up the register accessors from the Amd64Registers class.\"\"\"\n for attr_name, attr_value in list(Amd64Registers.__dict__.items()):\n if isinstance(attr_value, property):\n delattr(Amd64Registers, attr_name)\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_ptrace_register_holder/#libdebug.architectures.amd64.amd64_ptrace_register_holder.Amd64PtraceRegisterHolder._handle_fp_legacy","title":"_handle_fp_legacy(target_class)","text":"Handle legacy mmx and st registers.
Source code in libdebug/architectures/amd64/amd64_ptrace_register_holder.py def _handle_fp_legacy(self: Amd64PtraceRegisterHolder, target_class: type) -> None:\n \"\"\"Handle legacy mmx and st registers.\"\"\"\n for index in range(8):\n name_mm = f\"mm{index}\"\n setattr(target_class, name_mm, _get_property_fp_mmx(name_mm, index))\n\n name_st = f\"st{index}\"\n setattr(target_class, name_st, _get_property_fp_st(name_st, index))\n\n self._vector_fp_registers.append((name_mm, name_st))\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_ptrace_register_holder/#libdebug.architectures.amd64.amd64_ptrace_register_holder.Amd64PtraceRegisterHolder._handle_vector_2696","title":"_handle_vector_2696(target_class)","text":"Handle the case where the xsave area is 2696 bytes long, which means we have 32 zmm registers.
Source code inlibdebug/architectures/amd64/amd64_ptrace_register_holder.py def _handle_vector_2696(self: Amd64PtraceRegisterHolder, target_class: type) -> None:\n \"\"\"Handle the case where the xsave area is 2696 bytes long, which means we have 32 zmm registers.\"\"\"\n for index in range(16):\n name_xmm = f\"xmm{index}\"\n setattr(target_class, name_xmm, _get_property_fp_xmm0(name_xmm, index))\n\n name_ymm = f\"ymm{index}\"\n setattr(target_class, name_ymm, _get_property_fp_ymm0(name_ymm, index))\n\n name_zmm = f\"zmm{index}\"\n setattr(target_class, name_zmm, _get_property_fp_zmm0(name_zmm, index))\n\n self._vector_fp_registers.append((name_xmm, name_ymm, name_zmm))\n\n for index in range(16):\n name_xmm = f\"xmm{index + 16}\"\n setattr(target_class, name_xmm, _get_property_fp_xmm1(name_xmm, index))\n\n name_ymm = f\"ymm{index + 16}\"\n setattr(target_class, name_ymm, _get_property_fp_ymm1(name_ymm, index))\n\n name_zmm = f\"zmm{index + 16}\"\n setattr(target_class, name_zmm, _get_property_fp_zmm1(name_zmm, index))\n\n self._vector_fp_registers.append((name_xmm, name_ymm, name_zmm))\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_ptrace_register_holder/#libdebug.architectures.amd64.amd64_ptrace_register_holder.Amd64PtraceRegisterHolder._handle_vector_512","title":"_handle_vector_512(target_class)","text":"Handle the case where the xsave area is 512 bytes long, which means we just have the xmm registers.
Source code in libdebug/architectures/amd64/amd64_ptrace_register_holder.py def _handle_vector_512(self: Amd64PtraceRegisterHolder, target_class: type) -> None:\n \"\"\"Handle the case where the xsave area is 512 bytes long, which means we just have the xmm registers.\"\"\"\n for index in range(16):\n name_xmm = f\"xmm{index}\"\n setattr(target_class, name_xmm, _get_property_fp_xmm0(name_xmm, index))\n self._vector_fp_registers.append((name_xmm,))\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_ptrace_register_holder/#libdebug.architectures.amd64.amd64_ptrace_register_holder.Amd64PtraceRegisterHolder._handle_vector_896","title":"_handle_vector_896(target_class)","text":"Handle the case where the xsave area is 896 bytes long, which means we have the xmm and ymm registers.
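The three `_handle_vector_*` helpers differ only in which vector registers they expose for a given xsave-area size, following the docstrings above (512: xmm only; 896: xmm and ymm; 2696: 32 xmm/ymm/zmm triples). A minimal sketch of that dispatch, with plain name tuples standing in for the real property objects:

```python
# Toy sketch of the xsave-size dispatch performed in apply_on_regs.
# The type codes and register counts follow the handler docstrings;
# the returned tuples are illustrative names, not property accessors.

def vector_registers_for(xsave_type: int) -> list[tuple[str, ...]]:
    """Return the vector-register name tuples exposed for an xsave layout."""
    if xsave_type == 0:  # 512-byte area: xmm0..xmm15 only
        return [(f"xmm{i}",) for i in range(16)]
    if xsave_type == 1:  # 896-byte area: xmm and ymm views
        return [(f"xmm{i}", f"ymm{i}") for i in range(16)]
    if xsave_type == 2:  # 2696-byte area: 32 xmm/ymm/zmm triples
        return [(f"xmm{i}", f"ymm{i}", f"zmm{i}") for i in range(32)]
    raise NotImplementedError(
        f"Floating-point register file type {xsave_type} not available.",
    )

assert len(vector_registers_for(0)) == 16
assert vector_registers_for(1)[0] == ("xmm0", "ymm0")
assert vector_registers_for(2)[16] == ("xmm16", "ymm16", "zmm16")
```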
Source code in libdebug/architectures/amd64/amd64_ptrace_register_holder.py def _handle_vector_896(self: Amd64PtraceRegisterHolder, target_class: type) -> None:\n \"\"\"Handle the case where the xsave area is 896 bytes long, which means we have the xmm and ymm registers.\"\"\"\n for index in range(16):\n name_xmm = f\"xmm{index}\"\n setattr(target_class, name_xmm, _get_property_fp_xmm0(name_xmm, index))\n\n name_ymm = f\"ymm{index}\"\n setattr(target_class, name_ymm, _get_property_fp_ymm0(name_ymm, index))\n\n self._vector_fp_registers.append((name_xmm, name_ymm))\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_ptrace_register_holder/#libdebug.architectures.amd64.amd64_ptrace_register_holder.Amd64PtraceRegisterHolder.apply_on_regs","title":"apply_on_regs(target, target_class)","text":"Apply the register accessors to the Amd64Registers class.
Source code inlibdebug/architectures/amd64/amd64_ptrace_register_holder.py def apply_on_regs(self: Amd64PtraceRegisterHolder, target: Amd64Registers, target_class: type) -> None:\n \"\"\"Apply the register accessors to the Amd64Registers class.\"\"\"\n target.register_file = self.register_file\n target._fp_register_file = self.fp_register_file\n\n # If the accessors are already defined, we don't need to redefine them\n if hasattr(target_class, \"rip\"):\n return\n\n self._vector_fp_registers = []\n\n # setup accessors\n for name in AMD64_GP_REGS:\n name_64 = \"r\" + name + \"x\"\n name_32 = \"e\" + name + \"x\"\n name_16 = name + \"x\"\n name_8l = name + \"l\"\n name_8h = name + \"h\"\n\n setattr(target_class, name_64, _get_property_64(name_64))\n setattr(target_class, name_32, _get_property_32(name_64))\n setattr(target_class, name_16, _get_property_16(name_64))\n setattr(target_class, name_8l, _get_property_8l(name_64))\n setattr(target_class, name_8h, _get_property_8h(name_64))\n\n for name in AMD64_BASE_REGS:\n name_64 = \"r\" + name\n name_32 = \"e\" + name\n name_16 = name\n name_8l = name + \"l\"\n\n setattr(target_class, name_64, _get_property_64(name_64))\n setattr(target_class, name_32, _get_property_32(name_64))\n setattr(target_class, name_16, _get_property_16(name_64))\n setattr(target_class, name_8l, _get_property_8l(name_64))\n\n for name in AMD64_EXT_REGS:\n name_64 = name\n name_32 = name + \"d\"\n name_16 = name + \"w\"\n name_8l = name + \"b\"\n\n setattr(target_class, name_64, _get_property_64(name_64))\n setattr(target_class, name_32, _get_property_32(name_64))\n setattr(target_class, name_16, _get_property_16(name_64))\n setattr(target_class, name_8l, _get_property_8l(name_64))\n\n for name in AMD64_SPECIAL_REGS:\n setattr(target_class, name, _get_property_64(name))\n\n # setup special registers\n target_class.rip = _get_property_64(\"rip\")\n\n # setup floating-point registers\n # see libdebug/cffi/ptrace_cffi_build.py for the possible values 
of fp_register_file.type\n self._handle_fp_legacy(target_class)\n\n match self.fp_register_file.type:\n case 0:\n self._handle_vector_512(target_class)\n case 1:\n self._handle_vector_896(target_class)\n case 2:\n self._handle_vector_2696(target_class)\n case _:\n raise NotImplementedError(\n f\"Floating-point register file type {self.fp_register_file.type} not available.\",\n )\n\n Amd64PtraceRegisterHolder._vector_fp_registers = self._vector_fp_registers\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_ptrace_register_holder/#libdebug.architectures.amd64.amd64_ptrace_register_holder.Amd64PtraceRegisterHolder.apply_on_thread","title":"apply_on_thread(target, target_class)","text":"Apply the register accessors to the thread class.
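The accessors set up by apply_on_thread are properties installed on the *class*: each property factory closes over a register name, and every thread instance then reads and writes its own register_file. A self-contained sketch of that pattern (ToyRegisterFile and ToyThread are illustrative stand-ins, not library types):

```python
# Sketch of the class-level property pattern used by apply_on_thread:
# a factory closes over a register name and returns a property that
# dereferences the instance's register_file at access time.

class ToyRegisterFile:
    """Stand-in for the ptrace register structure."""
    def __init__(self):
        self.rip = 0x401000
        self.orig_rax = 60

def _get_property_64(name):
    def getter(self):
        return getattr(self.register_file, name)
    def setter(self, value):
        setattr(self.register_file, name, value)
    return property(getter, setter)

class ToyThread:
    pass

# Installed once on the class, shared by all instances.
ToyThread.instruction_pointer = _get_property_64("rip")
ToyThread.syscall_number = _get_property_64("orig_rax")

t = ToyThread()
t.register_file = ToyRegisterFile()
assert t.instruction_pointer == 0x401000
t.instruction_pointer = 0x401005      # writes through to the register file
assert t.register_file.rip == 0x401005
```

This is also why both apply_on_regs and apply_on_thread bail out early when the class already has the accessor: the properties only need to be installed once per class, not per thread.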
Source code inlibdebug/architectures/amd64/amd64_ptrace_register_holder.py def apply_on_thread(self: Amd64PtraceRegisterHolder, target: ThreadContext, target_class: type) -> None:\n \"\"\"Apply the register accessors to the thread class.\"\"\"\n target.register_file = self.register_file\n\n # If the accessors are already defined, we don't need to redefine them\n if hasattr(target_class, \"instruction_pointer\"):\n return\n\n # setup generic \"instruction_pointer\" property\n target_class.instruction_pointer = _get_property_64(\"rip\")\n\n # setup generic syscall properties\n target_class.syscall_number = _get_property_64(\"orig_rax\")\n target_class.syscall_return = _get_property_64(\"rax\")\n target_class.syscall_arg0 = _get_property_64(\"rdi\")\n target_class.syscall_arg1 = _get_property_64(\"rsi\")\n target_class.syscall_arg2 = _get_property_64(\"rdx\")\n target_class.syscall_arg3 = _get_property_64(\"r10\")\n target_class.syscall_arg4 = _get_property_64(\"r8\")\n target_class.syscall_arg5 = _get_property_64(\"r9\")\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_ptrace_register_holder/#libdebug.architectures.amd64.amd64_ptrace_register_holder.Amd64PtraceRegisterHolder.cleanup","title":"cleanup()","text":"Clean up the register accessors from the Amd64Registers class.
Source code in libdebug/architectures/amd64/amd64_ptrace_register_holder.py def cleanup(self: Amd64PtraceRegisterHolder) -> None:\n \"\"\"Clean up the register accessors from the Amd64Registers class.\"\"\"\n for attr_name, attr_value in list(Amd64Registers.__dict__.items()):\n if isinstance(attr_value, property):\n delattr(Amd64Registers, attr_name)\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_ptrace_register_holder/#libdebug.architectures.amd64.amd64_ptrace_register_holder.Amd64PtraceRegisterHolder.provide_regs","title":"provide_regs()","text":"Provide the list of registers, excluding the vector and fp registers.
Source code in libdebug/architectures/amd64/amd64_ptrace_register_holder.py def provide_regs(self: Amd64PtraceRegisterHolder) -> list[str]:\n \"\"\"Provide the list of registers, excluding the vector and fp registers.\"\"\"\n return AMD64_REGS\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_ptrace_register_holder/#libdebug.architectures.amd64.amd64_ptrace_register_holder.Amd64PtraceRegisterHolder.provide_regs_class","title":"provide_regs_class()","text":"Provide a class to hold the register accessors.
Source code in libdebug/architectures/amd64/amd64_ptrace_register_holder.py def provide_regs_class(self: Amd64PtraceRegisterHolder) -> type:\n \"\"\"Provide a class to hold the register accessors.\"\"\"\n return Amd64Registers\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_ptrace_register_holder/#libdebug.architectures.amd64.amd64_ptrace_register_holder.Amd64PtraceRegisterHolder.provide_special_regs","title":"provide_special_regs()","text":"Provide the list of special registers, which are not intended for general-purpose use.
Source code in libdebug/architectures/amd64/amd64_ptrace_register_holder.py def provide_special_regs(self: Amd64PtraceRegisterHolder) -> list[str]:\n \"\"\"Provide the list of special registers, which are not intended for general-purpose use.\"\"\"\n return AMD64_SPECIAL_REGS\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_ptrace_register_holder/#libdebug.architectures.amd64.amd64_ptrace_register_holder.Amd64PtraceRegisterHolder.provide_vector_fp_regs","title":"provide_vector_fp_regs()","text":"Provide the list of vector and floating point registers.
Source code in libdebug/architectures/amd64/amd64_ptrace_register_holder.py def provide_vector_fp_regs(self: Amd64PtraceRegisterHolder) -> list[tuple[str]]:\n \"\"\"Provide the list of vector and floating point registers.\"\"\"\n return self._vector_fp_registers\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_registers/","title":"libdebug.architectures.amd64.amd64_registers","text":""},{"location":"from_pydoc/generated/architectures/amd64/amd64_registers/#libdebug.architectures.amd64.amd64_registers.Amd64Registers","title":"Amd64Registers","text":" Bases: Registers
This class holds the state of the architectural-dependent registers of a process.
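The accessors installed on this class expose the standard x86-64 sub-register aliasing: eax is the low 32 bits of rax, ax the low 16, al the low 8, and ah bits 8-15. The masking those views imply can be sketched as follows (illustrative masks, not the library's own code):

```python
# x86-64 sub-register views of one 64-bit general-purpose register.
rax = 0x1122334455667788

eax = rax & 0xFFFFFFFF    # low 32 bits
ax = rax & 0xFFFF         # low 16 bits
al = rax & 0xFF           # low 8 bits
ah = (rax >> 8) & 0xFF    # bits 8..15

assert eax == 0x55667788
assert ax == 0x7788
assert al == 0x88
assert ah == 0x77
```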
Source code in libdebug/architectures/amd64/amd64_registers.py class Amd64Registers(Registers):\n \"\"\"This class holds the state of the architectural-dependent registers of a process.\"\"\"\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_stack_unwinder/","title":"libdebug.architectures.amd64.amd64_stack_unwinder","text":""},{"location":"from_pydoc/generated/architectures/amd64/amd64_stack_unwinder/#libdebug.architectures.amd64.amd64_stack_unwinder.Amd64StackUnwinder","title":"Amd64StackUnwinder","text":" Bases: StackUnwindingManager
Class that provides stack unwinding for the x86_64 architecture.
Source code inlibdebug/architectures/amd64/amd64_stack_unwinder.py class Amd64StackUnwinder(StackUnwindingManager):\n \"\"\"Class that provides stack unwinding for the x86_64 architecture.\"\"\"\n\n def unwind(self: Amd64StackUnwinder, target: ThreadContext | Snapshot) -> list:\n \"\"\"Unwind the stack of a process.\n\n Args:\n target (ThreadContext): The target ThreadContext.\n\n Returns:\n list: A list of return addresses.\n \"\"\"\n assert hasattr(target.regs, \"rip\")\n assert hasattr(target.regs, \"rbp\")\n\n current_rbp = target.regs.rbp\n stack_trace = [target.regs.rip]\n\n # Instead of isinstance, we check if the target has the maps attribute to avoid circular imports\n vmaps = target.maps if hasattr(target, \"maps\") else target._internal_debugger.debugging_interface.get_maps()\n\n while current_rbp:\n try:\n # Read the return address\n return_address = int.from_bytes(target.memory[current_rbp + 8, 8, \"absolute\"], sys.byteorder)\n\n if not any(vmap.start <= return_address < vmap.end for vmap in vmaps):\n break\n\n # Read the previous rbp and set it as the current one\n current_rbp = int.from_bytes(target.memory[current_rbp, 8, \"absolute\"], sys.byteorder)\n\n stack_trace.append(return_address)\n except (OSError, ValueError):\n break\n\n # If we are in the prologue of a function, we need to get the return address from the stack\n # using a slightly more complex method\n try:\n first_return_address = self.get_return_address(target, vmaps)\n\n if len(stack_trace) > 1:\n if first_return_address != stack_trace[1]:\n stack_trace.insert(1, first_return_address)\n else:\n stack_trace.append(first_return_address)\n except (OSError, ValueError):\n liblog.warning(\n \"Failed to get the return address. Check stack frame registers (e.g., base pointer). 
The stack trace may be incomplete.\",\n )\n\n return stack_trace\n\n def get_return_address(self: Amd64StackUnwinder, target: ThreadContext | Snapshot, vmaps: MemoryMapList[MemoryMap]) -> int:\n \"\"\"Get the return address of the current function.\n\n Args:\n target (ThreadContext): The target ThreadContext.\n vmaps (MemoryMapList[MemoryMap]): The memory maps of the process.\n\n Returns:\n int: The return address.\n \"\"\"\n instruction_window = target.memory[target.regs.rip, 4, \"absolute\"]\n\n # Check if the instruction window is a function preamble and handle each case\n return_address = None\n\n if self._preamble_state(instruction_window) == 0:\n return_address = target.memory[target.regs.rbp + 8, 8, \"absolute\"]\n elif self._preamble_state(instruction_window) == 1:\n return_address = target.memory[target.regs.rsp, 8, \"absolute\"]\n else:\n return_address = target.memory[target.regs.rsp + 8, 8, \"absolute\"]\n\n return_address = int.from_bytes(return_address, byteorder=\"little\")\n\n if not vmaps.filter(return_address):\n raise ValueError(\"Return address not in memory maps.\")\n\n return return_address\n\n def _preamble_state(self: Amd64StackUnwinder, instruction_window: bytes) -> int:\n \"\"\"Check if the instruction window is a function preamble and if so at what stage.\n\n Args:\n instruction_window (bytes): The instruction window.\n\n Returns:\n int: 0 if not a preamble, 1 if rbp has not been pushed yet, 2 otherwise\n \"\"\"\n preamble_state = 0\n\n # endbr64 and push rbp\n if b\"\\xf3\\x0f\\x1e\\xfa\" in instruction_window or b\"\\x55\" in instruction_window:\n preamble_state = 1\n # mov rbp, rsp\n elif b\"\\x48\\x89\\xe5\" in instruction_window:\n preamble_state = 2\n\n return preamble_state\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_stack_unwinder/#libdebug.architectures.amd64.amd64_stack_unwinder.Amd64StackUnwinder._preamble_state","title":"_preamble_state(instruction_window)","text":"Check if the instruction window is a 
function preamble and if so at what stage.
Parameters:

Name Type Description Default
instruction_window bytes The instruction window. required

Returns:

Name Type Description
int int 0 if not a preamble, 1 if rbp has not been pushed yet, 2 otherwise
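The three stages correspond to the byte patterns checked in the source: endbr64 or push rbp (stage 1), mov rbp, rsp (stage 2), anything else (stage 0). A quick illustration of that staging, mirroring the containment checks on the 4-byte window:

```python
# The byte patterns _preamble_state matches in the 4-byte window at rip.
ENDBR64 = b"\xf3\x0f\x1e\xfa"   # endbr64
PUSH_RBP = b"\x55"              # push rbp
MOV_RBP_RSP = b"\x48\x89\xe5"   # mov rbp, rsp

def preamble_state(window: bytes) -> int:
    if ENDBR64 in window or PUSH_RBP in window:
        return 1   # rbp not pushed yet
    if MOV_RBP_RSP in window:
        return 2   # rbp pushed, frame not fully set up
    return 0       # not a preamble

assert preamble_state(b"\xf3\x0f\x1e\xfa") == 1   # at endbr64
assert preamble_state(b"\x48\x89\xe5\x90") == 2   # at mov rbp, rsp
assert preamble_state(b"\x90\x90\x90\x90") == 0   # mid-function
```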
Source code inlibdebug/architectures/amd64/amd64_stack_unwinder.py def _preamble_state(self: Amd64StackUnwinder, instruction_window: bytes) -> int:\n \"\"\"Check if the instruction window is a function preamble and if so at what stage.\n\n Args:\n instruction_window (bytes): The instruction window.\n\n Returns:\n int: 0 if not a preamble, 1 if rbp has not been pushed yet, 2 otherwise\n \"\"\"\n preamble_state = 0\n\n # endbr64 and push rbp\n if b\"\\xf3\\x0f\\x1e\\xfa\" in instruction_window or b\"\\x55\" in instruction_window:\n preamble_state = 1\n # mov rbp, rsp\n elif b\"\\x48\\x89\\xe5\" in instruction_window:\n preamble_state = 2\n\n return preamble_state\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_stack_unwinder/#libdebug.architectures.amd64.amd64_stack_unwinder.Amd64StackUnwinder.get_return_address","title":"get_return_address(target, vmaps)","text":"Get the return address of the current function.
Parameters:

Name Type Description Default
target ThreadContext The target ThreadContext. required
vmaps MemoryMapList[MemoryMap] The memory maps of the process. required

Returns:

Name Type Description
int int The return address.
Source code inlibdebug/architectures/amd64/amd64_stack_unwinder.py def get_return_address(self: Amd64StackUnwinder, target: ThreadContext | Snapshot, vmaps: MemoryMapList[MemoryMap]) -> int:\n \"\"\"Get the return address of the current function.\n\n Args:\n target (ThreadContext): The target ThreadContext.\n vmaps (MemoryMapList[MemoryMap]): The memory maps of the process.\n\n Returns:\n int: The return address.\n \"\"\"\n instruction_window = target.memory[target.regs.rip, 4, \"absolute\"]\n\n # Check if the instruction window is a function preamble and handle each case\n return_address = None\n\n if self._preamble_state(instruction_window) == 0:\n return_address = target.memory[target.regs.rbp + 8, 8, \"absolute\"]\n elif self._preamble_state(instruction_window) == 1:\n return_address = target.memory[target.regs.rsp, 8, \"absolute\"]\n else:\n return_address = target.memory[target.regs.rsp + 8, 8, \"absolute\"]\n\n return_address = int.from_bytes(return_address, byteorder=\"little\")\n\n if not vmaps.filter(return_address):\n raise ValueError(\"Return address not in memory maps.\")\n\n return return_address\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_stack_unwinder/#libdebug.architectures.amd64.amd64_stack_unwinder.Amd64StackUnwinder.unwind","title":"unwind(target)","text":"Unwind the stack of a process.
Parameters:

Name Type Description Default
target ThreadContext The target ThreadContext. required

Returns:

Name Type Description
list list A list of return addresses.
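The core of the unwinder is the classic frame-pointer walk: each frame stores the caller's saved rbp at [rbp] and the return address at [rbp + 8], and a saved rbp of zero ends the chain. A toy walk over a dict standing in for process memory (all addresses and values are made up; the real loop also validates each return address against the memory maps):

```python
# Toy frame-pointer unwind: mem maps addresses to 64-bit words.
# Per frame: mem[rbp] = caller's saved rbp, mem[rbp + 8] = return address.
mem = {
    0x7FFC000: 0x7FFC100, 0x7FFC008: 0x401150,   # innermost frame
    0x7FFC100: 0x7FFC200, 0x7FFC108: 0x401250,
    0x7FFC200: 0x0,       0x7FFC208: 0x401350,   # saved rbp 0 ends the walk
}

def unwind(rip: int, rbp: int) -> list[int]:
    trace = [rip]                    # current instruction pointer first
    while rbp:
        trace.append(mem[rbp + 8])   # return address of this frame
        rbp = mem[rbp]               # hop to the caller's frame
    return trace

assert unwind(0x401000, 0x7FFC000) == [0x401000, 0x401150, 0x401250, 0x401350]
```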
Source code inlibdebug/architectures/amd64/amd64_stack_unwinder.py def unwind(self: Amd64StackUnwinder, target: ThreadContext | Snapshot) -> list:\n \"\"\"Unwind the stack of a process.\n\n Args:\n target (ThreadContext): The target ThreadContext.\n\n Returns:\n list: A list of return addresses.\n \"\"\"\n assert hasattr(target.regs, \"rip\")\n assert hasattr(target.regs, \"rbp\")\n\n current_rbp = target.regs.rbp\n stack_trace = [target.regs.rip]\n\n # Instead of isinstance, we check if the target has the maps attribute to avoid circular imports\n vmaps = target.maps if hasattr(target, \"maps\") else target._internal_debugger.debugging_interface.get_maps()\n\n while current_rbp:\n try:\n # Read the return address\n return_address = int.from_bytes(target.memory[current_rbp + 8, 8, \"absolute\"], sys.byteorder)\n\n if not any(vmap.start <= return_address < vmap.end for vmap in vmaps):\n break\n\n # Read the previous rbp and set it as the current one\n current_rbp = int.from_bytes(target.memory[current_rbp, 8, \"absolute\"], sys.byteorder)\n\n stack_trace.append(return_address)\n except (OSError, ValueError):\n break\n\n # If we are in the prologue of a function, we need to get the return address from the stack\n # using a slightly more complex method\n try:\n first_return_address = self.get_return_address(target, vmaps)\n\n if len(stack_trace) > 1:\n if first_return_address != stack_trace[1]:\n stack_trace.insert(1, first_return_address)\n else:\n stack_trace.append(first_return_address)\n except (OSError, ValueError):\n liblog.warning(\n \"Failed to get the return address. Check stack frame registers (e.g., base pointer). 
The stack trace may be incomplete.\",\n )\n\n return stack_trace\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_thread_context/","title":"libdebug.architectures.amd64.amd64_thread_context","text":""},{"location":"from_pydoc/generated/architectures/amd64/amd64_thread_context/#libdebug.architectures.amd64.amd64_thread_context.Amd64ThreadContext","title":"Amd64ThreadContext","text":" Bases: ThreadContext
This object represents a thread in the context of the target amd64 process. It holds information about the thread's state, registers and stack.
Source code in libdebug/architectures/amd64/amd64_thread_context.py class Amd64ThreadContext(ThreadContext):\n \"\"\"This object represents a thread in the context of the target amd64 process. It holds information about the thread's state, registers and stack.\"\"\"\n\n def __init__(self: Amd64ThreadContext, thread_id: int, registers: Amd64PtraceRegisterHolder) -> None:\n \"\"\"Initialize the thread context with the given thread id.\"\"\"\n super().__init__(thread_id, registers)\n\n # Register the thread properties\n self._register_holder.apply_on_thread(self, Amd64ThreadContext)\n"},{"location":"from_pydoc/generated/architectures/amd64/amd64_thread_context/#libdebug.architectures.amd64.amd64_thread_context.Amd64ThreadContext.__init__","title":"__init__(thread_id, registers)","text":"Initialize the thread context with the given thread id.
Source code in libdebug/architectures/amd64/amd64_thread_context.py def __init__(self: Amd64ThreadContext, thread_id: int, registers: Amd64PtraceRegisterHolder) -> None:\n \"\"\"Initialize the thread context with the given thread id.\"\"\"\n super().__init__(thread_id, registers)\n\n # Register the thread properties\n self._register_holder.apply_on_thread(self, Amd64ThreadContext)\n"},{"location":"from_pydoc/generated/architectures/amd64/compat/i386_over_amd64_ptrace_register_holder/","title":"libdebug.architectures.amd64.compat.i386_over_amd64_ptrace_register_holder","text":""},{"location":"from_pydoc/generated/architectures/amd64/compat/i386_over_amd64_ptrace_register_holder/#libdebug.architectures.amd64.compat.i386_over_amd64_ptrace_register_holder.I386OverAMD64PtraceRegisterHolder","title":"I386OverAMD64PtraceRegisterHolder dataclass","text":" Bases: I386PtraceRegisterHolder
A class that provides views and setters for the registers of an x86_64 process.
Source code inlibdebug/architectures/amd64/compat/i386_over_amd64_ptrace_register_holder.py @dataclass\nclass I386OverAMD64PtraceRegisterHolder(I386PtraceRegisterHolder):\n \"\"\"A class that provides views and setters for the registers of an x86_64 process.\"\"\"\n\n def provide_regs_class(self: I386OverAMD64PtraceRegisterHolder) -> type:\n \"\"\"Provide a class to hold the register accessors.\"\"\"\n return I386OverAMD64Registers\n\n def apply_on_regs(\n self: I386OverAMD64PtraceRegisterHolder,\n target: I386OverAMD64Registers,\n target_class: type,\n ) -> None:\n \"\"\"Apply the register accessors to the I386Registers class.\"\"\"\n target.register_file = self.register_file\n target._fp_register_file = self.fp_register_file\n\n # If the accessors are already defined, we don't need to redefine them\n if hasattr(target_class, \"eip\"):\n return\n\n self._vector_fp_registers = []\n\n # setup accessors\n for name in I386_GP_REGS:\n name_64 = \"r\" + name + \"x\"\n name_32 = \"e\" + name + \"x\"\n name_16 = name + \"x\"\n name_8l = name + \"l\"\n name_8h = name + \"h\"\n\n setattr(target_class, name_32, _get_property_32(name_64))\n setattr(target_class, name_16, _get_property_16(name_64))\n setattr(target_class, name_8l, _get_property_8l(name_64))\n setattr(target_class, name_8h, _get_property_8h(name_64))\n\n for name in I386_BASE_REGS:\n name_64 = \"r\" + name\n name_32 = \"e\" + name\n name_16 = name\n name_8l = name + \"l\"\n\n setattr(target_class, name_32, _get_property_32(name_64))\n setattr(target_class, name_16, _get_property_16(name_64))\n setattr(target_class, name_8l, _get_property_8l(name_64))\n\n for name in I386_SPECIAL_REGS:\n setattr(target_class, name, _get_property_32(name))\n\n # setup special registers\n target_class.eip = _get_property_32(\"rip\")\n\n self._handle_fp_legacy(target_class)\n\n match self.fp_register_file.type:\n case 0:\n self._handle_vector_512(target_class)\n case 1:\n self._handle_vector_896(target_class)\n case 2:\n 
self._handle_vector_2696(target_class)\n case _:\n raise NotImplementedError(\n f\"Floating-point register file type {self.fp_register_file.type} not available.\",\n )\n\n I386OverAMD64PtraceRegisterHolder._vector_fp_registers = self._vector_fp_registers\n\n def apply_on_thread(self: I386OverAMD64PtraceRegisterHolder, target: ThreadContext, target_class: type) -> None:\n \"\"\"Apply the register accessors to the thread class.\"\"\"\n target.register_file = self.register_file\n\n # If the accessors are already defined, we don't need to redefine them\n if hasattr(target_class, \"instruction_pointer\"):\n return\n\n # setup generic \"instruction_pointer\" property\n target_class.instruction_pointer = _get_property_32(\"rip\")\n\n # setup generic syscall properties\n target_class.syscall_number = _get_property_32(\"orig_rax\")\n target_class.syscall_return = _get_property_32(\"rax\")\n target_class.syscall_arg0 = _get_property_32(\"rbx\")\n target_class.syscall_arg1 = _get_property_32(\"rcx\")\n target_class.syscall_arg2 = _get_property_32(\"rdx\")\n target_class.syscall_arg3 = _get_property_32(\"rsi\")\n target_class.syscall_arg4 = _get_property_32(\"rdi\")\n target_class.syscall_arg5 = _get_property_32(\"rbp\")\n\n def cleanup(self: I386OverAMD64PtraceRegisterHolder) -> None:\n \"\"\"Clean up the register accessors from the I386OverAMD64Registers class.\"\"\"\n for attr_name, attr_value in list(I386OverAMD64Registers.__dict__.items()):\n if isinstance(attr_value, property):\n delattr(I386OverAMD64Registers, attr_name)\n"},{"location":"from_pydoc/generated/architectures/amd64/compat/i386_over_amd64_ptrace_register_holder/#libdebug.architectures.amd64.compat.i386_over_amd64_ptrace_register_holder.I386OverAMD64PtraceRegisterHolder.apply_on_regs","title":"apply_on_regs(target, target_class)","text":"Apply the register accessors to the I386Registers class.
Source code inlibdebug/architectures/amd64/compat/i386_over_amd64_ptrace_register_holder.py def apply_on_regs(\n self: I386OverAMD64PtraceRegisterHolder,\n target: I386OverAMD64Registers,\n target_class: type,\n) -> None:\n \"\"\"Apply the register accessors to the I386Registers class.\"\"\"\n target.register_file = self.register_file\n target._fp_register_file = self.fp_register_file\n\n # If the accessors are already defined, we don't need to redefine them\n if hasattr(target_class, \"eip\"):\n return\n\n self._vector_fp_registers = []\n\n # setup accessors\n for name in I386_GP_REGS:\n name_64 = \"r\" + name + \"x\"\n name_32 = \"e\" + name + \"x\"\n name_16 = name + \"x\"\n name_8l = name + \"l\"\n name_8h = name + \"h\"\n\n setattr(target_class, name_32, _get_property_32(name_64))\n setattr(target_class, name_16, _get_property_16(name_64))\n setattr(target_class, name_8l, _get_property_8l(name_64))\n setattr(target_class, name_8h, _get_property_8h(name_64))\n\n for name in I386_BASE_REGS:\n name_64 = \"r\" + name\n name_32 = \"e\" + name\n name_16 = name\n name_8l = name + \"l\"\n\n setattr(target_class, name_32, _get_property_32(name_64))\n setattr(target_class, name_16, _get_property_16(name_64))\n setattr(target_class, name_8l, _get_property_8l(name_64))\n\n for name in I386_SPECIAL_REGS:\n setattr(target_class, name, _get_property_32(name))\n\n # setup special registers\n target_class.eip = _get_property_32(\"rip\")\n\n self._handle_fp_legacy(target_class)\n\n match self.fp_register_file.type:\n case 0:\n self._handle_vector_512(target_class)\n case 1:\n self._handle_vector_896(target_class)\n case 2:\n self._handle_vector_2696(target_class)\n case _:\n raise NotImplementedError(\n f\"Floating-point register file type {self.fp_register_file.type} not available.\",\n )\n\n I386OverAMD64PtraceRegisterHolder._vector_fp_registers = 
self._vector_fp_registers\n"},{"location":"from_pydoc/generated/architectures/amd64/compat/i386_over_amd64_ptrace_register_holder/#libdebug.architectures.amd64.compat.i386_over_amd64_ptrace_register_holder.I386OverAMD64PtraceRegisterHolder.apply_on_thread","title":"apply_on_thread(target, target_class)","text":"Apply the register accessors to the thread class.
Source code inlibdebug/architectures/amd64/compat/i386_over_amd64_ptrace_register_holder.py def apply_on_thread(self: I386OverAMD64PtraceRegisterHolder, target: ThreadContext, target_class: type) -> None:\n \"\"\"Apply the register accessors to the thread class.\"\"\"\n target.register_file = self.register_file\n\n # If the accessors are already defined, we don't need to redefine them\n if hasattr(target_class, \"instruction_pointer\"):\n return\n\n # setup generic \"instruction_pointer\" property\n target_class.instruction_pointer = _get_property_32(\"rip\")\n\n # setup generic syscall properties\n target_class.syscall_number = _get_property_32(\"orig_rax\")\n target_class.syscall_return = _get_property_32(\"rax\")\n target_class.syscall_arg0 = _get_property_32(\"rbx\")\n target_class.syscall_arg1 = _get_property_32(\"rcx\")\n target_class.syscall_arg2 = _get_property_32(\"rdx\")\n target_class.syscall_arg3 = _get_property_32(\"rsi\")\n target_class.syscall_arg4 = _get_property_32(\"rdi\")\n target_class.syscall_arg5 = _get_property_32(\"rbp\")\n"},{"location":"from_pydoc/generated/architectures/amd64/compat/i386_over_amd64_ptrace_register_holder/#libdebug.architectures.amd64.compat.i386_over_amd64_ptrace_register_holder.I386OverAMD64PtraceRegisterHolder.cleanup","title":"cleanup()","text":"Clean up the register accessors from the I386OverAMD64Registers class.
Source code in libdebug/architectures/amd64/compat/i386_over_amd64_ptrace_register_holder.py def cleanup(self: I386OverAMD64PtraceRegisterHolder) -> None:\n \"\"\"Clean up the register accessors from the I386OverAMD64Registers class.\"\"\"\n for attr_name, attr_value in list(I386OverAMD64Registers.__dict__.items()):\n if isinstance(attr_value, property):\n delattr(I386OverAMD64Registers, attr_name)\n"},{"location":"from_pydoc/generated/architectures/amd64/compat/i386_over_amd64_ptrace_register_holder/#libdebug.architectures.amd64.compat.i386_over_amd64_ptrace_register_holder.I386OverAMD64PtraceRegisterHolder.provide_regs_class","title":"provide_regs_class()","text":"Provide a class to hold the register accessors.
Source code in libdebug/architectures/amd64/compat/i386_over_amd64_ptrace_register_holder.py def provide_regs_class(self: I386OverAMD64PtraceRegisterHolder) -> type:\n \"\"\"Provide a class to hold the register accessors.\"\"\"\n return I386OverAMD64Registers\n"},{"location":"from_pydoc/generated/architectures/amd64/compat/i386_over_amd64_registers/","title":"libdebug.architectures.amd64.compat.i386_over_amd64_registers","text":""},{"location":"from_pydoc/generated/architectures/amd64/compat/i386_over_amd64_registers/#libdebug.architectures.amd64.compat.i386_over_amd64_registers.I386OverAMD64Registers","title":"I386OverAMD64Registers","text":" Bases: Registers
This class holds the state of the architectural-dependent registers of a process.
Source code in libdebug/architectures/amd64/compat/i386_over_amd64_registers.py class I386OverAMD64Registers(Registers):\n \"\"\"This class holds the state of the architectural-dependent registers of a process.\"\"\"\n"},{"location":"from_pydoc/generated/architectures/amd64/compat/i386_over_amd64_thread_context/","title":"libdebug.architectures.amd64.compat.i386_over_amd64_thread_context","text":""},{"location":"from_pydoc/generated/architectures/amd64/compat/i386_over_amd64_thread_context/#libdebug.architectures.amd64.compat.i386_over_amd64_thread_context.I386OverAMD64ThreadContext","title":"I386OverAMD64ThreadContext","text":" Bases: ThreadContext
This object represents a thread in the context of the target i386 process when running on amd64. It holds information about the thread's state, registers and stack.
Source code inlibdebug/architectures/amd64/compat/i386_over_amd64_thread_context.py class I386OverAMD64ThreadContext(ThreadContext):\n \"\"\"This object represents a thread in the context of the target i386 process when running on amd64. It holds information about the thread's state, registers and stack.\"\"\"\n\n def __init__(\n self: I386OverAMD64ThreadContext,\n thread_id: int,\n registers: I386OverAMD64PtraceRegisterHolder,\n ) -> None:\n \"\"\"Initialize the thread context with the given thread id.\"\"\"\n super().__init__(thread_id, registers)\n\n # Register the thread properties\n self._register_holder.apply_on_thread(self, I386OverAMD64ThreadContext)\n"},{"location":"from_pydoc/generated/architectures/amd64/compat/i386_over_amd64_thread_context/#libdebug.architectures.amd64.compat.i386_over_amd64_thread_context.I386OverAMD64ThreadContext.__init__","title":"__init__(thread_id, registers)","text":"Initialize the thread context with the given thread id.
Source code in libdebug/architectures/amd64/compat/i386_over_amd64_thread_context.py def __init__(\n self: I386OverAMD64ThreadContext,\n thread_id: int,\n registers: I386OverAMD64PtraceRegisterHolder,\n) -> None:\n \"\"\"Initialize the thread context with the given thread id.\"\"\"\n super().__init__(thread_id, registers)\n\n # Register the thread properties\n self._register_holder.apply_on_thread(self, I386OverAMD64ThreadContext)\n"},{"location":"from_pydoc/generated/architectures/i386/i386_breakpoint_validator/","title":"libdebug.architectures.i386.i386_breakpoint_validator","text":""},{"location":"from_pydoc/generated/architectures/i386/i386_breakpoint_validator/#libdebug.architectures.i386.i386_breakpoint_validator.validate_breakpoint_i386","title":"validate_breakpoint_i386(bp)","text":"Validate a hardware breakpoint for the i386 architecture.
Source code in libdebug/architectures/i386/i386_breakpoint_validator.py def validate_breakpoint_i386(bp: Breakpoint) -> None:\n \"\"\"Validate a hardware breakpoint for the i386 architecture.\"\"\"\n if bp.condition not in [\"w\", \"rw\", \"x\"]:\n raise ValueError(\"Invalid condition for watchpoints. Supported conditions are 'w', 'rw', 'x'.\")\n\n if bp.length not in [1, 2, 4]:\n raise ValueError(\"Invalid length for watchpoints. Supported lengths are 1, 2, 4.\")\n"},{"location":"from_pydoc/generated/architectures/i386/i386_call_utilities/","title":"libdebug.architectures.i386.i386_call_utilities","text":""},{"location":"from_pydoc/generated/architectures/i386/i386_call_utilities/#libdebug.architectures.i386.i386_call_utilities.I386CallUtilities","title":"I386CallUtilities","text":" Bases: CallUtilitiesManager
Class that provides call utilities for the i386 architecture.
Source code inlibdebug/architectures/i386/i386_call_utilities.py class I386CallUtilities(CallUtilitiesManager):\n \"\"\"Class that provides call utilities for the i386 architecture.\"\"\"\n\n def is_call(self: I386CallUtilities, opcode_window: bytes) -> bool:\n \"\"\"Check if the current instruction is a call instruction.\"\"\"\n # Check for direct CALL (E8 xx xx xx xx)\n if opcode_window[0] == 0xE8:\n return True\n\n # Check for indirect CALL using ModR/M (FF /2)\n if opcode_window[0] == 0xFF:\n # Extract ModR/M byte\n modRM = opcode_window[1]\n reg = (modRM >> 3) & 0x07\n\n if reg == 2:\n return True\n\n return False\n\n def compute_call_skip(self: I386CallUtilities, opcode_window: bytes) -> int:\n \"\"\"Compute the instruction size of the current call instruction.\"\"\"\n # Check for direct CALL (E8 xx xx xx xx)\n if opcode_window[0] == 0xE8:\n return 5\n\n # Check for indirect CALL using ModR/M (FF /2)\n if opcode_window[0] == 0xFF:\n # Extract ModR/M byte\n modRM = opcode_window[1]\n mod = (modRM >> 6) & 0x03\n reg = (modRM >> 3) & 0x07\n\n if reg == 2:\n if mod == 0:\n if (modRM & 0x07) == 4:\n return 3 + (4 if opcode_window[2] == 0x25 else 0)\n elif (modRM & 0x07) == 5:\n return 6\n return 2\n elif mod == 1:\n return 3\n elif mod == 2:\n return 6\n elif mod == 3:\n return 2\n\n return 0\n\n def get_call_and_skip_amount(self: I386CallUtilities, opcode_window: bytes) -> tuple[bool, int]:\n skip = self.compute_call_skip(opcode_window)\n return skip != 0, skip\n"},{"location":"from_pydoc/generated/architectures/i386/i386_call_utilities/#libdebug.architectures.i386.i386_call_utilities.I386CallUtilities.compute_call_skip","title":"compute_call_skip(opcode_window)","text":"Compute the instruction size of the current call instruction.
Source code inlibdebug/architectures/i386/i386_call_utilities.py def compute_call_skip(self: I386CallUtilities, opcode_window: bytes) -> int:\n \"\"\"Compute the instruction size of the current call instruction.\"\"\"\n # Check for direct CALL (E8 xx xx xx xx)\n if opcode_window[0] == 0xE8:\n return 5\n\n # Check for indirect CALL using ModR/M (FF /2)\n if opcode_window[0] == 0xFF:\n # Extract ModR/M byte\n modRM = opcode_window[1]\n mod = (modRM >> 6) & 0x03\n reg = (modRM >> 3) & 0x07\n\n if reg == 2:\n if mod == 0:\n if (modRM & 0x07) == 4:\n return 3 + (4 if opcode_window[2] == 0x25 else 0)\n elif (modRM & 0x07) == 5:\n return 6\n return 2\n elif mod == 1:\n return 3\n elif mod == 2:\n return 6\n elif mod == 3:\n return 2\n\n return 0\n"},{"location":"from_pydoc/generated/architectures/i386/i386_call_utilities/#libdebug.architectures.i386.i386_call_utilities.I386CallUtilities.is_call","title":"is_call(opcode_window)","text":"Check if the current instruction is a call instruction.
Source code in libdebug/architectures/i386/i386_call_utilities.py def is_call(self: I386CallUtilities, opcode_window: bytes) -> bool:\n \"\"\"Check if the current instruction is a call instruction.\"\"\"\n # Check for direct CALL (E8 xx xx xx xx)\n if opcode_window[0] == 0xE8:\n return True\n\n # Check for indirect CALL using ModR/M (FF /2)\n if opcode_window[0] == 0xFF:\n # Extract ModR/M byte\n modRM = opcode_window[1]\n reg = (modRM >> 3) & 0x07\n\n if reg == 2:\n return True\n\n return False\n"},{"location":"from_pydoc/generated/architectures/i386/i386_ptrace_register_holder/","title":"libdebug.architectures.i386.i386_ptrace_register_holder","text":""},{"location":"from_pydoc/generated/architectures/i386/i386_ptrace_register_holder/#libdebug.architectures.i386.i386_ptrace_register_holder.I386PtraceRegisterHolder","title":"I386PtraceRegisterHolder dataclass","text":" Bases: PtraceRegisterHolder
A class that provides views and setters for the registers of an i386 process.
Source code inlibdebug/architectures/i386/i386_ptrace_register_holder.py @dataclass\nclass I386PtraceRegisterHolder(PtraceRegisterHolder):\n \"\"\"A class that provides views and setters for the registers of an i386 process.\"\"\"\n\n def provide_regs_class(self: I386PtraceRegisterHolder) -> type:\n \"\"\"Provide a class to hold the register accessors.\"\"\"\n return I386Registers\n\n def provide_regs(self: I386PtraceRegisterHolder) -> list[str]:\n \"\"\"Provide the list of registers, excluding the vector and fp registers.\"\"\"\n return I386_REGS\n\n def provide_vector_fp_regs(self: I386PtraceRegisterHolder) -> list[str]:\n \"\"\"Provide the list of vector and floating point registers.\"\"\"\n return self._vector_fp_registers\n\n def provide_special_regs(self: I386PtraceRegisterHolder) -> list[str]:\n \"\"\"Provide the list of special registers, which are not intended for general-purpose use.\"\"\"\n return I386_SPECIAL_REGS\n\n def apply_on_regs(self: I386PtraceRegisterHolder, target: I386Registers, target_class: type) -> None:\n \"\"\"Apply the register accessors to the I386Registers class.\"\"\"\n target.register_file = self.register_file\n target._fp_register_file = self.fp_register_file\n\n # If the accessors are already defined, we don't need to redefine them\n if hasattr(target_class, \"eip\"):\n return\n\n self._vector_fp_registers = []\n\n # setup accessors\n for name in I386_GP_REGS:\n name_32 = \"e\" + name + \"x\"\n name_16 = name + \"x\"\n name_8l = name + \"l\"\n name_8h = name + \"h\"\n\n setattr(target_class, name_32, _get_property_32(name_32))\n setattr(target_class, name_16, _get_property_16(name_32))\n setattr(target_class, name_8l, _get_property_8l(name_32))\n setattr(target_class, name_8h, _get_property_8h(name_32))\n\n for name in I386_BASE_REGS:\n name_32 = \"e\" + name\n name_16 = name\n name_8l = name + \"l\"\n\n setattr(target_class, name_32, _get_property_32(name_32))\n setattr(target_class, name_16, _get_property_16(name_32))\n 
setattr(target_class, name_8l, _get_property_8l(name_32))\n\n for name in I386_SPECIAL_REGS:\n setattr(target_class, name, _get_property_32(name))\n\n # setup special registers\n target_class.eip = _get_property_32(\"eip\")\n\n self._handle_fp_legacy(target_class)\n\n match self.fp_register_file.type:\n case 0:\n self._handle_vector_512(target_class)\n case 1:\n self._handle_vector_896(target_class)\n case 2:\n self._handle_vector_2696(target_class)\n case _:\n raise NotImplementedError(\n f\"Floating-point register file type {self.fp_register_file.type} not available.\",\n )\n\n I386PtraceRegisterHolder._vector_fp_registers = self._vector_fp_registers\n\n def apply_on_thread(self: I386PtraceRegisterHolder, target: ThreadContext, target_class: type) -> None:\n \"\"\"Apply the register accessors to the thread class.\"\"\"\n target.register_file = self.register_file\n\n # If the accessors are already defined, we don't need to redefine them\n if hasattr(target_class, \"instruction_pointer\"):\n return\n\n # setup generic \"instruction_pointer\" property\n target_class.instruction_pointer = _get_property_32(\"eip\")\n\n # setup generic syscall properties\n target_class.syscall_number = _get_property_32(\"orig_eax\")\n target_class.syscall_return = _get_property_32(\"eax\")\n target_class.syscall_arg0 = _get_property_32(\"ebx\")\n target_class.syscall_arg1 = _get_property_32(\"ecx\")\n target_class.syscall_arg2 = _get_property_32(\"edx\")\n target_class.syscall_arg3 = _get_property_32(\"esi\")\n target_class.syscall_arg4 = _get_property_32(\"edi\")\n target_class.syscall_arg5 = _get_property_32(\"ebp\")\n\n def _handle_fp_legacy(self: I386PtraceRegisterHolder, target_class: type) -> None:\n \"\"\"Handle legacy mmx and st registers.\"\"\"\n for index in range(8):\n name_mm = f\"mm{index}\"\n setattr(target_class, name_mm, _get_property_fp_mmx(name_mm, index))\n\n name_st = f\"st{index}\"\n setattr(target_class, name_st, _get_property_fp_st(name_st, index))\n\n 
self._vector_fp_registers.append((name_mm, name_st))\n\n def _handle_vector_512(self: I386PtraceRegisterHolder, target_class: type) -> None:\n \"\"\"Handle the case where the xsave area is 512 bytes long, which means we just have the xmm registers.\"\"\"\n # i386 only gets 8 registers\n for index in range(8):\n name_xmm = f\"xmm{index}\"\n setattr(target_class, name_xmm, _get_property_fp_xmm0(name_xmm, index))\n self._vector_fp_registers.append((name_xmm,))\n\n def _handle_vector_896(self: I386PtraceRegisterHolder, target_class: type) -> None:\n \"\"\"Handle the case where the xsave area is 896 bytes long, which means we have the xmm and ymm registers.\"\"\"\n # i386 only gets 8 registers\n for index in range(8):\n name_xmm = f\"xmm{index}\"\n setattr(target_class, name_xmm, _get_property_fp_xmm0(name_xmm, index))\n\n name_ymm = f\"ymm{index}\"\n setattr(target_class, name_ymm, _get_property_fp_ymm0(name_ymm, index))\n\n self._vector_fp_registers.append((name_xmm, name_ymm))\n\n def _handle_vector_2696(self: I386PtraceRegisterHolder, target_class: type) -> None:\n \"\"\"Handle the case where the xsave area is 2696 bytes long, which means we have 32 zmm registers.\"\"\"\n # i386 only gets 8 registers\n for index in range(8):\n name_xmm = f\"xmm{index}\"\n setattr(target_class, name_xmm, _get_property_fp_xmm0(name_xmm, index))\n\n name_ymm = f\"ymm{index}\"\n setattr(target_class, name_ymm, _get_property_fp_ymm0(name_ymm, index))\n\n name_zmm = f\"zmm{index}\"\n setattr(target_class, name_zmm, _get_property_fp_zmm0(name_zmm, index))\n\n self._vector_fp_registers.append((name_xmm, name_ymm, name_zmm))\n\n def cleanup(self: I386PtraceRegisterHolder) -> None:\n \"\"\"Clean up the register accessors from the class.\"\"\"\n for attr_name, attr_value in list(I386Registers.__dict__.items()):\n if isinstance(attr_value, property):\n delattr(I386Registers, 
attr_name)\n"},{"location":"from_pydoc/generated/architectures/i386/i386_ptrace_register_holder/#libdebug.architectures.i386.i386_ptrace_register_holder.I386PtraceRegisterHolder._handle_fp_legacy","title":"_handle_fp_legacy(target_class)","text":"Handle legacy mmx and st registers.
Source code in libdebug/architectures/i386/i386_ptrace_register_holder.py def _handle_fp_legacy(self: I386PtraceRegisterHolder, target_class: type) -> None:\n \"\"\"Handle legacy mmx and st registers.\"\"\"\n for index in range(8):\n name_mm = f\"mm{index}\"\n setattr(target_class, name_mm, _get_property_fp_mmx(name_mm, index))\n\n name_st = f\"st{index}\"\n setattr(target_class, name_st, _get_property_fp_st(name_st, index))\n\n self._vector_fp_registers.append((name_mm, name_st))\n"},{"location":"from_pydoc/generated/architectures/i386/i386_ptrace_register_holder/#libdebug.architectures.i386.i386_ptrace_register_holder.I386PtraceRegisterHolder._handle_vector_2696","title":"_handle_vector_2696(target_class)","text":"Handle the case where the xsave area is 2696 bytes long, which means we have 32 zmm registers.
Source code in libdebug/architectures/i386/i386_ptrace_register_holder.py def _handle_vector_2696(self: I386PtraceRegisterHolder, target_class: type) -> None:\n \"\"\"Handle the case where the xsave area is 2696 bytes long, which means we have 32 zmm registers.\"\"\"\n # i386 only gets 8 registers\n for index in range(8):\n name_xmm = f\"xmm{index}\"\n setattr(target_class, name_xmm, _get_property_fp_xmm0(name_xmm, index))\n\n name_ymm = f\"ymm{index}\"\n setattr(target_class, name_ymm, _get_property_fp_ymm0(name_ymm, index))\n\n name_zmm = f\"zmm{index}\"\n setattr(target_class, name_zmm, _get_property_fp_zmm0(name_zmm, index))\n\n self._vector_fp_registers.append((name_xmm, name_ymm, name_zmm))\n"},{"location":"from_pydoc/generated/architectures/i386/i386_ptrace_register_holder/#libdebug.architectures.i386.i386_ptrace_register_holder.I386PtraceRegisterHolder._handle_vector_512","title":"_handle_vector_512(target_class)","text":"Handle the case where the xsave area is 512 bytes long, which means we just have the xmm registers.
Source code in libdebug/architectures/i386/i386_ptrace_register_holder.py def _handle_vector_512(self: I386PtraceRegisterHolder, target_class: type) -> None:\n \"\"\"Handle the case where the xsave area is 512 bytes long, which means we just have the xmm registers.\"\"\"\n # i386 only gets 8 registers\n for index in range(8):\n name_xmm = f\"xmm{index}\"\n setattr(target_class, name_xmm, _get_property_fp_xmm0(name_xmm, index))\n self._vector_fp_registers.append((name_xmm,))\n"},{"location":"from_pydoc/generated/architectures/i386/i386_ptrace_register_holder/#libdebug.architectures.i386.i386_ptrace_register_holder.I386PtraceRegisterHolder._handle_vector_896","title":"_handle_vector_896(target_class)","text":"Handle the case where the xsave area is 896 bytes long, which means we have the xmm and ymm registers.
Source code in libdebug/architectures/i386/i386_ptrace_register_holder.py def _handle_vector_896(self: I386PtraceRegisterHolder, target_class: type) -> None:\n \"\"\"Handle the case where the xsave area is 896 bytes long, which means we have the xmm and ymm registers.\"\"\"\n # i386 only gets 8 registers\n for index in range(8):\n name_xmm = f\"xmm{index}\"\n setattr(target_class, name_xmm, _get_property_fp_xmm0(name_xmm, index))\n\n name_ymm = f\"ymm{index}\"\n setattr(target_class, name_ymm, _get_property_fp_ymm0(name_ymm, index))\n\n self._vector_fp_registers.append((name_xmm, name_ymm))\n"},{"location":"from_pydoc/generated/architectures/i386/i386_ptrace_register_holder/#libdebug.architectures.i386.i386_ptrace_register_holder.I386PtraceRegisterHolder.apply_on_regs","title":"apply_on_regs(target, target_class)","text":"Apply the register accessors to the I386Registers class.
Source code inlibdebug/architectures/i386/i386_ptrace_register_holder.py def apply_on_regs(self: I386PtraceRegisterHolder, target: I386Registers, target_class: type) -> None:\n \"\"\"Apply the register accessors to the I386Registers class.\"\"\"\n target.register_file = self.register_file\n target._fp_register_file = self.fp_register_file\n\n # If the accessors are already defined, we don't need to redefine them\n if hasattr(target_class, \"eip\"):\n return\n\n self._vector_fp_registers = []\n\n # setup accessors\n for name in I386_GP_REGS:\n name_32 = \"e\" + name + \"x\"\n name_16 = name + \"x\"\n name_8l = name + \"l\"\n name_8h = name + \"h\"\n\n setattr(target_class, name_32, _get_property_32(name_32))\n setattr(target_class, name_16, _get_property_16(name_32))\n setattr(target_class, name_8l, _get_property_8l(name_32))\n setattr(target_class, name_8h, _get_property_8h(name_32))\n\n for name in I386_BASE_REGS:\n name_32 = \"e\" + name\n name_16 = name\n name_8l = name + \"l\"\n\n setattr(target_class, name_32, _get_property_32(name_32))\n setattr(target_class, name_16, _get_property_16(name_32))\n setattr(target_class, name_8l, _get_property_8l(name_32))\n\n for name in I386_SPECIAL_REGS:\n setattr(target_class, name, _get_property_32(name))\n\n # setup special registers\n target_class.eip = _get_property_32(\"eip\")\n\n self._handle_fp_legacy(target_class)\n\n match self.fp_register_file.type:\n case 0:\n self._handle_vector_512(target_class)\n case 1:\n self._handle_vector_896(target_class)\n case 2:\n self._handle_vector_2696(target_class)\n case _:\n raise NotImplementedError(\n f\"Floating-point register file type {self.fp_register_file.type} not available.\",\n )\n\n I386PtraceRegisterHolder._vector_fp_registers = self._vector_fp_registers\n"},{"location":"from_pydoc/generated/architectures/i386/i386_ptrace_register_holder/#libdebug.architectures.i386.i386_ptrace_register_holder.I386PtraceRegisterHolder.apply_on_thread","title":"apply_on_thread(target, 
target_class)","text":"Apply the register accessors to the thread class.
Source code inlibdebug/architectures/i386/i386_ptrace_register_holder.py def apply_on_thread(self: I386PtraceRegisterHolder, target: ThreadContext, target_class: type) -> None:\n \"\"\"Apply the register accessors to the thread class.\"\"\"\n target.register_file = self.register_file\n\n # If the accessors are already defined, we don't need to redefine them\n if hasattr(target_class, \"instruction_pointer\"):\n return\n\n # setup generic \"instruction_pointer\" property\n target_class.instruction_pointer = _get_property_32(\"eip\")\n\n # setup generic syscall properties\n target_class.syscall_number = _get_property_32(\"orig_eax\")\n target_class.syscall_return = _get_property_32(\"eax\")\n target_class.syscall_arg0 = _get_property_32(\"ebx\")\n target_class.syscall_arg1 = _get_property_32(\"ecx\")\n target_class.syscall_arg2 = _get_property_32(\"edx\")\n target_class.syscall_arg3 = _get_property_32(\"esi\")\n target_class.syscall_arg4 = _get_property_32(\"edi\")\n target_class.syscall_arg5 = _get_property_32(\"ebp\")\n"},{"location":"from_pydoc/generated/architectures/i386/i386_ptrace_register_holder/#libdebug.architectures.i386.i386_ptrace_register_holder.I386PtraceRegisterHolder.cleanup","title":"cleanup()","text":"Clean up the register accessors from the class.
Source code in libdebug/architectures/i386/i386_ptrace_register_holder.py def cleanup(self: I386PtraceRegisterHolder) -> None:\n \"\"\"Clean up the register accessors from the class.\"\"\"\n for attr_name, attr_value in list(I386Registers.__dict__.items()):\n if isinstance(attr_value, property):\n delattr(I386Registers, attr_name)\n"},{"location":"from_pydoc/generated/architectures/i386/i386_ptrace_register_holder/#libdebug.architectures.i386.i386_ptrace_register_holder.I386PtraceRegisterHolder.provide_regs","title":"provide_regs()","text":"Provide the list of registers, excluding the vector and fp registers.
Source code in libdebug/architectures/i386/i386_ptrace_register_holder.py def provide_regs(self: I386PtraceRegisterHolder) -> list[str]:\n \"\"\"Provide the list of registers, excluding the vector and fp registers.\"\"\"\n return I386_REGS\n"},{"location":"from_pydoc/generated/architectures/i386/i386_ptrace_register_holder/#libdebug.architectures.i386.i386_ptrace_register_holder.I386PtraceRegisterHolder.provide_regs_class","title":"provide_regs_class()","text":"Provide a class to hold the register accessors.
Source code in libdebug/architectures/i386/i386_ptrace_register_holder.py def provide_regs_class(self: I386PtraceRegisterHolder) -> type:\n \"\"\"Provide a class to hold the register accessors.\"\"\"\n return I386Registers\n"},{"location":"from_pydoc/generated/architectures/i386/i386_ptrace_register_holder/#libdebug.architectures.i386.i386_ptrace_register_holder.I386PtraceRegisterHolder.provide_special_regs","title":"provide_special_regs()","text":"Provide the list of special registers, which are not intended for general-purpose use.
Source code in libdebug/architectures/i386/i386_ptrace_register_holder.py def provide_special_regs(self: I386PtraceRegisterHolder) -> list[str]:\n \"\"\"Provide the list of special registers, which are not intended for general-purpose use.\"\"\"\n return I386_SPECIAL_REGS\n"},{"location":"from_pydoc/generated/architectures/i386/i386_ptrace_register_holder/#libdebug.architectures.i386.i386_ptrace_register_holder.I386PtraceRegisterHolder.provide_vector_fp_regs","title":"provide_vector_fp_regs()","text":"Provide the list of vector and floating point registers.
Source code in libdebug/architectures/i386/i386_ptrace_register_holder.py def provide_vector_fp_regs(self: I386PtraceRegisterHolder) -> list[str]:\n \"\"\"Provide the list of vector and floating point registers.\"\"\"\n return self._vector_fp_registers\n"},{"location":"from_pydoc/generated/architectures/i386/i386_registers/","title":"libdebug.architectures.i386.i386_registers","text":""},{"location":"from_pydoc/generated/architectures/i386/i386_registers/#libdebug.architectures.i386.i386_registers.I386Registers","title":"I386Registers","text":" Bases: Registers
This class holds the state of the architectural-dependent registers of a process.
Source code in libdebug/architectures/i386/i386_registers.py class I386Registers(Registers):\n \"\"\"This class holds the state of the architectural-dependent registers of a process.\"\"\"\n"},{"location":"from_pydoc/generated/architectures/i386/i386_stack_unwinder/","title":"libdebug.architectures.i386.i386_stack_unwinder","text":""},{"location":"from_pydoc/generated/architectures/i386/i386_stack_unwinder/#libdebug.architectures.i386.i386_stack_unwinder.I386StackUnwinder","title":"I386StackUnwinder","text":" Bases: StackUnwindingManager
Class that provides stack unwinding for the i386 architecture.
Source code inlibdebug/architectures/i386/i386_stack_unwinder.py class I386StackUnwinder(StackUnwindingManager):\n \"\"\"Class that provides stack unwinding for the i386 architecture.\"\"\"\n\n def unwind(self: I386StackUnwinder, target: ThreadContext | Snapshot) -> list:\n \"\"\"Unwind the stack of a process.\n\n Args:\n target (ThreadContext): The target ThreadContext.\n\n Returns:\n list: A list of return addresses.\n \"\"\"\n assert hasattr(target.regs, \"eip\")\n assert hasattr(target.regs, \"ebp\")\n\n current_ebp = target.regs.ebp\n stack_trace = [target.regs.eip]\n\n # Instead of isinstance, we check if the target has the maps attribute to avoid circular imports\n vmaps = target.maps if hasattr(target, \"maps\") else target._internal_debugger.debugging_interface.get_maps()\n\n while current_ebp:\n try:\n # Read the return address\n return_address = int.from_bytes(target.memory[current_ebp + 4, 4], byteorder=\"little\")\n\n if not any(vmap.start <= return_address < vmap.end for vmap in vmaps):\n break\n\n # Read the previous ebp and set it as the current one\n current_ebp = int.from_bytes(target.memory[current_ebp, 4], byteorder=\"little\")\n\n stack_trace.append(return_address)\n except (OSError, ValueError):\n break\n\n # If we are in the prologue of a function, we need to get the return address from the stack\n # using a slightly more complex method\n try:\n first_return_address = self.get_return_address(target, vmaps)\n\n if len(stack_trace) > 1:\n if first_return_address != stack_trace[1]:\n stack_trace.insert(1, first_return_address)\n else:\n stack_trace.append(first_return_address)\n except (OSError, ValueError):\n liblog.warning(\n \"Failed to get the return address from the stack. Check stack frame registers (e.g., base pointer). 
The stack trace may be incomplete.\",\n )\n\n return stack_trace\n\n def get_return_address(self: I386StackUnwinder, target: ThreadContext | Snapshot, vmaps: MemoryMapList[MemoryMap]) -> int:\n \"\"\"Get the return address of the current function.\n\n Args:\n target (ThreadContext): The target ThreadContext.\n vmaps (list[MemoryMap]): The memory maps of the process.\n\n Returns:\n int: The return address.\n \"\"\"\n instruction_window = target.memory[target.regs.eip, 4]\n\n # Check if the instruction window is a function preamble and handle each case\n return_address = None\n\n if self._preamble_state(instruction_window) == 0:\n return_address = target.memory[target.regs.ebp + 4, 4]\n elif self._preamble_state(instruction_window) == 1:\n return_address = target.memory[target.regs.esp, 4]\n else:\n return_address = target.memory[target.regs.esp + 4, 4]\n\n return_address = int.from_bytes(return_address, byteorder=\"little\")\n\n if not vmaps.filter(return_address):\n raise ValueError(\"Return address is not in any memory map.\")\n\n return return_address\n\n def _preamble_state(self: I386StackUnwinder, instruction_window: bytes) -> int:\n \"\"\"Check if the instruction window is a function preamble and, if so, at what stage.\n\n Args:\n instruction_window (bytes): The instruction window.\n\n Returns:\n int: 0 if not a preamble, 1 if ebp has not been pushed yet, 2 otherwise\n \"\"\"\n preamble_state = 0\n\n # endbr32 and push ebp\n if b\"\\xf3\\x0f\\x1e\\xfb\" in instruction_window or b\"\\x55\" in instruction_window:\n preamble_state = 1\n\n # mov ebp, esp\n elif b\"\\x89\\xe5\" in instruction_window:\n preamble_state = 2\n\n return preamble_state\n"},{"location":"from_pydoc/generated/architectures/i386/i386_stack_unwinder/#libdebug.architectures.i386.i386_stack_unwinder.I386StackUnwinder._preamble_state","title":"_preamble_state(instruction_window)","text":"Check if the instruction window is a function preamble and, if so, at what stage.
Parameters:
Name Type Description Default
instruction_window bytes The instruction window. required
Returns:
Name Type Description
int int 0 if not a preamble, 1 if ebp has not been pushed yet, 2 otherwise
Source code in libdebug/architectures/i386/i386_stack_unwinder.py def _preamble_state(self: I386StackUnwinder, instruction_window: bytes) -> int:\n \"\"\"Check if the instruction window is a function preamble and, if so, at what stage.\n\n Args:\n instruction_window (bytes): The instruction window.\n\n Returns:\n int: 0 if not a preamble, 1 if ebp has not been pushed yet, 2 otherwise\n \"\"\"\n preamble_state = 0\n\n # endbr32 and push ebp\n if b\"\\xf3\\x0f\\x1e\\xfb\" in instruction_window or b\"\\x55\" in instruction_window:\n preamble_state = 1\n\n # mov ebp, esp\n elif b\"\\x89\\xe5\" in instruction_window:\n preamble_state = 2\n\n return preamble_state\n"},{"location":"from_pydoc/generated/architectures/i386/i386_stack_unwinder/#libdebug.architectures.i386.i386_stack_unwinder.I386StackUnwinder.get_return_address","title":"get_return_address(target, vmaps)","text":"Get the return address of the current function.
Parameters:
Name Type Description Default
target ThreadContext The target ThreadContext. required
vmaps list[MemoryMap] The memory maps of the process. required
Returns:
Name Type Description
int int The return address.
Source code in libdebug/architectures/i386/i386_stack_unwinder.py def get_return_address(self: I386StackUnwinder, target: ThreadContext | Snapshot, vmaps: MemoryMapList[MemoryMap]) -> int:\n \"\"\"Get the return address of the current function.\n\n Args:\n target (ThreadContext): The target ThreadContext.\n vmaps (list[MemoryMap]): The memory maps of the process.\n\n Returns:\n int: The return address.\n \"\"\"\n instruction_window = target.memory[target.regs.eip, 4]\n\n # Check if the instruction window is a function preamble and handle each case\n return_address = None\n\n if self._preamble_state(instruction_window) == 0:\n return_address = target.memory[target.regs.ebp + 4, 4]\n elif self._preamble_state(instruction_window) == 1:\n return_address = target.memory[target.regs.esp, 4]\n else:\n return_address = target.memory[target.regs.esp + 4, 4]\n\n return_address = int.from_bytes(return_address, byteorder=\"little\")\n\n if not vmaps.filter(return_address):\n raise ValueError(\"Return address is not in any memory map.\")\n\n return return_address\n"},{"location":"from_pydoc/generated/architectures/i386/i386_stack_unwinder/#libdebug.architectures.i386.i386_stack_unwinder.I386StackUnwinder.unwind","title":"unwind(target)","text":"Unwind the stack of a process.
Parameters:
Name Type Description Default
target ThreadContext The target ThreadContext. required
Returns:
Name Type Description
list list A list of return addresses.
Source code inlibdebug/architectures/i386/i386_stack_unwinder.py def unwind(self: I386StackUnwinder, target: ThreadContext | Snapshot) -> list:\n \"\"\"Unwind the stack of a process.\n\n Args:\n target (ThreadContext): The target ThreadContext.\n\n Returns:\n list: A list of return addresses.\n \"\"\"\n assert hasattr(target.regs, \"eip\")\n assert hasattr(target.regs, \"ebp\")\n\n current_ebp = target.regs.ebp\n stack_trace = [target.regs.eip]\n\n # Instead of isinstance, we check if the target has the maps attribute to avoid circular imports\n vmaps = target.maps if hasattr(target, \"maps\") else target._internal_debugger.debugging_interface.get_maps()\n\n while current_ebp:\n try:\n # Read the return address\n return_address = int.from_bytes(target.memory[current_ebp + 4, 4], byteorder=\"little\")\n\n if not any(vmap.start <= return_address < vmap.end for vmap in vmaps):\n break\n\n # Read the previous ebp and set it as the current one\n current_ebp = int.from_bytes(target.memory[current_ebp, 4], byteorder=\"little\")\n\n stack_trace.append(return_address)\n except (OSError, ValueError):\n break\n\n # If we are in the prologue of a function, we need to get the return address from the stack\n # using a slightly more complex method\n try:\n first_return_address = self.get_return_address(target, vmaps)\n\n if len(stack_trace) > 1:\n if first_return_address != stack_trace[1]:\n stack_trace.insert(1, first_return_address)\n else:\n stack_trace.append(first_return_address)\n except (OSError, ValueError):\n liblog.warning(\n \"Failed to get the return address from the stack. Check stack frame registers (e.g., base pointer). 
The stack trace may be incomplete.\",\n )\n\n return stack_trace\n"},{"location":"from_pydoc/generated/architectures/i386/i386_thread_context/","title":"libdebug.architectures.i386.i386_thread_context","text":""},{"location":"from_pydoc/generated/architectures/i386/i386_thread_context/#libdebug.architectures.i386.i386_thread_context.I386ThreadContext","title":"I386ThreadContext","text":" Bases: ThreadContext
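The frame-pointer walk performed by `unwind` above can be sketched against a toy memory map. This is a minimal illustration of the ebp-chain technique, not libdebug's implementation: the `memory` dict and all addresses below are invented for the example, and an unmapped address stands in for the `vmaps` validity check.

```python
# Toy sketch of i386 frame-pointer unwinding: each frame stores the saved
# ebp at [ebp] and the return address at [ebp + 4].
def unwind_ebp_chain(memory: dict[int, int], eip: int, ebp: int) -> list[int]:
    """Walk the saved-ebp chain, collecting return addresses."""
    trace = [eip]
    while ebp:
        ret = memory.get(ebp + 4)
        if ret is None:           # address not mapped: stop, like the vmaps check
            break
        trace.append(ret)
        ebp = memory.get(ebp, 0)  # follow the saved ebp to the previous frame
    return trace

# Two stacked frames: inner frame at 0x1000, outer at 0x2000.
mem = {0x1000: 0x2000, 0x1004: 0x8048123,  # inner: saved ebp, return address
       0x2000: 0x0,    0x2004: 0x80480AB}  # outer: chain ends at ebp == 0
print(unwind_ebp_chain(mem, 0x8048200, 0x1000))
```

The loop terminates either when the saved ebp is zero (the conventional end of the chain) or when a read would leave mapped memory, mirroring the two exit conditions in the real unwinder.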
This object represents a thread in the context of the target i386 process. It holds information about the thread's state, registers and stack.
Source code inlibdebug/architectures/i386/i386_thread_context.py class I386ThreadContext(ThreadContext):\n \"\"\"This object represents a thread in the context of the target i386 process. It holds information about the thread's state, registers and stack.\"\"\"\n\n def __init__(self: I386ThreadContext, thread_id: int, registers: I386PtraceRegisterHolder) -> None:\n \"\"\"Initialize the thread context with the given thread id.\"\"\"\n super().__init__(thread_id, registers)\n\n # Register the thread properties\n self._register_holder.apply_on_thread(self, I386ThreadContext)\n"},{"location":"from_pydoc/generated/architectures/i386/i386_thread_context/#libdebug.architectures.i386.i386_thread_context.I386ThreadContext.__init__","title":"__init__(thread_id, registers)","text":"Initialize the thread context with the given thread id.
Source code inlibdebug/architectures/i386/i386_thread_context.py def __init__(self: I386ThreadContext, thread_id: int, registers: I386PtraceRegisterHolder) -> None:\n \"\"\"Initialize the thread context with the given thread id.\"\"\"\n super().__init__(thread_id, registers)\n\n # Register the thread properties\n self._register_holder.apply_on_thread(self, I386ThreadContext)\n"},{"location":"from_pydoc/generated/builtin/antidebug_syscall_handler/","title":"libdebug.builtin.antidebug_syscall_handler","text":""},{"location":"from_pydoc/generated/builtin/antidebug_syscall_handler/#libdebug.builtin.antidebug_syscall_handler.on_enter_ptrace","title":"on_enter_ptrace(t, handler)","text":"Callback for ptrace syscall onenter.
Source code inlibdebug/builtin/antidebug_syscall_handler.py def on_enter_ptrace(t: ThreadContext, handler: SyscallHandler) -> None:\n \"\"\"Callback for ptrace syscall onenter.\"\"\"\n handler._command = t.syscall_arg0\n\n command = Commands(t.syscall_arg0)\n liblog.debugger(f\"entered ptrace syscall with request: {command.name}\")\n"},{"location":"from_pydoc/generated/builtin/antidebug_syscall_handler/#libdebug.builtin.antidebug_syscall_handler.on_exit_ptrace","title":"on_exit_ptrace(t, handler)","text":"Callback for ptrace syscall onexit.
Source code inlibdebug/builtin/antidebug_syscall_handler.py def on_exit_ptrace(t: ThreadContext, handler: SyscallHandler) -> None:\n \"\"\"Callback for ptrace syscall onexit.\"\"\"\n if handler._command is None:\n liblog.error(\"ptrace onexit called without corresponding onenter. This should not happen.\")\n return\n\n match handler._command:\n case Commands.PTRACE_TRACEME:\n if not handler._traceme_called:\n handler._traceme_called = True\n t.syscall_return = 0\n case _:\n liblog.error(f\"ptrace syscall with request {handler._command} not supported\")\n"},{"location":"from_pydoc/generated/builtin/pretty_print_syscall_handler/","title":"libdebug.builtin.pretty_print_syscall_handler","text":""},{"location":"from_pydoc/generated/builtin/pretty_print_syscall_handler/#libdebug.builtin.pretty_print_syscall_handler.pprint_on_enter","title":"pprint_on_enter(t, syscall_number, **kwargs)","text":"Function that will be called when a syscall is entered in pretty print mode.
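The anti-debug countermeasure above, intercepting `ptrace(PTRACE_TRACEME, ...)` on exit and faking a successful return, can be exercised in isolation with stand-in objects. The `SimpleNamespace` stubs below are illustrative substitutes for libdebug's `ThreadContext` and `SyscallHandler`, not real library types:

```python
from types import SimpleNamespace

PTRACE_TRACEME = 0  # the PTRACE_TRACEME request number on Linux

def fake_traceme_on_exit(t, handler):
    """Mimic the on-exit callback: make the first PTRACE_TRACEME appear to succeed."""
    if handler.command == PTRACE_TRACEME and not handler.traceme_called:
        handler.traceme_called = True
        t.syscall_return = 0  # the tracee observes success instead of an error

# Under a debugger, the real syscall would fail; the handler rewrites the result.
thread = SimpleNamespace(syscall_return=-1)
handler = SimpleNamespace(command=PTRACE_TRACEME, traceme_called=False)
fake_traceme_on_exit(thread, handler)
print(thread.syscall_return)
```

Only the first PTRACE_TRACEME is rewritten (the `traceme_called` flag), matching the handler shown above.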
Parameters:
Name Type Description Default
t ThreadContext the thread context. required
syscall_number int the syscall number. required
**kwargs bool the keyword arguments.
{} Source code in libdebug/builtin/pretty_print_syscall_handler.py def pprint_on_enter(t: ThreadContext, syscall_number: int, **kwargs: int) -> None:\n \"\"\"Function that will be called when a syscall is entered in pretty print mode.\n\n Args:\n t (ThreadContext): the thread context.\n syscall_number (int): the syscall number.\n **kwargs (bool): the keyword arguments.\n \"\"\"\n syscall_name = resolve_syscall_name(t._internal_debugger.arch, syscall_number)\n syscall_args = resolve_syscall_arguments(t._internal_debugger.arch, syscall_number)\n\n values = [\n t.syscall_arg0,\n t.syscall_arg1,\n t.syscall_arg2,\n t.syscall_arg3,\n t.syscall_arg4,\n t.syscall_arg5,\n ]\n\n # Print the thread id\n header = f\"{ANSIColors.BOLD}{t.tid}{ANSIColors.RESET} \"\n\n if \"old_args\" in kwargs:\n old_args = kwargs[\"old_args\"]\n entries = [\n f\"{arg} = {ANSIColors.BRIGHT_YELLOW}0x{value:x}{ANSIColors.DEFAULT_COLOR}\"\n if old_value == value\n else f\"{arg} = {ANSIColors.STRIKE}{ANSIColors.BRIGHT_YELLOW}0x{old_value:x}{ANSIColors.RESET} {ANSIColors.BRIGHT_YELLOW}0x{value:x}{ANSIColors.DEFAULT_COLOR}\"\n for arg, value, old_value in zip(syscall_args, values, old_args, strict=False)\n if arg is not None\n ]\n else:\n entries = [\n f\"{arg} = {ANSIColors.BRIGHT_YELLOW}0x{value:x}{ANSIColors.DEFAULT_COLOR}\"\n for arg, value in zip(syscall_args, values, strict=False)\n if arg is not None\n ]\n\n hijacked = kwargs.get(\"hijacked\", False)\n user_handled = kwargs.get(\"callback\", False)\n hijacker = kwargs.get(\"hijacker\", None)\n if hijacked:\n print(\n f\"{header}{ANSIColors.RED}(hijacked) {ANSIColors.STRIKE}{ANSIColors.BLUE}{syscall_name}{ANSIColors.DEFAULT_COLOR}({', '.join(entries)}){ANSIColors.RESET}\",\n )\n elif user_handled:\n print(\n f\"{header}{ANSIColors.RED}(callback) {ANSIColors.BLUE}{syscall_name}{ANSIColors.DEFAULT_COLOR}({', '.join(entries)}) = \",\n end=\"\",\n )\n elif hijacker:\n print(\n f\"{header}{ANSIColors.RED}(executed) 
{ANSIColors.BLUE}{syscall_name}{ANSIColors.DEFAULT_COLOR}({', '.join(entries)}) = \",\n end=\"\",\n )\n else:\n print(\n f\"{header}{ANSIColors.BLUE}{syscall_name}{ANSIColors.DEFAULT_COLOR}({', '.join(entries)}) = \",\n end=\"\",\n )\n"},{"location":"from_pydoc/generated/builtin/pretty_print_syscall_handler/#libdebug.builtin.pretty_print_syscall_handler.pprint_on_exit","title":"pprint_on_exit(syscall_return)","text":"Function that will be called when a syscall is exited in pretty print mode.
Parameters:
Name Type Description Default
syscall_return int | tuple[int, int] the syscall return value.
required Source code inlibdebug/builtin/pretty_print_syscall_handler.py def pprint_on_exit(syscall_return: int | tuple[int, int]) -> None:\n \"\"\"Function that will be called when a syscall is exited in pretty print mode.\n\n Args:\n syscall_return (int | list[int]): the syscall return value.\n \"\"\"\n if isinstance(syscall_return, tuple):\n print(\n f\"{ANSIColors.YELLOW}{ANSIColors.STRIKE}0x{syscall_return[0]:x}{ANSIColors.RESET} {ANSIColors.YELLOW}0x{syscall_return[1]:x}{ANSIColors.RESET}\",\n )\n else:\n print(f\"{ANSIColors.YELLOW}0x{syscall_return:x}{ANSIColors.RESET}\")\n"},{"location":"from_pydoc/generated/commlink/buffer_data/","title":"libdebug.commlink.buffer_data","text":""},{"location":"from_pydoc/generated/commlink/buffer_data/#libdebug.commlink.buffer_data.BufferData","title":"BufferData","text":"Class that represents a buffer to store data coming from stdout and stderr.
Source code inlibdebug/commlink/buffer_data.py class BufferData:\n \"\"\"Class that represents a buffer to store data coming from stdout and stderr.\"\"\"\n\n def __init__(self: BufferData, data: bytes) -> None:\n \"\"\"Initializes the BufferData object.\"\"\"\n self.data = data\n\n def clear(self: BufferData) -> None:\n \"\"\"Clears the buffer.\"\"\"\n self.data = b\"\"\n\n def get_data(self: BufferData) -> bytes:\n \"\"\"Returns the data stored in the buffer.\"\"\"\n return self.data\n\n def append(self, data: bytes) -> None:\n \"\"\"Appends data to the buffer.\"\"\"\n self.data += data\n\n def overwrite(self, data: bytes) -> None:\n \"\"\"Overwrites the buffer with the given data.\"\"\"\n self.data = data\n\n def find(self: BufferData, pattern: bytes) -> int:\n \"\"\"Finds the first occurrence of the given pattern in the buffer.\"\"\"\n return self.data.find(pattern)\n\n def __len__(self: BufferData) -> int:\n \"\"\"Returns the length of the buffer.\"\"\"\n return len(self.data)\n\n def __repr__(self: BufferData) -> str:\n \"\"\"Returns a string representation of the buffer.\"\"\"\n return self.data.__repr__()\n\n def __getitem__(self: BufferData, key: int) -> bytes:\n \"\"\"Returns the item at the given index.\"\"\"\n return self.data[key]\n"},{"location":"from_pydoc/generated/commlink/buffer_data/#libdebug.commlink.buffer_data.BufferData.__getitem__","title":"__getitem__(key)","text":"Returns the item at the given index.
Source code inlibdebug/commlink/buffer_data.py def __getitem__(self: BufferData, key: int) -> bytes:\n \"\"\"Returns the item at the given index.\"\"\"\n return self.data[key]\n"},{"location":"from_pydoc/generated/commlink/buffer_data/#libdebug.commlink.buffer_data.BufferData.__init__","title":"__init__(data)","text":"Initializes the BufferData object.
Source code inlibdebug/commlink/buffer_data.py def __init__(self: BufferData, data: bytes) -> None:\n \"\"\"Initializes the BufferData object.\"\"\"\n self.data = data\n"},{"location":"from_pydoc/generated/commlink/buffer_data/#libdebug.commlink.buffer_data.BufferData.__len__","title":"__len__()","text":"Returns the length of the buffer.
Source code inlibdebug/commlink/buffer_data.py def __len__(self: BufferData) -> int:\n \"\"\"Returns the length of the buffer.\"\"\"\n return len(self.data)\n"},{"location":"from_pydoc/generated/commlink/buffer_data/#libdebug.commlink.buffer_data.BufferData.__repr__","title":"__repr__()","text":"Returns a string representation of the buffer.
Source code inlibdebug/commlink/buffer_data.py def __repr__(self: BufferData) -> str:\n \"\"\"Returns a string representation of the buffer.\"\"\"\n return self.data.__repr__()\n"},{"location":"from_pydoc/generated/commlink/buffer_data/#libdebug.commlink.buffer_data.BufferData.append","title":"append(data)","text":"Appends data to the buffer.
Source code inlibdebug/commlink/buffer_data.py def append(self, data: bytes) -> None:\n \"\"\"Appends data to the buffer.\"\"\"\n self.data += data\n"},{"location":"from_pydoc/generated/commlink/buffer_data/#libdebug.commlink.buffer_data.BufferData.clear","title":"clear()","text":"Clears the buffer.
Source code inlibdebug/commlink/buffer_data.py def clear(self: BufferData) -> None:\n \"\"\"Clears the buffer.\"\"\"\n self.data = b\"\"\n"},{"location":"from_pydoc/generated/commlink/buffer_data/#libdebug.commlink.buffer_data.BufferData.find","title":"find(pattern)","text":"Finds the first occurrence of the given pattern in the buffer.
Source code inlibdebug/commlink/buffer_data.py def find(self: BufferData, pattern: bytes) -> int:\n \"\"\"Finds the first occurrence of the given pattern in the buffer.\"\"\"\n return self.data.find(pattern)\n"},{"location":"from_pydoc/generated/commlink/buffer_data/#libdebug.commlink.buffer_data.BufferData.get_data","title":"get_data()","text":"Returns the data stored in the buffer.
Source code inlibdebug/commlink/buffer_data.py def get_data(self: BufferData) -> bytes:\n \"\"\"Returns the data stored in the buffer.\"\"\"\n return self.data\n"},{"location":"from_pydoc/generated/commlink/buffer_data/#libdebug.commlink.buffer_data.BufferData.overwrite","title":"overwrite(data)","text":"Overwrites the buffer with the given data.
Source code inlibdebug/commlink/buffer_data.py def overwrite(self, data: bytes) -> None:\n \"\"\"Overwrites the buffer with the given data.\"\"\"\n self.data = data\n"},{"location":"from_pydoc/generated/commlink/libterminal/","title":"libdebug.commlink.libterminal","text":""},{"location":"from_pydoc/generated/commlink/libterminal/#libdebug.commlink.libterminal.LibTerminal","title":"LibTerminal","text":"Class that represents a terminal to interact with the child process.
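The methods above form a small accumulate-and-consume protocol: chunks are appended as they arrive, `find` locates a delimiter, and `overwrite` keeps only the unconsumed tail. A quick sketch using a condensed copy of the class (the sample prompt bytes are invented for the example):

```python
class BufferData:
    """Condensed from the class above: a growable bytes buffer."""
    def __init__(self, data: bytes) -> None:
        self.data = data
    def append(self, data: bytes) -> None:
        self.data += data
    def find(self, pattern: bytes) -> int:
        return self.data.find(pattern)
    def overwrite(self, data: bytes) -> None:
        self.data = data
    def __len__(self) -> int:
        return len(self.data)
    def __getitem__(self, key) -> bytes:
        return self.data[key]

buf = BufferData(b"")
buf.append(b"user@host:~$ ")   # chunks arriving from the child's stdout
buf.append(b"echo hi\n")
cut = buf.find(b"$ ") + 2      # consume up to and including the prompt
line, rest = buf[:cut], buf[cut:]
buf.overwrite(rest)            # keep only the unconsumed tail
print(line, len(buf))
```

This is exactly the pattern `PipeManager` relies on: reads past a delimiter are not lost, they stay buffered for the next receive call.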
Source code inlibdebug/commlink/libterminal.py class LibTerminal:\n \"\"\"Class that represents a terminal to interact with the child process.\"\"\"\n\n def __init__(\n self: LibTerminal,\n prompt: str,\n sendline: callable,\n end_interactive_event: Event,\n auto_quit: bool,\n ) -> None:\n \"\"\"Initializes the LibTerminal object.\"\"\"\n # Provide the internal debugger instance\n self._internal_debugger = provide_internal_debugger(self)\n\n # Function to send a line to the child process\n self._sendline: callable = sendline\n\n # Event to signal the end of the interactive session\n self.__end_interactive_event: Event = end_interactive_event\n\n # Flag to indicate if the terminal should automatically quit when the debugged process stops\n self._auto_quit: bool = auto_quit\n\n # Initialize the message queue for the prompt_toolkit application\n self._app_message_queue: Queue = Queue()\n\n # Initialize the thread reference for the prompt_toolkit application\n self._app_thread: threading.Thread | None = None\n\n # Flag to indicate if the terminal has warned the user about the stop of the debugged process\n self._has_warned_stop: bool = False\n\n # Backup the original stdout and stderr\n self._stdout_backup: object = sys.stdout\n self._stderr_backup: object = sys.stderr\n\n # Redirect stdout and stderr to the terminal\n sys.stdout = StdWrapper(self._stdout_backup, self)\n sys.stderr = StdWrapper(self._stderr_backup, self)\n\n # Redirect the loggers to the terminal\n for handler in liblog.general_logger.handlers:\n if isinstance(handler, StreamHandler):\n handler.stream = sys.stderr\n\n for handler in liblog.pipe_logger.handlers:\n if isinstance(handler, StreamHandler):\n handler.stream = sys.stderr\n\n for handler in liblog.debugger_logger.handlers:\n if isinstance(handler, StreamHandler):\n handler.stream = sys.stderr\n\n # Save the original stdin settings, if needed. 
Just in case\n if not self._internal_debugger.stdin_settings_backup:\n self._internal_debugger.stdin_settings_backup = tcgetattr(sys.stdin.fileno())\n\n # Create the history file, if it does not exist\n if not PATH_HISTORY.exists():\n PATH_HISTORY.parent.mkdir(parents=True, exist_ok=True)\n PATH_HISTORY.touch()\n\n self._run_prompt(prompt)\n\n def _run_prompt(self: LibTerminal, prompt: str) -> None:\n \"\"\"Run the prompt_toolkit application.\"\"\"\n input_field = TextArea(\n height=3,\n prompt=prompt,\n style=\"class:input-field\",\n history=FileHistory(str(PATH_HISTORY)),\n auto_suggest=AutoSuggestFromHistory(),\n )\n\n kb = KeyBindings()\n\n @kb.add(\"enter\")\n def on_enter(event: KeyPressEvent) -> None:\n \"\"\"Send the user input to the child process.\"\"\"\n buffer = event.app.current_buffer\n cmd = buffer.text\n if cmd:\n try:\n self._sendline(cmd.encode(\"utf-8\"))\n buffer.history.append_string(cmd)\n except RuntimeError:\n liblog.warning(\"The stdin pipe of the child process is not available anymore\")\n finally:\n buffer.reset()\n\n @kb.add(\"c-c\")\n @kb.add(\"c-d\")\n def app_exit(event: KeyPressEvent) -> None:\n \"\"\"Manage the key bindings for the exit of the application.\"\"\"\n # Flush the output field\n update_output(event.app)\n # Signal the end of the interactive session\n self.__end_interactive_event.set()\n while self.__end_interactive_event.is_set():\n # Wait to be sure that the other thread is not polling from the child process's\n # stderr and stdout pipes anymore\n pass\n event.app.exit()\n\n @kb.add(\"tab\")\n def accept_suggestion(event: KeyPressEvent) -> None:\n \"\"\"Accept the auto-suggestion.\"\"\"\n buffer = event.current_buffer\n suggestion = buffer.suggestion\n if suggestion:\n buffer.insert_text(suggestion.text)\n\n layout = Layout(input_field)\n\n # Note: The refresh_interval is set to 0.5 seconds is an arbitrary trade-off between the\n # responsiveness of the terminal and the CPU usage. 
Little values also cause difficulties\n # in the management of the copy-paste. We might consider to change the value in the future or\n # to make it dynamic/configurable.\n app = Application(\n layout=layout,\n key_bindings=kb,\n full_screen=False,\n refresh_interval=0.5,\n )\n\n def update_output(app: Application) -> None:\n \"\"\"Update the output field with the messages in the queue.\"\"\"\n if (\n not self._internal_debugger.running\n and (event_type := self._internal_debugger.resume_context.get_event_type())\n and not self._has_warned_stop\n ):\n liblog.warning(\n f\"The debugged process has stopped due to the following event(s). {event_type}\",\n )\n self._has_warned_stop = True\n if self._auto_quit:\n # Flush the output field and exit the application\n self.__end_interactive_event.set()\n\n while self.__end_interactive_event.is_set():\n # Wait to be sure that the other thread is not polling from the child process\n # stderr and stdout pipes anymore\n pass\n\n # Update the output field with the messages in the queue\n msg = b\"\"\n if not self._app_message_queue.empty():\n msg += self._app_message_queue.get()\n\n if msg:\n if not msg.endswith(b\"\\n\"):\n # Add a newline character at the end of the message\n # to avoid the prompt_toolkit bug that causes the last line to be\n # overwritten by the prompt\n msg += b\"\\n\"\n run_in_terminal(lambda: sys.stdout.buffer.write(msg))\n run_in_terminal(lambda: sys.stdout.buffer.flush())\n\n if self._has_warned_stop and self._auto_quit:\n app.exit()\n\n # Add the update_output function to the event loop\n app.on_invalidate.add_handler(update_output)\n\n # Run in another thread\n self._app_thread = threading.Thread(target=app.run, daemon=True)\n self._app_thread.start()\n\n def _write_manager(self, payload: bytes) -> int:\n \"\"\"Put the payload in the message queue for the prompt_toolkit application.\"\"\"\n if isinstance(payload, bytes):\n # We want the special characters to be displayed correctly\n 
self._app_message_queue.put(payload.decode(\"utf-8\", errors=\"backslashreplace\").encode(\"utf-8\"))\n else:\n # We need to encode the payload to bytes\n self._app_message_queue.put(payload.encode(\"utf-8\"))\n\n def reset(self: LibTerminal) -> None:\n \"\"\"Reset the terminal to its original state.\"\"\"\n # Wait for the prompt_toolkit application to finish\n # This (included the timeout) is necessary to avoid race conditions and deadlocks\n while self._app_thread.join(0.1):\n pass\n\n # Restore the original stdout and stderr\n sys.stdout = self._stdout_backup\n sys.stderr = self._stderr_backup\n\n # Restore the loggers\n for handler in liblog.general_logger.handlers:\n if isinstance(handler, StreamHandler):\n handler.stream = sys.stderr\n\n for handler in liblog.pipe_logger.handlers:\n if isinstance(handler, StreamHandler):\n handler.stream = sys.stderr\n\n for handler in liblog.debugger_logger.handlers:\n if isinstance(handler, StreamHandler):\n handler.stream = sys.stderr\n\n # Restore the original stdin settings\n tcsetattr(sys.stdin.fileno(), TCSANOW, self._internal_debugger.stdin_settings_backup)\n"},{"location":"from_pydoc/generated/commlink/libterminal/#libdebug.commlink.libterminal.LibTerminal.__init__","title":"__init__(prompt, sendline, end_interactive_event, auto_quit)","text":"Initializes the LibTerminal object.
Source code inlibdebug/commlink/libterminal.py def __init__(\n self: LibTerminal,\n prompt: str,\n sendline: callable,\n end_interactive_event: Event,\n auto_quit: bool,\n) -> None:\n \"\"\"Initializes the LibTerminal object.\"\"\"\n # Provide the internal debugger instance\n self._internal_debugger = provide_internal_debugger(self)\n\n # Function to send a line to the child process\n self._sendline: callable = sendline\n\n # Event to signal the end of the interactive session\n self.__end_interactive_event: Event = end_interactive_event\n\n # Flag to indicate if the terminal should automatically quit when the debugged process stops\n self._auto_quit: bool = auto_quit\n\n # Initialize the message queue for the prompt_toolkit application\n self._app_message_queue: Queue = Queue()\n\n # Initialize the thread reference for the prompt_toolkit application\n self._app_thread: threading.Thread | None = None\n\n # Flag to indicate if the terminal has warned the user about the stop of the debugged process\n self._has_warned_stop: bool = False\n\n # Backup the original stdout and stderr\n self._stdout_backup: object = sys.stdout\n self._stderr_backup: object = sys.stderr\n\n # Redirect stdout and stderr to the terminal\n sys.stdout = StdWrapper(self._stdout_backup, self)\n sys.stderr = StdWrapper(self._stderr_backup, self)\n\n # Redirect the loggers to the terminal\n for handler in liblog.general_logger.handlers:\n if isinstance(handler, StreamHandler):\n handler.stream = sys.stderr\n\n for handler in liblog.pipe_logger.handlers:\n if isinstance(handler, StreamHandler):\n handler.stream = sys.stderr\n\n for handler in liblog.debugger_logger.handlers:\n if isinstance(handler, StreamHandler):\n handler.stream = sys.stderr\n\n # Save the original stdin settings, if needed. 
Just in case\n if not self._internal_debugger.stdin_settings_backup:\n self._internal_debugger.stdin_settings_backup = tcgetattr(sys.stdin.fileno())\n\n # Create the history file, if it does not exist\n if not PATH_HISTORY.exists():\n PATH_HISTORY.parent.mkdir(parents=True, exist_ok=True)\n PATH_HISTORY.touch()\n\n self._run_prompt(prompt)\n"},{"location":"from_pydoc/generated/commlink/libterminal/#libdebug.commlink.libterminal.LibTerminal._run_prompt","title":"_run_prompt(prompt)","text":"Run the prompt_toolkit application.
Source code inlibdebug/commlink/libterminal.py def _run_prompt(self: LibTerminal, prompt: str) -> None:\n \"\"\"Run the prompt_toolkit application.\"\"\"\n input_field = TextArea(\n height=3,\n prompt=prompt,\n style=\"class:input-field\",\n history=FileHistory(str(PATH_HISTORY)),\n auto_suggest=AutoSuggestFromHistory(),\n )\n\n kb = KeyBindings()\n\n @kb.add(\"enter\")\n def on_enter(event: KeyPressEvent) -> None:\n \"\"\"Send the user input to the child process.\"\"\"\n buffer = event.app.current_buffer\n cmd = buffer.text\n if cmd:\n try:\n self._sendline(cmd.encode(\"utf-8\"))\n buffer.history.append_string(cmd)\n except RuntimeError:\n liblog.warning(\"The stdin pipe of the child process is not available anymore\")\n finally:\n buffer.reset()\n\n @kb.add(\"c-c\")\n @kb.add(\"c-d\")\n def app_exit(event: KeyPressEvent) -> None:\n \"\"\"Manage the key bindings for the exit of the application.\"\"\"\n # Flush the output field\n update_output(event.app)\n # Signal the end of the interactive session\n self.__end_interactive_event.set()\n while self.__end_interactive_event.is_set():\n # Wait to be sure that the other thread is not polling from the child process's\n # stderr and stdout pipes anymore\n pass\n event.app.exit()\n\n @kb.add(\"tab\")\n def accept_suggestion(event: KeyPressEvent) -> None:\n \"\"\"Accept the auto-suggestion.\"\"\"\n buffer = event.current_buffer\n suggestion = buffer.suggestion\n if suggestion:\n buffer.insert_text(suggestion.text)\n\n layout = Layout(input_field)\n\n # Note: The refresh_interval is set to 0.5 seconds is an arbitrary trade-off between the\n # responsiveness of the terminal and the CPU usage. Little values also cause difficulties\n # in the management of the copy-paste. 
We might consider to change the value in the future or\n # to make it dynamic/configurable.\n app = Application(\n layout=layout,\n key_bindings=kb,\n full_screen=False,\n refresh_interval=0.5,\n )\n\n def update_output(app: Application) -> None:\n \"\"\"Update the output field with the messages in the queue.\"\"\"\n if (\n not self._internal_debugger.running\n and (event_type := self._internal_debugger.resume_context.get_event_type())\n and not self._has_warned_stop\n ):\n liblog.warning(\n f\"The debugged process has stopped due to the following event(s). {event_type}\",\n )\n self._has_warned_stop = True\n if self._auto_quit:\n # Flush the output field and exit the application\n self.__end_interactive_event.set()\n\n while self.__end_interactive_event.is_set():\n # Wait to be sure that the other thread is not polling from the child process\n # stderr and stdout pipes anymore\n pass\n\n # Update the output field with the messages in the queue\n msg = b\"\"\n if not self._app_message_queue.empty():\n msg += self._app_message_queue.get()\n\n if msg:\n if not msg.endswith(b\"\\n\"):\n # Add a newline character at the end of the message\n # to avoid the prompt_toolkit bug that causes the last line to be\n # overwritten by the prompt\n msg += b\"\\n\"\n run_in_terminal(lambda: sys.stdout.buffer.write(msg))\n run_in_terminal(lambda: sys.stdout.buffer.flush())\n\n if self._has_warned_stop and self._auto_quit:\n app.exit()\n\n # Add the update_output function to the event loop\n app.on_invalidate.add_handler(update_output)\n\n # Run in another thread\n self._app_thread = threading.Thread(target=app.run, daemon=True)\n self._app_thread.start()\n"},{"location":"from_pydoc/generated/commlink/libterminal/#libdebug.commlink.libterminal.LibTerminal._write_manager","title":"_write_manager(payload)","text":"Put the payload in the message queue for the prompt_toolkit application.
Source code inlibdebug/commlink/libterminal.py def _write_manager(self, payload: bytes) -> int:\n \"\"\"Put the payload in the message queue for the prompt_toolkit application.\"\"\"\n if isinstance(payload, bytes):\n # We want the special characters to be displayed correctly\n self._app_message_queue.put(payload.decode(\"utf-8\", errors=\"backslashreplace\").encode(\"utf-8\"))\n else:\n # We need to encode the payload to bytes\n self._app_message_queue.put(payload.encode(\"utf-8\"))\n"},{"location":"from_pydoc/generated/commlink/libterminal/#libdebug.commlink.libterminal.LibTerminal.reset","title":"reset()","text":"Reset the terminal to its original state.
Source code inlibdebug/commlink/libterminal.py def reset(self: LibTerminal) -> None:\n \"\"\"Reset the terminal to its original state.\"\"\"\n # Wait for the prompt_toolkit application to finish\n # This (included the timeout) is necessary to avoid race conditions and deadlocks\n while self._app_thread.join(0.1):\n pass\n\n # Restore the original stdout and stderr\n sys.stdout = self._stdout_backup\n sys.stderr = self._stderr_backup\n\n # Restore the loggers\n for handler in liblog.general_logger.handlers:\n if isinstance(handler, StreamHandler):\n handler.stream = sys.stderr\n\n for handler in liblog.pipe_logger.handlers:\n if isinstance(handler, StreamHandler):\n handler.stream = sys.stderr\n\n for handler in liblog.debugger_logger.handlers:\n if isinstance(handler, StreamHandler):\n handler.stream = sys.stderr\n\n # Restore the original stdin settings\n tcsetattr(sys.stdin.fileno(), TCSANOW, self._internal_debugger.stdin_settings_backup)\n"},{"location":"from_pydoc/generated/commlink/pipe_manager/","title":"libdebug.commlink.pipe_manager","text":""},{"location":"from_pydoc/generated/commlink/pipe_manager/#libdebug.commlink.pipe_manager.PipeManager","title":"PipeManager","text":"Class for managing pipes of the child process.
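The backup-wrap-restore dance that `LibTerminal.__init__` and `reset` perform on `sys.stdout` can be sketched in miniature. The `StdWrapper` below is a stand-in for libdebug's class of the same name, capturing writes into a list instead of a message queue:

```python
import io
import sys

class StdWrapper(io.TextIOBase):
    """Stand-in wrapper: record writes into a sink while keeping a backup stream."""
    def __init__(self, backup, sink):
        self.backup, self.sink = backup, sink
    def write(self, payload: str) -> int:
        self.sink.append(payload)   # the real class queues this for the terminal app
        return len(payload)
    def flush(self) -> None:
        self.backup.flush()

captured = []
backup = sys.stdout
sys.stdout = StdWrapper(backup, captured)
try:
    print("hello from the wrapped stdout")  # lands in `captured`, not the console
finally:
    sys.stdout = backup  # always restore, mirroring LibTerminal.reset()

print("captured:", "".join(captured), end="")
```

Restoring in a `finally` block matters: if the wrapped stdout were left in place after an error, later prints (and logger output) would silently vanish into a dead queue.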
Source code inlibdebug/commlink/pipe_manager.py class PipeManager:\n \"\"\"Class for managing pipes of the child process.\"\"\"\n\n timeout_default: int = 2\n prompt_default: str = \"$ \"\n\n def __init__(self: PipeManager, stdin_write: int, stdout_read: int, stderr_read: int) -> None:\n \"\"\"Initializes the PipeManager class.\n\n Args:\n stdin_write (int): file descriptor for stdin write.\n stdout_read (int): file descriptor for stdout read.\n stderr_read (int): file descriptor for stderr read.\n \"\"\"\n self._stdin_write: int = stdin_write\n self._stdout_read: int = stdout_read\n self._stderr_read: int = stderr_read\n self._stderr_is_open: bool = True\n self._stdout_is_open: bool = True\n self._internal_debugger: InternalDebugger = provide_internal_debugger(self)\n\n self.__stdout_buffer: BufferData = BufferData(b\"\")\n self.__stderr_buffer: BufferData = BufferData(b\"\")\n\n self.__end_interactive_event: Event = Event()\n\n def _raw_recv(\n self: PipeManager,\n numb: int | None = None,\n timeout: float | None = None,\n stderr: bool = False,\n ) -> int:\n \"\"\"Receives at most numb bytes from the child process.\n\n Args:\n numb (int | None, optional): number of bytes to receive. Defaults to None.\n timeout (float, optional): timeout in seconds. Defaults to None.\n stderr (bool, optional): receive from stderr. 
Defaults to False.\n\n Returns:\n int: number of bytes received.\n \"\"\"\n pipe_read: int = self._stderr_read if stderr else self._stdout_read\n\n if not pipe_read:\n raise RuntimeError(\"No pipe of the child process\")\n\n data_buffer = self.__stderr_buffer if stderr else self.__stdout_buffer\n\n received_numb = 0\n\n if numb is not None and timeout is not None:\n # Checking the numb\n if numb < 0:\n raise ValueError(\"The number of bytes to receive must be positive\")\n\n # Setting the alarm\n end_time = time.time() + timeout\n\n while numb > received_numb:\n if (remaining_time := max(0, end_time - time.time())) == 0:\n # Timeout reached\n break\n\n try:\n ready, _, _ = select.select([pipe_read], [], [], remaining_time)\n if ready:\n data = os.read(pipe_read, 4096)\n received_numb += len(data)\n data_buffer.append(data)\n else:\n # No more data available in the pipe at the moment\n break\n except OSError as e:\n if e.errno != EAGAIN:\n if stderr:\n self._stderr_is_open = False\n else:\n self._stdout_is_open = False\n elif timeout is not None:\n try:\n ready, _, _ = select.select([pipe_read], [], [], timeout)\n if ready:\n data = os.read(pipe_read, 4096)\n received_numb += len(data)\n data_buffer.append(data)\n except OSError as e:\n if e.errno != EAGAIN:\n if stderr:\n self._stderr_is_open = False\n else:\n self._stdout_is_open = False\n else:\n try:\n data = os.read(pipe_read, 4096)\n if data:\n received_numb += len(data)\n data_buffer.append(data)\n except OSError as e:\n if e.errno != EAGAIN:\n if stderr:\n self._stderr_is_open = False\n else:\n self._stdout_is_open = False\n\n if received_numb:\n liblog.pipe(f\"{'stderr' if stderr else 'stdout'} {received_numb}B: {data_buffer[:received_numb]!r}\")\n return received_numb\n\n def close(self: PipeManager) -> None:\n \"\"\"Closes all the pipes of the child process.\"\"\"\n os.close(self._stdin_write)\n os.close(self._stdout_read)\n os.close(self._stderr_read)\n\n def _buffered_recv(self: PipeManager, numb: int, 
timeout: int, stderr: bool) -> bytes:\n \"\"\"Receives at most numb bytes from the child process stdout or stderr.\n\n Args:\n numb (int): number of bytes to receive.\n timeout (int): timeout in seconds.\n stderr (bool): receive from stderr.\n\n Returns:\n bytes: received bytes from the child process stdout or stderr.\n \"\"\"\n data_buffer = self.__stderr_buffer if stderr else self.__stdout_buffer\n open_flag = self._stderr_is_open if stderr else self._stdout_is_open\n\n data_buffer_len = len(data_buffer)\n\n if data_buffer_len >= numb:\n # We have enough data in the buffer\n received = data_buffer[:numb]\n data_buffer.overwrite(data_buffer[numb:])\n elif open_flag:\n # We can receive more data\n remaining = numb - data_buffer_len\n self._raw_recv(numb=remaining, timeout=timeout, stderr=stderr)\n received = data_buffer[:numb]\n data_buffer.overwrite(data_buffer[numb:])\n elif data_buffer_len != 0:\n # The pipe is not available but we have some data in the buffer. We will return just that\n received = data_buffer.get_data()\n data_buffer.clear()\n else:\n # The pipe is not available and no data is buffered\n raise RuntimeError(f\"Broken {'stderr' if stderr else 'stdout'} pipe. Is the child process still alive?\")\n return received\n\n def recv(\n self: PipeManager,\n numb: int = 4096,\n timeout: int = timeout_default,\n ) -> bytes:\n \"\"\"Receives at most numb bytes from the child process stdout.\n\n Args:\n numb (int, optional): number of bytes to receive. Defaults to 4096.\n timeout (int, optional): timeout in seconds. Defaults to timeout_default.\n\n Returns:\n bytes: received bytes from the child process stdout.\n \"\"\"\n return self._buffered_recv(numb=numb, timeout=timeout, stderr=False)\n\n def recverr(\n self: PipeManager,\n numb: int = 4096,\n timeout: int = timeout_default,\n ) -> bytes:\n \"\"\"Receives at most numb bytes from the child process stderr.\n\n Args:\n numb (int, optional): number of bytes to receive. 
Defaults to 4096.\n timeout (int, optional): timeout in seconds. Defaults to timeout_default.\n\n Returns:\n bytes: received bytes from the child process stderr.\n \"\"\"\n return self._buffered_recv(numb=numb, timeout=timeout, stderr=True)\n\n def _recvonceuntil(\n self: PipeManager,\n delims: bytes,\n drop: bool = False,\n timeout: float = timeout_default,\n stderr: bool = False,\n optional: bool = False,\n ) -> bytes:\n \"\"\"Receives data from the child process until the delimiters are found.\n\n Args:\n delims (bytes): delimiters where to stop.\n drop (bool, optional): drop the delimiter. Defaults to False.\n timeout (float, optional): timeout in seconds. Defaults to timeout_default.\n stderr (bool, optional): receive from stderr. Defaults to False.\n optional (bool, optional): whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False.\n\n Returns:\n bytes: received data from the child process stdout.\n \"\"\"\n if isinstance(delims, str):\n liblog.warning(\"The delimiters are a string, converting to bytes\")\n delims = delims.encode()\n\n # Buffer for the received data\n data_buffer = self.__stderr_buffer if stderr else self.__stdout_buffer\n\n # Setting the alarm\n end_time = time.time() + timeout\n while True:\n open_flag = self._stderr_is_open if stderr else self._stdout_is_open\n\n if (until := data_buffer.find(delims)) != -1:\n break\n\n if (remaining_time := max(0, end_time - time.time())) == 0:\n raise TimeoutError(\"Timeout reached\")\n\n if not open_flag:\n # The delimiters are not in the buffer and the pipe is not available\n raise RuntimeError(f\"Broken {'stderr' if stderr else 'stdout'} pipe. 
Is the child process still alive?\")\n\n received_numb = self._raw_recv(stderr=stderr, timeout=remaining_time)\n\n if (\n received_numb == 0\n and not self._internal_debugger.running\n and self._internal_debugger.is_debugging\n and (event := self._internal_debugger.resume_context.get_event_type())\n ):\n # We will not receive more data, the child process is not running\n if optional:\n return b\"\"\n event = self._internal_debugger.resume_context.get_event_type()\n raise RuntimeError(\n f\"Receive until error. The debugged process has stopped due to the following event(s). {event}\",\n )\n received_data = data_buffer[:until]\n if not drop:\n # Include the delimiters in the received data\n received_data += data_buffer[until : until + len(delims)]\n remaining_data = data_buffer[until + len(delims) :]\n data_buffer.overwrite(remaining_data)\n return received_data\n\n def _recvuntil(\n self: PipeManager,\n delims: bytes,\n occurences: int = 1,\n drop: bool = False,\n timeout: float = timeout_default,\n stderr: bool = False,\n optional: bool = False,\n ) -> bytes:\n \"\"\"Receives data from the child process until the delimiters are found occurences time.\n\n Args:\n delims (bytes): delimiters where to stop.\n occurences (int, optional): number of delimiters to find. Defaults to 1.\n drop (bool, optional): drop the delimiter. Defaults to False.\n timeout (float, optional): timeout in seconds. Defaults to timeout_default.\n stderr (bool, optional): receive from stderr. Defaults to False.\n optional (bool, optional): whether to ignore the wait for the received input if the command is executed when the process is stopped. 
Defaults to False.\n\n Returns:\n bytes: received data from the child process stdout.\n \"\"\"\n if occurences <= 0:\n raise ValueError(\"The number of occurences to receive must be positive\")\n\n # Buffer for the received data\n data_buffer = b\"\"\n\n # Setting the alarm\n end_time = time.time() + timeout\n\n for _ in range(occurences):\n # Adjust the timeout for select to the remaining time\n remaining_time = None if end_time is None else max(0, end_time - time.time())\n\n data_buffer += self._recvonceuntil(\n delims=delims,\n drop=drop,\n timeout=remaining_time,\n stderr=stderr,\n optional=optional,\n )\n\n return data_buffer\n\n def recvuntil(\n self: PipeManager,\n delims: bytes,\n occurences: int = 1,\n drop: bool = False,\n timeout: int = timeout_default,\n optional: bool = False,\n ) -> bytes:\n \"\"\"Receives data from the child process stdout until the delimiters are found.\n\n Args:\n delims (bytes): delimiters where to stop.\n occurences (int, optional): number of delimiters to find. Defaults to 1.\n drop (bool, optional): drop the delimiter. Defaults to False.\n timeout (int, optional): timeout in seconds. Defaults to timeout_default.\n optional (bool, optional): whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False.\n\n Returns:\n bytes: received data from the child process stdout.\n \"\"\"\n return self._recvuntil(\n delims=delims,\n occurences=occurences,\n drop=drop,\n timeout=timeout,\n stderr=False,\n optional=optional,\n )\n\n def recverruntil(\n self: PipeManager,\n delims: bytes,\n occurences: int = 1,\n drop: bool = False,\n timeout: int = timeout_default,\n optional: bool = False,\n ) -> bytes:\n \"\"\"Receives data from the child process stderr until the delimiters are found.\n\n Args:\n delims (bytes): delimiters where to stop.\n occurences (int, optional): number of delimiters to find. Defaults to 1.\n drop (bool, optional): drop the delimiter. 
Defaults to False.\n timeout (int, optional): timeout in seconds. Defaults to timeout_default.\n optional (bool, optional): whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False.\n\n Returns:\n bytes: received data from the child process stderr.\n \"\"\"\n return self._recvuntil(\n delims=delims,\n occurences=occurences,\n drop=drop,\n timeout=timeout,\n stderr=True,\n optional=optional,\n )\n\n def recvline(\n self: PipeManager,\n numlines: int = 1,\n drop: bool = True,\n timeout: int = timeout_default,\n optional: bool = False,\n ) -> bytes:\n \"\"\"Receives numlines lines from the child process stdout.\n\n Args:\n numlines (int, optional): number of lines to receive. Defaults to 1.\n drop (bool, optional): drop the line ending. Defaults to True.\n timeout (int, optional): timeout in seconds. Defaults to timeout_default.\n optional (bool, optional): whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False.\n\n Returns:\n bytes: received lines from the child process stdout.\n \"\"\"\n return self.recvuntil(delims=b\"\\n\", occurences=numlines, drop=drop, timeout=timeout, optional=optional)\n\n def recverrline(\n self: PipeManager,\n numlines: int = 1,\n drop: bool = True,\n timeout: int = timeout_default,\n optional: bool = False,\n ) -> bytes:\n \"\"\"Receives numlines lines from the child process stderr.\n\n Args:\n numlines (int, optional): number of lines to receive. Defaults to 1.\n drop (bool, optional): drop the line ending. Defaults to True.\n timeout (int, optional): timeout in seconds. Defaults to timeout_default.\n optional (bool, optional): whether to ignore the wait for the received input if the command is executed when the process is stopped. 
Defaults to False.\n\n Returns:\n bytes: received lines from the child process stdout.\n \"\"\"\n return self.recverruntil(delims=b\"\\n\", occurences=numlines, drop=drop, timeout=timeout, optional=optional)\n\n def send(self: PipeManager, data: bytes) -> int:\n \"\"\"Sends data to the child process stdin.\n\n Args:\n data (bytes): data to send.\n\n Returns:\n int: number of bytes sent.\n\n Raises:\n RuntimeError: no stdin pipe of the child process.\n \"\"\"\n if not self._stdin_write:\n raise RuntimeError(\"No stdin pipe of the child process\")\n\n liblog.pipe(f\"Sending {len(data)} bytes to the child process: {data!r}\")\n\n if isinstance(data, str):\n liblog.warning(\"The input data is a string, converting to bytes\")\n data = data.encode()\n\n try:\n number_bytes = os.write(self._stdin_write, data)\n except OSError as e:\n raise RuntimeError(\"Broken pipe. Is the child process still running?\") from e\n\n return number_bytes\n\n def sendline(self: PipeManager, data: bytes) -> int:\n \"\"\"Sends data to the child process stdin and append a newline.\n\n Args:\n data (bytes): data to send.\n\n Returns:\n int: number of bytes sent.\n \"\"\"\n if isinstance(data, str):\n liblog.warning(\"The input data is a string, converting to bytes\")\n data = data.encode()\n return self.send(data=data + b\"\\n\")\n\n def sendafter(\n self: PipeManager,\n delims: bytes,\n data: bytes,\n occurences: int = 1,\n drop: bool = False,\n timeout: int = timeout_default,\n optional: bool = False,\n ) -> tuple[bytes, int]:\n \"\"\"Sends data to the child process stdin after the delimiters are found in the stdout.\n\n Args:\n delims (bytes): delimiters where to stop.\n data (bytes): data to send.\n occurences (int, optional): number of delimiters to find. Defaults to 1.\n drop (bool, optional): drop the delimiter. Defaults to False.\n timeout (int, optional): timeout in seconds. 
Defaults to timeout_default.\n optional (bool, optional): whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False.\n\n Returns:\n bytes: received data from the child process stdout.\n int: number of bytes sent.\n \"\"\"\n received = self.recvuntil(delims=delims, occurences=occurences, drop=drop, timeout=timeout, optional=optional)\n sent = self.send(data)\n return (received, sent)\n\n def sendaftererr(\n self: PipeManager,\n delims: bytes,\n data: bytes,\n occurences: int = 1,\n drop: bool = False,\n timeout: int = timeout_default,\n optional: bool = False,\n ) -> tuple[bytes, int]:\n \"\"\"Sends data to the child process stdin after the delimiters are found in stderr.\n\n Args:\n delims (bytes): delimiters where to stop.\n data (bytes): data to send.\n occurences (int, optional): number of delimiters to find. Defaults to 1.\n drop (bool, optional): drop the delimiter. Defaults to False.\n timeout (int, optional): timeout in seconds. Defaults to timeout_default.\n optional (bool, optional): whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False.\n\n Returns:\n bytes: received data from the child process stderr.\n int: number of bytes sent.\n \"\"\"\n received = self.recverruntil(\n delims=delims,\n occurences=occurences,\n drop=drop,\n timeout=timeout,\n optional=optional,\n )\n sent = self.send(data)\n return (received, sent)\n\n def sendlineafter(\n self: PipeManager,\n delims: bytes,\n data: bytes,\n occurences: int = 1,\n drop: bool = False,\n timeout: int = timeout_default,\n optional: bool = False,\n ) -> tuple[bytes, int]:\n \"\"\"Sends line to the child process stdin after the delimiters are found in the stdout.\n\n Args:\n delims (bytes): delimiters where to stop.\n data (bytes): data to send.\n occurences (int, optional): number of delimiters to find. Defaults to 1.\n drop (bool, optional): drop the delimiter. 
Defaults to False.\n timeout (int, optional): timeout in seconds. Defaults to timeout_default.\n optional (bool, optional): whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False.\n\n Returns:\n bytes: received data from the child process stdout.\n int: number of bytes sent.\n \"\"\"\n received = self.recvuntil(delims=delims, occurences=occurences, drop=drop, timeout=timeout, optional=optional)\n sent = self.sendline(data)\n return (received, sent)\n\n def sendlineaftererr(\n self: PipeManager,\n delims: bytes,\n data: bytes,\n occurences: int = 1,\n drop: bool = False,\n timeout: int = timeout_default,\n optional: bool = False,\n ) -> tuple[bytes, int]:\n \"\"\"Sends line to the child process stdin after the delimiters are found in the stderr.\n\n Args:\n delims (bytes): delimiters where to stop.\n data (bytes): data to send.\n occurences (int, optional): number of delimiters to find. Defaults to 1.\n drop (bool, optional): drop the delimiter. Defaults to False.\n timeout (int, optional): timeout in seconds. Defaults to timeout_default.\n optional (bool, optional): whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False.\n\n Returns:\n bytes: received data from the child process stderr.\n int: number of bytes sent.\n \"\"\"\n received = self.recverruntil(\n delims=delims,\n occurences=occurences,\n drop=drop,\n timeout=timeout,\n optional=optional,\n )\n sent = self.sendline(data)\n return (received, sent)\n\n def _recv_for_interactive(self: PipeManager) -> None:\n \"\"\"Receives data from the child process.\"\"\"\n stdout_has_warned = False\n stderr_has_warned = False\n\n while not (self.__end_interactive_event.is_set() or (stdout_has_warned and stderr_has_warned)):\n # We can afford to treat stdout and stderr sequentially. 
This approach should also prevent\n # messing up the order of the information printed by the child process.\n # To avoid starvation, we switch between pipes upon receiving a bunch of data from one of them.\n if self._stdout_is_open:\n while True:\n new_recv = self._raw_recv(numb=1024, timeout=0.1)\n payload = self.__stdout_buffer.get_data()\n\n if not (new_recv or payload):\n # No more data available in the stdout pipe at the moment\n break\n\n sys.stdout.write(payload)\n self.__stdout_buffer.clear()\n elif not stdout_has_warned:\n # The child process has closed the stdout pipe and we have to print the warning message\n liblog.warning(\"The stdout pipe of the child process is not available anymore\")\n stdout_has_warned = True\n if self._stderr_is_open:\n while True:\n new_recv = self._raw_recv(stderr=True, numb=1024, timeout=0.1)\n payload = self.__stderr_buffer.get_data()\n\n if not (new_recv or payload):\n # No more data available in the stderr pipe at the moment\n break\n\n sys.stderr.write(payload)\n self.__stderr_buffer.clear()\n elif not stderr_has_warned:\n # The child process has closed the stderr pipe\n liblog.warning(\"The stderr pipe of the child process is not available anymore\")\n stderr_has_warned = True\n\n def interactive(self: PipeManager, prompt: str = prompt_default, auto_quit: bool = False) -> None:\n \"\"\"Manually interact with the child process.\n\n Args:\n prompt (str, optional): prompt for the interactive mode. Defaults to \"$ \" (prompt_default).\n auto_quit (bool, optional): whether to automatically quit the interactive mode when the child process is not running. 
Defaults to False.\n \"\"\"\n liblog.info(\"Calling interactive mode\")\n\n # Set up and run the terminal\n with extend_internal_debugger(self):\n libterminal = LibTerminal(prompt, self.sendline, self.__end_interactive_event, auto_quit)\n\n # Receive data from the child process's stdout and stderr pipes\n self._recv_for_interactive()\n\n # Be sure that the interactive mode has ended\n # If the the stderr and stdout pipes are closed, the interactive mode will continue until the user manually\n # stops it\n self.__end_interactive_event.wait()\n\n # Unset the interactive mode event\n self.__end_interactive_event.clear()\n\n # Reset the terminal\n libterminal.reset()\n\n liblog.info(\"Exiting interactive mode\")\n"},{"location":"from_pydoc/generated/commlink/pipe_manager/#libdebug.commlink.pipe_manager.PipeManager.__init__","title":"__init__(stdin_write, stdout_read, stderr_read)","text":"Initializes the PipeManager class.
Parameters:
Name Type Description Default stdin_write int file descriptor for stdin write.
required stdout_read int file descriptor for stdout read.
required stderr_read int file descriptor for stderr read.
required Source code inlibdebug/commlink/pipe_manager.py def __init__(self: PipeManager, stdin_write: int, stdout_read: int, stderr_read: int) -> None:\n \"\"\"Initializes the PipeManager class.\n\n Args:\n stdin_write (int): file descriptor for stdin write.\n stdout_read (int): file descriptor for stdout read.\n stderr_read (int): file descriptor for stderr read.\n \"\"\"\n self._stdin_write: int = stdin_write\n self._stdout_read: int = stdout_read\n self._stderr_read: int = stderr_read\n self._stderr_is_open: bool = True\n self._stdout_is_open: bool = True\n self._internal_debugger: InternalDebugger = provide_internal_debugger(self)\n\n self.__stdout_buffer: BufferData = BufferData(b\"\")\n self.__stderr_buffer: BufferData = BufferData(b\"\")\n\n self.__end_interactive_event: Event = Event()\n"},{"location":"from_pydoc/generated/commlink/pipe_manager/#libdebug.commlink.pipe_manager.PipeManager._buffered_recv","title":"_buffered_recv(numb, timeout, stderr)","text":"Receives at most numb bytes from the child process stdout or stderr.
Parameters:
Name Type Description Default numb int number of bytes to receive.
required timeout int timeout in seconds.
required stderr bool receive from stderr.
required Returns:
Name Type Description bytes bytes received bytes from the child process stdout or stderr.
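The three-way decision above (serve from the buffer, top it up from the open pipe, or flush leftovers from a closed one) can be sketched with a plain byte buffer. `buffered_recv` and its `raw_recv` callback are illustrative stand-ins, not libdebug API:

```python
def buffered_recv(buffer: bytearray, numb: int, pipe_open: bool, raw_recv) -> bytes:
    """Illustrative sketch of the buffer-first logic (not the real method)."""
    if len(buffer) >= numb:
        # Enough data already buffered: serve the request without touching the pipe
        received = bytes(buffer[:numb])
        del buffer[:numb]
    elif pipe_open:
        # Pipe still open: top the buffer up, then take at most numb bytes
        buffer.extend(raw_recv(numb - len(buffer)))
        received = bytes(buffer[:numb])
        del buffer[:numb]
    elif buffer:
        # Pipe closed but leftovers remain: return whatever is still buffered
        received = bytes(buffer)
        buffer.clear()
    else:
        # Pipe closed and nothing buffered: the caller can get nothing more
        raise RuntimeError("Broken pipe. Is the child process still alive?")
    return received
```

Note that a short read is possible: when the pipe closes mid-request, the caller gets the remaining buffered bytes rather than an error.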
Source code inlibdebug/commlink/pipe_manager.py def _buffered_recv(self: PipeManager, numb: int, timeout: int, stderr: bool) -> bytes:\n \"\"\"Receives at most numb bytes from the child process stdout or stderr.\n\n Args:\n numb (int): number of bytes to receive.\n timeout (int): timeout in seconds.\n stderr (bool): receive from stderr.\n\n Returns:\n bytes: received bytes from the child process stdout or stderr.\n \"\"\"\n data_buffer = self.__stderr_buffer if stderr else self.__stdout_buffer\n open_flag = self._stderr_is_open if stderr else self._stdout_is_open\n\n data_buffer_len = len(data_buffer)\n\n if data_buffer_len >= numb:\n # We have enough data in the buffer\n received = data_buffer[:numb]\n data_buffer.overwrite(data_buffer[numb:])\n elif open_flag:\n # We can receive more data\n remaining = numb - data_buffer_len\n self._raw_recv(numb=remaining, timeout=timeout, stderr=stderr)\n received = data_buffer[:numb]\n data_buffer.overwrite(data_buffer[numb:])\n elif data_buffer_len != 0:\n # The pipe is not available but we have some data in the buffer. We will return just that\n received = data_buffer.get_data()\n data_buffer.clear()\n else:\n # The pipe is not available and no data is buffered\n raise RuntimeError(f\"Broken {'stderr' if stderr else 'stdout'} pipe. Is the child process still alive?\")\n return received\n"},{"location":"from_pydoc/generated/commlink/pipe_manager/#libdebug.commlink.pipe_manager.PipeManager._raw_recv","title":"_raw_recv(numb=None, timeout=None, stderr=False)","text":"Receives at most numb bytes from the child process.
Parameters:
Name Type Description Default numb int | None number of bytes to receive. Defaults to None.
None timeout float timeout in seconds. Defaults to None.
None stderr bool receive from stderr. Defaults to False.
False Returns:
Name Type Description int int number of bytes received.
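The `numb`-plus-`timeout` branch of the loop above can be reduced to a standalone sketch: one fixed deadline, and each `select()` call only waits for the time that remains. `raw_recv` here is an illustrative simplification that returns the bytes directly instead of appending to an internal buffer:

```python
import os
import select
import time

def raw_recv(fd: int, numb: int, timeout: float) -> bytes:
    """Illustrative sketch: read until numb bytes arrive, the pipe runs dry,
    EOF is hit, or the deadline passes."""
    chunks: list[bytes] = []
    end_time = time.time() + timeout  # one fixed deadline for the whole call
    while sum(len(c) for c in chunks) < numb:
        remaining = max(0.0, end_time - time.time())
        if remaining == 0:
            break  # timeout reached
        ready, _, _ = select.select([fd], [], [], remaining)
        if not ready:
            break  # nothing readable before the deadline
        data = os.read(fd, 4096)
        if not data:
            break  # EOF: the writer closed its end
        chunks.append(data)
    return b"".join(chunks)
```

Passing the *remaining* time to `select()` on every iteration is what keeps the whole call bounded by a single timeout even when data trickles in.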
Source code inlibdebug/commlink/pipe_manager.py def _raw_recv(\n self: PipeManager,\n numb: int | None = None,\n timeout: float | None = None,\n stderr: bool = False,\n) -> int:\n \"\"\"Receives at most numb bytes from the child process.\n\n Args:\n numb (int | None, optional): number of bytes to receive. Defaults to None.\n timeout (float, optional): timeout in seconds. Defaults to None.\n stderr (bool, optional): receive from stderr. Defaults to False.\n\n Returns:\n int: number of bytes received.\n \"\"\"\n pipe_read: int = self._stderr_read if stderr else self._stdout_read\n\n if not pipe_read:\n raise RuntimeError(\"No pipe of the child process\")\n\n data_buffer = self.__stderr_buffer if stderr else self.__stdout_buffer\n\n received_numb = 0\n\n if numb is not None and timeout is not None:\n # Checking the numb\n if numb < 0:\n raise ValueError(\"The number of bytes to receive must be positive\")\n\n # Setting the alarm\n end_time = time.time() + timeout\n\n while numb > received_numb:\n if (remaining_time := max(0, end_time - time.time())) == 0:\n # Timeout reached\n break\n\n try:\n ready, _, _ = select.select([pipe_read], [], [], remaining_time)\n if ready:\n data = os.read(pipe_read, 4096)\n received_numb += len(data)\n data_buffer.append(data)\n else:\n # No more data available in the pipe at the moment\n break\n except OSError as e:\n if e.errno != EAGAIN:\n if stderr:\n self._stderr_is_open = False\n else:\n self._stdout_is_open = False\n elif timeout is not None:\n try:\n ready, _, _ = select.select([pipe_read], [], [], timeout)\n if ready:\n data = os.read(pipe_read, 4096)\n received_numb += len(data)\n data_buffer.append(data)\n except OSError as e:\n if e.errno != EAGAIN:\n if stderr:\n self._stderr_is_open = False\n else:\n self._stdout_is_open = False\n else:\n try:\n data = os.read(pipe_read, 4096)\n if data:\n received_numb += len(data)\n data_buffer.append(data)\n except OSError as e:\n if e.errno != EAGAIN:\n if stderr:\n self._stderr_is_open 
= False\n else:\n self._stdout_is_open = False\n\n if received_numb:\n liblog.pipe(f\"{'stderr' if stderr else 'stdout'} {received_numb}B: {data_buffer[:received_numb]!r}\")\n return received_numb\n"},{"location":"from_pydoc/generated/commlink/pipe_manager/#libdebug.commlink.pipe_manager.PipeManager._recv_for_interactive","title":"_recv_for_interactive()","text":"Receives data from the child process.
Source code inlibdebug/commlink/pipe_manager.py def _recv_for_interactive(self: PipeManager) -> None:\n \"\"\"Receives data from the child process.\"\"\"\n stdout_has_warned = False\n stderr_has_warned = False\n\n while not (self.__end_interactive_event.is_set() or (stdout_has_warned and stderr_has_warned)):\n # We can afford to treat stdout and stderr sequentially. This approach should also prevent\n # messing up the order of the information printed by the child process.\n # To avoid starvation, we switch between pipes upon receiving a bunch of data from one of them.\n if self._stdout_is_open:\n while True:\n new_recv = self._raw_recv(numb=1024, timeout=0.1)\n payload = self.__stdout_buffer.get_data()\n\n if not (new_recv or payload):\n # No more data available in the stdout pipe at the moment\n break\n\n sys.stdout.write(payload)\n self.__stdout_buffer.clear()\n elif not stdout_has_warned:\n # The child process has closed the stdout pipe and we have to print the warning message\n liblog.warning(\"The stdout pipe of the child process is not available anymore\")\n stdout_has_warned = True\n if self._stderr_is_open:\n while True:\n new_recv = self._raw_recv(stderr=True, numb=1024, timeout=0.1)\n payload = self.__stderr_buffer.get_data()\n\n if not (new_recv or payload):\n # No more data available in the stderr pipe at the moment\n break\n\n sys.stderr.write(payload)\n self.__stderr_buffer.clear()\n elif not stderr_has_warned:\n # The child process has closed the stderr pipe\n liblog.warning(\"The stderr pipe of the child process is not available anymore\")\n stderr_has_warned = True\n"},{"location":"from_pydoc/generated/commlink/pipe_manager/#libdebug.commlink.pipe_manager.PipeManager._recvonceuntil","title":"_recvonceuntil(delims, drop=False, timeout=timeout_default, stderr=False, optional=False)","text":"Receives data from the child process until the delimiters are found.
Parameters:
Name Type Description Default delims bytes delimiters where to stop.
required drop bool drop the delimiter. Defaults to False.
False timeout float timeout in seconds. Defaults to timeout_default.
timeout_default stderr bool receive from stderr. Defaults to False.
False optional bool whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False.
False Returns:
Name Type Description bytes bytes received data from the child process stdout.
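The search/refill/deadline loop above can be sketched against a plain buffer; the `refill` callback is an illustrative stand-in for the raw pipe read:

```python
import time

def recv_once_until(buffer: bytearray, delims: bytes, refill,
                    drop: bool = False, timeout: float = 5.0) -> bytes:
    """Illustrative sketch of the single-delimiter loop: search the buffer,
    refilling until the delimiter appears or the deadline passes."""
    end_time = time.time() + timeout
    while (until := bytes(buffer).find(delims)) == -1:
        if max(0.0, end_time - time.time()) == 0:
            raise TimeoutError("Timeout reached")
        buffer.extend(refill())
    # Cut at the delimiter, optionally keeping it in the returned data
    cut = until if drop else until + len(delims)
    received = bytes(buffer[:cut])
    del buffer[: until + len(delims)]  # consumed data and delimiter leave the buffer
    return received
```

Either way, the delimiter itself is always removed from the buffer; `drop` only controls whether it appears in the returned bytes.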
Source code inlibdebug/commlink/pipe_manager.py def _recvonceuntil(\n self: PipeManager,\n delims: bytes,\n drop: bool = False,\n timeout: float = timeout_default,\n stderr: bool = False,\n optional: bool = False,\n) -> bytes:\n \"\"\"Receives data from the child process until the delimiters are found.\n\n Args:\n delims (bytes): delimiters where to stop.\n drop (bool, optional): drop the delimiter. Defaults to False.\n timeout (float, optional): timeout in seconds. Defaults to timeout_default.\n stderr (bool, optional): receive from stderr. Defaults to False.\n optional (bool, optional): whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False.\n\n Returns:\n bytes: received data from the child process stdout.\n \"\"\"\n if isinstance(delims, str):\n liblog.warning(\"The delimiters are a string, converting to bytes\")\n delims = delims.encode()\n\n # Buffer for the received data\n data_buffer = self.__stderr_buffer if stderr else self.__stdout_buffer\n\n # Setting the alarm\n end_time = time.time() + timeout\n while True:\n open_flag = self._stderr_is_open if stderr else self._stdout_is_open\n\n if (until := data_buffer.find(delims)) != -1:\n break\n\n if (remaining_time := max(0, end_time - time.time())) == 0:\n raise TimeoutError(\"Timeout reached\")\n\n if not open_flag:\n # The delimiters are not in the buffer and the pipe is not available\n raise RuntimeError(f\"Broken {'stderr' if stderr else 'stdout'} pipe. 
Is the child process still alive?\")\n\n received_numb = self._raw_recv(stderr=stderr, timeout=remaining_time)\n\n if (\n received_numb == 0\n and not self._internal_debugger.running\n and self._internal_debugger.is_debugging\n and (event := self._internal_debugger.resume_context.get_event_type())\n ):\n # We will not receive more data, the child process is not running\n if optional:\n return b\"\"\n event = self._internal_debugger.resume_context.get_event_type()\n raise RuntimeError(\n f\"Receive until error. The debugged process has stopped due to the following event(s). {event}\",\n )\n received_data = data_buffer[:until]\n if not drop:\n # Include the delimiters in the received data\n received_data += data_buffer[until : until + len(delims)]\n remaining_data = data_buffer[until + len(delims) :]\n data_buffer.overwrite(remaining_data)\n return received_data\n"},{"location":"from_pydoc/generated/commlink/pipe_manager/#libdebug.commlink.pipe_manager.PipeManager._recvuntil","title":"_recvuntil(delims, occurences=1, drop=False, timeout=timeout_default, stderr=False, optional=False)","text":"Receives data from the child process until the delimiters are found occurences time.
Parameters:
Name Type Description Default delims bytes delimiters where to stop.
required occurences int number of delimiters to find. Defaults to 1.
1 drop bool drop the delimiter. Defaults to False.
False timeout float timeout in seconds. Defaults to timeout_default.
timeout_default stderr bool receive from stderr. Defaults to False.
False optional bool whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False.
False Returns:
Name Type Description bytes bytes received data from the child process stdout.
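The occurrence loop above shares one deadline across every delimiter: `end_time` is computed once, and each call to the single-delimiter primitive only gets the time that is left. A minimal sketch, with `recv_once` standing in for `_recvonceuntil`:

```python
import time

def recv_until(recv_once, occurences: int, timeout: float) -> bytes:
    """Illustrative sketch of the shared deadline across occurrences."""
    if occurences <= 0:
        raise ValueError("The number of occurences to receive must be positive")
    out = b""
    end_time = time.time() + timeout
    for _ in range(occurences):
        remaining = max(0.0, end_time - time.time())
        out += recv_once(timeout=remaining)  # later calls see a smaller budget
    return out
```

This is why asking for three occurrences with a five-second timeout never waits fifteen seconds in total.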
Source code inlibdebug/commlink/pipe_manager.py def _recvuntil(\n self: PipeManager,\n delims: bytes,\n occurences: int = 1,\n drop: bool = False,\n timeout: float = timeout_default,\n stderr: bool = False,\n optional: bool = False,\n) -> bytes:\n \"\"\"Receives data from the child process until the delimiters are found occurences time.\n\n Args:\n delims (bytes): delimiters where to stop.\n occurences (int, optional): number of delimiters to find. Defaults to 1.\n drop (bool, optional): drop the delimiter. Defaults to False.\n timeout (float, optional): timeout in seconds. Defaults to timeout_default.\n stderr (bool, optional): receive from stderr. Defaults to False.\n optional (bool, optional): whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False.\n\n Returns:\n bytes: received data from the child process stdout.\n \"\"\"\n if occurences <= 0:\n raise ValueError(\"The number of occurences to receive must be positive\")\n\n # Buffer for the received data\n data_buffer = b\"\"\n\n # Setting the alarm\n end_time = time.time() + timeout\n\n for _ in range(occurences):\n # Adjust the timeout for select to the remaining time\n remaining_time = None if end_time is None else max(0, end_time - time.time())\n\n data_buffer += self._recvonceuntil(\n delims=delims,\n drop=drop,\n timeout=remaining_time,\n stderr=stderr,\n optional=optional,\n )\n\n return data_buffer\n"},{"location":"from_pydoc/generated/commlink/pipe_manager/#libdebug.commlink.pipe_manager.PipeManager.close","title":"close()","text":"Closes all the pipes of the child process.
Source code inlibdebug/commlink/pipe_manager.py def close(self: PipeManager) -> None:\n \"\"\"Closes all the pipes of the child process.\"\"\"\n os.close(self._stdin_write)\n os.close(self._stdout_read)\n os.close(self._stderr_read)\n"},{"location":"from_pydoc/generated/commlink/pipe_manager/#libdebug.commlink.pipe_manager.PipeManager.interactive","title":"interactive(prompt=prompt_default, auto_quit=False)","text":"Manually interact with the child process.
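The lifecycle of the interactive session can be sketched with a `threading.Event`: output is forwarded until the end-of-session event fires, then the event is re-armed and the terminal restored. `drain` and `reset_terminal` are illustrative stand-ins for the real terminal plumbing, not libdebug API:

```python
from threading import Event

def interactive_session(end_event: Event, drain, reset_terminal) -> None:
    """Illustrative sketch of interactive()'s shutdown handshake."""
    drain()            # pump child stdout/stderr while the session runs
    end_event.wait()   # block until the user quits (or auto_quit fires)
    end_event.clear()  # re-arm the event so a later session can reuse it
    reset_terminal()   # undo the terminal setup
```

Clearing the event before resetting the terminal is what lets `interactive()` be called again on the same `PipeManager`.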
Parameters:
Name Type Description Default prompt str prompt for the interactive mode. Defaults to \"$ \" (prompt_default).
prompt_default auto_quit bool whether to automatically quit the interactive mode when the child process is not running. Defaults to False.
False Source code in libdebug/commlink/pipe_manager.py def interactive(self: PipeManager, prompt: str = prompt_default, auto_quit: bool = False) -> None:\n \"\"\"Manually interact with the child process.\n\n Args:\n prompt (str, optional): prompt for the interactive mode. Defaults to \"$ \" (prompt_default).\n auto_quit (bool, optional): whether to automatically quit the interactive mode when the child process is not running. Defaults to False.\n \"\"\"\n liblog.info(\"Calling interactive mode\")\n\n # Set up and run the terminal\n with extend_internal_debugger(self):\n libterminal = LibTerminal(prompt, self.sendline, self.__end_interactive_event, auto_quit)\n\n # Receive data from the child process's stdout and stderr pipes\n self._recv_for_interactive()\n\n # Be sure that the interactive mode has ended\n # If the stderr and stdout pipes are closed, the interactive mode will continue until the user manually\n # stops it\n self.__end_interactive_event.wait()\n\n # Unset the interactive mode event\n self.__end_interactive_event.clear()\n\n # Reset the terminal\n libterminal.reset()\n\n liblog.info(\"Exiting interactive mode\")\n"},{"location":"from_pydoc/generated/commlink/pipe_manager/#libdebug.commlink.pipe_manager.PipeManager.recv","title":"recv(numb=4096, timeout=timeout_default)","text":"Receives at most numb bytes from the child process stdout.
Parameters:
Name Type Description Default numb int number of bytes to receive. Defaults to 4096.
4096 timeout int timeout in seconds. Defaults to timeout_default.
timeout_default Returns:
Name Type Description bytes bytes received bytes from the child process stdout.
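The "at most numb bytes" contract can be illustrated on a raw pipe; `recv_at_most` is an illustrative reduction, since the real method also drains the internal buffer and honours a timeout:

```python
import os

def recv_at_most(fd: int, numb: int = 4096) -> bytes:
    """Illustrative sketch of the 'at most numb bytes' read contract."""
    return os.read(fd, numb)

r, w = os.pipe()
os.write(w, b"hello world")
first = recv_at_most(r, 5)   # only 5 bytes are consumed...
rest = recv_at_most(r)       # ...the remainder stays available for later calls
```

Bytes not taken by one call are not lost; they remain readable for the next call, which is the same behaviour the buffered `recv` exposes.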
Source code inlibdebug/commlink/pipe_manager.py def recv(\n self: PipeManager,\n numb: int = 4096,\n timeout: int = timeout_default,\n) -> bytes:\n \"\"\"Receives at most numb bytes from the child process stdout.\n\n Args:\n numb (int, optional): number of bytes to receive. Defaults to 4096.\n timeout (int, optional): timeout in seconds. Defaults to timeout_default.\n\n Returns:\n bytes: received bytes from the child process stdout.\n \"\"\"\n return self._buffered_recv(numb=numb, timeout=timeout, stderr=False)\n"},{"location":"from_pydoc/generated/commlink/pipe_manager/#libdebug.commlink.pipe_manager.PipeManager.recverr","title":"recverr(numb=4096, timeout=timeout_default)","text":"Receives at most numb bytes from the child process stderr.
Parameters:
Name Type Description Default numb int number of bytes to receive. Defaults to 4096.
4096 timeout int timeout in seconds. Defaults to timeout_default.
timeout_default Returns:
Name Type Description bytes bytes received bytes from the child process stderr.
Source code inlibdebug/commlink/pipe_manager.py def recverr(\n self: PipeManager,\n numb: int = 4096,\n timeout: int = timeout_default,\n) -> bytes:\n \"\"\"Receives at most numb bytes from the child process stderr.\n\n Args:\n numb (int, optional): number of bytes to receive. Defaults to 4096.\n timeout (int, optional): timeout in seconds. Defaults to timeout_default.\n\n Returns:\n bytes: received bytes from the child process stderr.\n \"\"\"\n return self._buffered_recv(numb=numb, timeout=timeout, stderr=True)\n"},{"location":"from_pydoc/generated/commlink/pipe_manager/#libdebug.commlink.pipe_manager.PipeManager.recverrline","title":"recverrline(numlines=1, drop=True, timeout=timeout_default, optional=False)","text":"Receives numlines lines from the child process stderr.
Parameters:
Name Type Description Default
numlines int number of lines to receive. Defaults to 1. 1
drop bool drop the line ending. Defaults to True. True
timeout int timeout in seconds. Defaults to timeout_default. timeout_default
optional bool whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False. False
Returns:
Name Type Description
bytes bytes received lines from the child process stderr.
Source code inlibdebug/commlink/pipe_manager.py def recverrline(\n self: PipeManager,\n numlines: int = 1,\n drop: bool = True,\n timeout: int = timeout_default,\n optional: bool = False,\n) -> bytes:\n \"\"\"Receives numlines lines from the child process stderr.\n\n Args:\n numlines (int, optional): number of lines to receive. Defaults to 1.\n drop (bool, optional): drop the line ending. Defaults to True.\n timeout (int, optional): timeout in seconds. Defaults to timeout_default.\n optional (bool, optional): whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False.\n\n Returns:\n bytes: received lines from the child process stdout.\n \"\"\"\n return self.recverruntil(delims=b\"\\n\", occurences=numlines, drop=drop, timeout=timeout, optional=optional)\n"},{"location":"from_pydoc/generated/commlink/pipe_manager/#libdebug.commlink.pipe_manager.PipeManager.recverruntil","title":"recverruntil(delims, occurences=1, drop=False, timeout=timeout_default, optional=False)","text":"Receives data from the child process stderr until the delimiters are found.
Parameters:
Name Type Description Default
delims bytes delimiters where to stop. required
occurences int number of delimiters to find. Defaults to 1. 1
drop bool drop the delimiter. Defaults to False. False
timeout int timeout in seconds. Defaults to timeout_default. timeout_default
optional bool whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False. False
Returns:
Name Type Description
bytes bytes received data from the child process stderr.
Source code inlibdebug/commlink/pipe_manager.py def recverruntil(\n self: PipeManager,\n delims: bytes,\n occurences: int = 1,\n drop: bool = False,\n timeout: int = timeout_default,\n optional: bool = False,\n) -> bytes:\n \"\"\"Receives data from the child process stderr until the delimiters are found.\n\n Args:\n delims (bytes): delimiters where to stop.\n occurences (int, optional): number of delimiters to find. Defaults to 1.\n drop (bool, optional): drop the delimiter. Defaults to False.\n timeout (int, optional): timeout in seconds. Defaults to timeout_default.\n optional (bool, optional): whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False.\n\n Returns:\n bytes: received data from the child process stderr.\n \"\"\"\n return self._recvuntil(\n delims=delims,\n occurences=occurences,\n drop=drop,\n timeout=timeout,\n stderr=True,\n optional=optional,\n )\n"},{"location":"from_pydoc/generated/commlink/pipe_manager/#libdebug.commlink.pipe_manager.PipeManager.recvline","title":"recvline(numlines=1, drop=True, timeout=timeout_default, optional=False)","text":"Receives numlines lines from the child process stdout.
Parameters:
Name Type Description Default
numlines int number of lines to receive. Defaults to 1. 1
drop bool drop the line ending. Defaults to True. True
timeout int timeout in seconds. Defaults to timeout_default. timeout_default
optional bool whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False. False
Returns:
Name Type Description
bytes bytes received lines from the child process stdout.
Source code inlibdebug/commlink/pipe_manager.py def recvline(\n self: PipeManager,\n numlines: int = 1,\n drop: bool = True,\n timeout: int = timeout_default,\n optional: bool = False,\n) -> bytes:\n \"\"\"Receives numlines lines from the child process stdout.\n\n Args:\n numlines (int, optional): number of lines to receive. Defaults to 1.\n drop (bool, optional): drop the line ending. Defaults to True.\n timeout (int, optional): timeout in seconds. Defaults to timeout_default.\n optional (bool, optional): whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False.\n\n Returns:\n bytes: received lines from the child process stdout.\n \"\"\"\n return self.recvuntil(delims=b\"\\n\", occurences=numlines, drop=drop, timeout=timeout, optional=optional)\n"},{"location":"from_pydoc/generated/commlink/pipe_manager/#libdebug.commlink.pipe_manager.PipeManager.recvuntil","title":"recvuntil(delims, occurences=1, drop=False, timeout=timeout_default, optional=False)","text":"Receives data from the child process stdout until the delimiters are found.
Parameters:
Name Type Description Default
delims bytes delimiters where to stop. required
occurences int number of delimiters to find. Defaults to 1. 1
drop bool drop the delimiter. Defaults to False. False
timeout int timeout in seconds. Defaults to timeout_default. timeout_default
optional bool whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False. False
Returns:
Name Type Description
bytes bytes received data from the child process stdout.
Source code inlibdebug/commlink/pipe_manager.py def recvuntil(\n self: PipeManager,\n delims: bytes,\n occurences: int = 1,\n drop: bool = False,\n timeout: int = timeout_default,\n optional: bool = False,\n) -> bytes:\n \"\"\"Receives data from the child process stdout until the delimiters are found.\n\n Args:\n delims (bytes): delimiters where to stop.\n occurences (int, optional): number of delimiters to find. Defaults to 1.\n drop (bool, optional): drop the delimiter. Defaults to False.\n timeout (int, optional): timeout in seconds. Defaults to timeout_default.\n optional (bool, optional): whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False.\n\n Returns:\n bytes: received data from the child process stdout.\n \"\"\"\n return self._recvuntil(\n delims=delims,\n occurences=occurences,\n drop=drop,\n timeout=timeout,\n stderr=False,\n optional=optional,\n )\n"},{"location":"from_pydoc/generated/commlink/pipe_manager/#libdebug.commlink.pipe_manager.PipeManager.send","title":"send(data)","text":"Sends data to the child process stdin.
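recvuntil and recverruntil share the same delimiter contract: read until `occurences` copies of the delimiter have been seen, optionally dropping the final one. A minimal sketch of that contract over an in-memory buffer (the function name is hypothetical, and a ValueError stands in for the real API's timeout):

```python
# Standalone sketch of the recvuntil()/recverruntil() delimiter semantics:
# scan until `occurences` delimiters have been found, optionally dropping
# the last delimiter from the returned data. Not libdebug code.

def recv_until(buffer: bytes, delims: bytes, occurences: int = 1, drop: bool = False) -> bytes:
    end = -1
    for _ in range(occurences):
        # Raises ValueError if the delimiter is missing; the real API
        # would instead block until the timeout expires.
        end = buffer.index(delims, end + 1)
    data = buffer[: end + len(delims)]
    return data[: -len(delims)] if drop else data


print(recv_until(b"a\nb\nc\n", b"\n", occurences=2))  # b'a\nb\n'
print(recv_until(b"menu> ", b"> ", drop=True))        # b'menu'
```

Note that `drop` removes only the final delimiter occurrence, matching the documented behaviour of dropping "the delimiter" rather than all of them.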
Parameters:
Name Type Description Default
data bytes data to send. required
Returns:
Name Type Description
int int number of bytes sent.
Raises:
Type Description
RuntimeError no stdin pipe of the child process.
Source code inlibdebug/commlink/pipe_manager.py def send(self: PipeManager, data: bytes) -> int:\n \"\"\"Sends data to the child process stdin.\n\n Args:\n data (bytes): data to send.\n\n Returns:\n int: number of bytes sent.\n\n Raises:\n RuntimeError: no stdin pipe of the child process.\n \"\"\"\n if not self._stdin_write:\n raise RuntimeError(\"No stdin pipe of the child process\")\n\n liblog.pipe(f\"Sending {len(data)} bytes to the child process: {data!r}\")\n\n if isinstance(data, str):\n liblog.warning(\"The input data is a string, converting to bytes\")\n data = data.encode()\n\n try:\n number_bytes = os.write(self._stdin_write, data)\n except OSError as e:\n raise RuntimeError(\"Broken pipe. Is the child process still running?\") from e\n\n return number_bytes\n"},{"location":"from_pydoc/generated/commlink/pipe_manager/#libdebug.commlink.pipe_manager.PipeManager.sendafter","title":"sendafter(delims, data, occurences=1, drop=False, timeout=timeout_default, optional=False)","text":"Sends data to the child process stdin after the delimiters are found in the stdout.
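One convenience documented in the source above: a str payload is converted to bytes with a warning before writing. A sketch of that normalisation step alone, with Python's warnings module standing in for libdebug's logger (the function name is illustrative):

```python
# Sketch of the input normalisation performed by send(): string payloads
# are converted to bytes with a warning. Standalone model, not libdebug code.

import warnings


def normalise_payload(data) -> bytes:
    if isinstance(data, str):
        warnings.warn("The input data is a string, converting to bytes")
        data = data.encode()
    return data


print(normalise_payload("flag{test}"))  # b'flag{test}'
print(normalise_payload(b"raw"))        # b'raw' (already bytes, passed through)
```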
Parameters:
Name Type Description Default
delims bytes delimiters where to stop. required
data bytes data to send. required
occurences int number of delimiters to find. Defaults to 1. 1
drop bool drop the delimiter. Defaults to False. False
timeout int timeout in seconds. Defaults to timeout_default. timeout_default
optional bool whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False. False
Returns:
Name Type Description
bytes bytes received data from the child process stdout.
int int number of bytes sent.
Source code inlibdebug/commlink/pipe_manager.py def sendafter(\n self: PipeManager,\n delims: bytes,\n data: bytes,\n occurences: int = 1,\n drop: bool = False,\n timeout: int = timeout_default,\n optional: bool = False,\n) -> tuple[bytes, int]:\n \"\"\"Sends data to the child process stdin after the delimiters are found in the stdout.\n\n Args:\n delims (bytes): delimiters where to stop.\n data (bytes): data to send.\n occurences (int, optional): number of delimiters to find. Defaults to 1.\n drop (bool, optional): drop the delimiter. Defaults to False.\n timeout (int, optional): timeout in seconds. Defaults to timeout_default.\n optional (bool, optional): whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False.\n\n Returns:\n bytes: received data from the child process stdout.\n int: number of bytes sent.\n \"\"\"\n received = self.recvuntil(delims=delims, occurences=occurences, drop=drop, timeout=timeout, optional=optional)\n sent = self.send(data)\n return (received, sent)\n"},{"location":"from_pydoc/generated/commlink/pipe_manager/#libdebug.commlink.pipe_manager.PipeManager.sendaftererr","title":"sendaftererr(delims, data, occurences=1, drop=False, timeout=timeout_default, optional=False)","text":"Sends data to the child process stdin after the delimiters are found in stderr.
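sendafter is simply the composition of recvuntil and send, returning both results as a tuple. The shape of that composition, sketched over a hypothetical in-memory pipe (not libdebug's PipeManager):

```python
# Illustrative composition behind sendafter(): receive until a delimiter,
# then send, returning (received, bytes_sent). In-memory stand-in pipe.

class FakePipe:
    def __init__(self, incoming: bytes) -> None:
        self._incoming = incoming
        self.sent = b""

    def recvuntil(self, delims: bytes, drop: bool = False) -> bytes:
        end = self._incoming.index(delims) + len(delims)
        data, self._incoming = self._incoming[:end], self._incoming[end:]
        return data[: -len(delims)] if drop else data

    def send(self, data: bytes) -> int:
        self.sent += data
        return len(data)

    def sendafter(self, delims: bytes, data: bytes, drop: bool = False) -> tuple:
        # Wait for the prompt first, then write the reply.
        received = self.recvuntil(delims, drop=drop)
        return received, self.send(data)


pipe = FakePipe(b"name: ")
received, sent = pipe.sendafter(b": ", b"alice\n")
print(received, sent)  # b'name: ' 6
```

sendlineafter and the *err variants follow the same pattern, swapping in sendline and the stderr-side receive respectively.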
Parameters:
Name Type Description Default
delims bytes delimiters where to stop. required
data bytes data to send. required
occurences int number of delimiters to find. Defaults to 1. 1
drop bool drop the delimiter. Defaults to False. False
timeout int timeout in seconds. Defaults to timeout_default. timeout_default
optional bool whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False. False
Returns:
Name Type Description
bytes bytes received data from the child process stderr.
int int number of bytes sent.
Source code inlibdebug/commlink/pipe_manager.py def sendaftererr(\n self: PipeManager,\n delims: bytes,\n data: bytes,\n occurences: int = 1,\n drop: bool = False,\n timeout: int = timeout_default,\n optional: bool = False,\n) -> tuple[bytes, int]:\n \"\"\"Sends data to the child process stdin after the delimiters are found in stderr.\n\n Args:\n delims (bytes): delimiters where to stop.\n data (bytes): data to send.\n occurences (int, optional): number of delimiters to find. Defaults to 1.\n drop (bool, optional): drop the delimiter. Defaults to False.\n timeout (int, optional): timeout in seconds. Defaults to timeout_default.\n optional (bool, optional): whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False.\n\n Returns:\n bytes: received data from the child process stderr.\n int: number of bytes sent.\n \"\"\"\n received = self.recverruntil(\n delims=delims,\n occurences=occurences,\n drop=drop,\n timeout=timeout,\n optional=optional,\n )\n sent = self.send(data)\n return (received, sent)\n"},{"location":"from_pydoc/generated/commlink/pipe_manager/#libdebug.commlink.pipe_manager.PipeManager.sendline","title":"sendline(data)","text":"Sends data to the child process stdin and append a newline.
Parameters:
Name Type Description Default
data bytes data to send. required
Returns:
Name Type Description
int int number of bytes sent.
Source code inlibdebug/commlink/pipe_manager.py def sendline(self: PipeManager, data: bytes) -> int:\n \"\"\"Sends data to the child process stdin and append a newline.\n\n Args:\n data (bytes): data to send.\n\n Returns:\n int: number of bytes sent.\n \"\"\"\n if isinstance(data, str):\n liblog.warning(\"The input data is a string, converting to bytes\")\n data = data.encode()\n return self.send(data=data + b\"\\n\")\n"},{"location":"from_pydoc/generated/commlink/pipe_manager/#libdebug.commlink.pipe_manager.PipeManager.sendlineafter","title":"sendlineafter(delims, data, occurences=1, drop=False, timeout=timeout_default, optional=False)","text":"Sends line to the child process stdin after the delimiters are found in the stdout.
Parameters:
Name Type Description Default
delims bytes delimiters where to stop. required
data bytes data to send. required
occurences int number of delimiters to find. Defaults to 1. 1
drop bool drop the delimiter. Defaults to False. False
timeout int timeout in seconds. Defaults to timeout_default. timeout_default
optional bool whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False. False
Returns:
Name Type Description
bytes bytes received data from the child process stdout.
int int number of bytes sent.
Source code inlibdebug/commlink/pipe_manager.py def sendlineafter(\n self: PipeManager,\n delims: bytes,\n data: bytes,\n occurences: int = 1,\n drop: bool = False,\n timeout: int = timeout_default,\n optional: bool = False,\n) -> tuple[bytes, int]:\n \"\"\"Sends line to the child process stdin after the delimiters are found in the stdout.\n\n Args:\n delims (bytes): delimiters where to stop.\n data (bytes): data to send.\n occurences (int, optional): number of delimiters to find. Defaults to 1.\n drop (bool, optional): drop the delimiter. Defaults to False.\n timeout (int, optional): timeout in seconds. Defaults to timeout_default.\n optional (bool, optional): whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False.\n\n Returns:\n bytes: received data from the child process stdout.\n int: number of bytes sent.\n \"\"\"\n received = self.recvuntil(delims=delims, occurences=occurences, drop=drop, timeout=timeout, optional=optional)\n sent = self.sendline(data)\n return (received, sent)\n"},{"location":"from_pydoc/generated/commlink/pipe_manager/#libdebug.commlink.pipe_manager.PipeManager.sendlineaftererr","title":"sendlineaftererr(delims, data, occurences=1, drop=False, timeout=timeout_default, optional=False)","text":"Sends line to the child process stdin after the delimiters are found in the stderr.
Parameters:
Name Type Description Default
delims bytes delimiters where to stop. required
data bytes data to send. required
occurences int number of delimiters to find. Defaults to 1. 1
drop bool drop the delimiter. Defaults to False. False
timeout int timeout in seconds. Defaults to timeout_default. timeout_default
optional bool whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False. False
Returns:
Name Type Description
bytes bytes received data from the child process stderr.
int int number of bytes sent.
Source code inlibdebug/commlink/pipe_manager.py def sendlineaftererr(\n self: PipeManager,\n delims: bytes,\n data: bytes,\n occurences: int = 1,\n drop: bool = False,\n timeout: int = timeout_default,\n optional: bool = False,\n) -> tuple[bytes, int]:\n \"\"\"Sends line to the child process stdin after the delimiters are found in the stderr.\n\n Args:\n delims (bytes): delimiters where to stop.\n data (bytes): data to send.\n occurences (int, optional): number of delimiters to find. Defaults to 1.\n drop (bool, optional): drop the delimiter. Defaults to False.\n timeout (int, optional): timeout in seconds. Defaults to timeout_default.\n optional (bool, optional): whether to ignore the wait for the received input if the command is executed when the process is stopped. Defaults to False.\n\n Returns:\n bytes: received data from the child process stderr.\n int: number of bytes sent.\n \"\"\"\n received = self.recverruntil(\n delims=delims,\n occurences=occurences,\n drop=drop,\n timeout=timeout,\n optional=optional,\n )\n sent = self.sendline(data)\n return (received, sent)\n"},{"location":"from_pydoc/generated/commlink/std_wrapper/","title":"libdebug.commlink.std_wrapper","text":""},{"location":"from_pydoc/generated/commlink/std_wrapper/#libdebug.commlink.std_wrapper.StdWrapper","title":"StdWrapper","text":"Wrapper around stderr/stdout to allow for custom write method.
Source code inlibdebug/commlink/std_wrapper.py class StdWrapper:\n \"\"\"Wrapper around stderr/stdout to allow for custom write method.\"\"\"\n\n def __init__(self: StdWrapper, fd: object, terminal: LibTerminal) -> None:\n \"\"\"Initializes the StderrWrapper object.\"\"\"\n self._fd: object = fd\n self._terminal: LibTerminal = terminal\n\n def write(self, payload: bytes | str) -> int:\n \"\"\"Overloads the write method to allow for custom behavior.\"\"\"\n return self._terminal._write_manager(payload)\n\n def __getattr__(self, k: any) -> any:\n \"\"\"Ensure that all other attributes are forwarded to the original file descriptor.\"\"\"\n return getattr(self._fd, k)\n"},{"location":"from_pydoc/generated/commlink/std_wrapper/#libdebug.commlink.std_wrapper.StdWrapper.__getattr__","title":"__getattr__(k)","text":"Ensure that all other attributes are forwarded to the original file descriptor.
Source code inlibdebug/commlink/std_wrapper.py def __getattr__(self, k: any) -> any:\n \"\"\"Ensure that all other attributes are forwarded to the original file descriptor.\"\"\"\n return getattr(self._fd, k)\n"},{"location":"from_pydoc/generated/commlink/std_wrapper/#libdebug.commlink.std_wrapper.StdWrapper.__init__","title":"__init__(fd, terminal)","text":"Initializes the StderrWrapper object.
Source code inlibdebug/commlink/std_wrapper.py def __init__(self: StdWrapper, fd: object, terminal: LibTerminal) -> None:\n \"\"\"Initializes the StderrWrapper object.\"\"\"\n self._fd: object = fd\n self._terminal: LibTerminal = terminal\n"},{"location":"from_pydoc/generated/commlink/std_wrapper/#libdebug.commlink.std_wrapper.StdWrapper.write","title":"write(payload)","text":"Overloads the write method to allow for custom behavior.
Source code inlibdebug/commlink/std_wrapper.py def write(self, payload: bytes | str) -> int:\n \"\"\"Overloads the write method to allow for custom behavior.\"\"\"\n return self._terminal._write_manager(payload)\n"},{"location":"from_pydoc/generated/data/breakpoint/","title":"libdebug.data.breakpoint","text":""},{"location":"from_pydoc/generated/data/breakpoint/#libdebug.data.breakpoint.Breakpoint","title":"Breakpoint dataclass","text":"A breakpoint in the target process.
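StdWrapper's design is the classic proxy pattern: override write with custom behaviour, forward every other attribute to the wrapped stream through __getattr__. A self-contained sketch of the same pattern over io.StringIO, with LibTerminal replaced by a plain callback (names here are illustrative):

```python
# Proxy pattern used by StdWrapper: intercept write(), delegate the rest.
# Standalone sketch; `LoggingWrapper` and `on_write` are stand-ins.

import io


class LoggingWrapper:
    def __init__(self, fd, on_write) -> None:
        self._fd = fd
        self._on_write = on_write

    def write(self, payload: str) -> int:
        self._on_write(payload)         # custom behaviour first
        return self._fd.write(payload)  # then delegate to the real stream

    def __getattr__(self, name):
        # Called only for attributes not defined above (flush, close, ...),
        # so everything else transparently reaches the wrapped stream.
        return getattr(self._fd, name)


seen = []
out = LoggingWrapper(io.StringIO(), seen.append)
out.write("hello")
out.flush()  # forwarded via __getattr__
print(seen)  # ['hello']
```

Because __getattr__ fires only on attribute misses, the wrapper stays transparent to callers that probe for stream capabilities at runtime.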
Attributes:
Name Type Description
address int The address of the breakpoint in the target process.
symbol str The symbol, if available, of the breakpoint in the target process.
hit_count int The number of times this specific breakpoint has been hit.
hardware bool Whether the breakpoint is a hardware breakpoint or not.
callback Callable[[ThreadContext, Breakpoint], None] The callback defined by the user to execute when the breakpoint is hit.
condition str The breakpoint condition. Available values are \"X\", \"W\", \"RW\". Supported only for hardware breakpoints.
length int The length of the breakpoint area. Supported only for hardware breakpoints.
enabled bool Whether the breakpoint is enabled or not.
Source code inlibdebug/data/breakpoint.py @dataclass\nclass Breakpoint:\n \"\"\"A breakpoint in the target process.\n\n Attributes:\n address (int): The address of the breakpoint in the target process.\n symbol (str): The symbol, if available, of the breakpoint in the target process.\n hit_count (int): The number of times this specific breakpoint has been hit.\n hardware (bool): Whether the breakpoint is a hardware breakpoint or not.\n callback (Callable[[ThreadContext, Breakpoint], None]): The callback defined by the user to execute when the breakpoint is hit.\n condition (str): The breakpoint condition. Available values are \"X\", \"W\", \"RW\". Supported only for hardware breakpoints.\n length (int): The length of the breakpoint area. Supported only for hardware breakpoints.\n enabled (bool): Whether the breakpoint is enabled or not.\n \"\"\"\n\n address: int = 0\n symbol: str = \"\"\n hit_count: int = 0\n hardware: bool = False\n callback: None | Callable[[ThreadContext, Breakpoint], None] = None\n condition: str = \"x\"\n length: int = 1\n enabled: bool = True\n\n _linked_thread_ids: list[int] = field(default_factory=list)\n # The thread ID that hit the breakpoint\n\n _disabled_for_step: bool = False\n _changed: bool = False\n\n def enable(self: Breakpoint) -> None:\n \"\"\"Enable the breakpoint.\"\"\"\n provide_internal_debugger(self)._ensure_process_stopped()\n self.enabled = True\n self._changed = True\n\n def disable(self: Breakpoint) -> None:\n \"\"\"Disable the breakpoint.\"\"\"\n provide_internal_debugger(self)._ensure_process_stopped()\n self.enabled = False\n self._changed = True\n\n def hit_on(self: Breakpoint, thread_context: ThreadContext) -> bool:\n \"\"\"Returns whether the breakpoint has been hit on the given thread context.\"\"\"\n if not self.enabled:\n return False\n\n internal_debugger = provide_internal_debugger(self)\n internal_debugger._ensure_process_stopped()\n return 
internal_debugger.resume_context.event_hit_ref.get(thread_context.thread_id) == self\n\n def __hash__(self: Breakpoint) -> int:\n \"\"\"Hash the breakpoint object by its memory address, so that it can be used in sets and dicts correctly.\"\"\"\n return hash(id(self))\n\n def __eq__(self: Breakpoint, other: object) -> bool:\n \"\"\"Check if two breakpoints are equal.\"\"\"\n return id(self) == id(other)\n"},{"location":"from_pydoc/generated/data/breakpoint/#libdebug.data.breakpoint.Breakpoint.__eq__","title":"__eq__(other)","text":"Check if two breakpoints are equal.
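Breakpoint deliberately hashes and compares by id() rather than by field values: two breakpoints with identical fields are still distinct debugger-side objects and must not collapse into one set entry. A minimal reproduction of why that matters (the Bp class is a stand-in, not libdebug's Breakpoint):

```python
# Why identity-based __hash__/__eq__ matters for a dataclass used in sets:
# field-equal instances must remain distinct. Standalone stand-in class.

from dataclasses import dataclass


@dataclass
class Bp:
    address: int = 0
    enabled: bool = True

    # User-defined __hash__/__eq__ take precedence over the ones
    # dataclass would otherwise generate from the fields.
    def __hash__(self) -> int:
        return hash(id(self))

    def __eq__(self, other: object) -> bool:
        return id(self) == id(other)


a, b = Bp(address=0x401000), Bp(address=0x401000)
assert a != b            # same fields, different breakpoints
assert len({a, b}) == 2  # both survive in a set
```

With the default field-based equality, the second breakpoint would silently disappear from any set or dict keyed on breakpoints.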
Source code inlibdebug/data/breakpoint.py def __eq__(self: Breakpoint, other: object) -> bool:\n \"\"\"Check if two breakpoints are equal.\"\"\"\n return id(self) == id(other)\n"},{"location":"from_pydoc/generated/data/breakpoint/#libdebug.data.breakpoint.Breakpoint.__hash__","title":"__hash__()","text":"Hash the breakpoint object by its memory address, so that it can be used in sets and dicts correctly.
Source code inlibdebug/data/breakpoint.py def __hash__(self: Breakpoint) -> int:\n \"\"\"Hash the breakpoint object by its memory address, so that it can be used in sets and dicts correctly.\"\"\"\n return hash(id(self))\n"},{"location":"from_pydoc/generated/data/breakpoint/#libdebug.data.breakpoint.Breakpoint.disable","title":"disable()","text":"Disable the breakpoint.
Source code inlibdebug/data/breakpoint.py def disable(self: Breakpoint) -> None:\n \"\"\"Disable the breakpoint.\"\"\"\n provide_internal_debugger(self)._ensure_process_stopped()\n self.enabled = False\n self._changed = True\n"},{"location":"from_pydoc/generated/data/breakpoint/#libdebug.data.breakpoint.Breakpoint.enable","title":"enable()","text":"Enable the breakpoint.
Source code inlibdebug/data/breakpoint.py def enable(self: Breakpoint) -> None:\n \"\"\"Enable the breakpoint.\"\"\"\n provide_internal_debugger(self)._ensure_process_stopped()\n self.enabled = True\n self._changed = True\n"},{"location":"from_pydoc/generated/data/breakpoint/#libdebug.data.breakpoint.Breakpoint.hit_on","title":"hit_on(thread_context)","text":"Returns whether the breakpoint has been hit on the given thread context.
Source code inlibdebug/data/breakpoint.py def hit_on(self: Breakpoint, thread_context: ThreadContext) -> bool:\n \"\"\"Returns whether the breakpoint has been hit on the given thread context.\"\"\"\n if not self.enabled:\n return False\n\n internal_debugger = provide_internal_debugger(self)\n internal_debugger._ensure_process_stopped()\n return internal_debugger.resume_context.event_hit_ref.get(thread_context.thread_id) == self\n"},{"location":"from_pydoc/generated/data/gdb_resume_event/","title":"libdebug.data.gdb_resume_event","text":""},{"location":"from_pydoc/generated/data/gdb_resume_event/#libdebug.data.gdb_resume_event.GdbResumeEvent","title":"GdbResumeEvent","text":"This class handles the actions needed to resume the debugging session, after returning from GDB.
Source code inlibdebug/data/gdb_resume_event.py class GdbResumeEvent:\n \"\"\"This class handles the actions needed to resume the debugging session, after returning from GDB.\"\"\"\n\n def __init__(\n self: GdbResumeEvent,\n internal_debugger: InternalDebugger,\n lambda_function: callable[[], None],\n ) -> None:\n \"\"\"Initializes the GdbResumeEvent.\n\n Args:\n internal_debugger (InternalDebugger): The internal debugger instance.\n lambda_function (callable[[], None]): The blocking lambda function to wait on.\n \"\"\"\n self._internal_debugger = internal_debugger\n self._lambda_function = lambda_function\n self._joined = False\n\n def join(self: GdbResumeEvent) -> None:\n \"\"\"Resumes the debugging session, blocking the script until GDB terminate and libdebug reattaches.\"\"\"\n if self._joined:\n raise RuntimeError(\"GdbResumeEvent already joined\")\n\n self._lambda_function()\n self._internal_debugger._resume_from_gdb()\n self._joined = True\n"},{"location":"from_pydoc/generated/data/gdb_resume_event/#libdebug.data.gdb_resume_event.GdbResumeEvent.__init__","title":"__init__(internal_debugger, lambda_function)","text":"Initializes the GdbResumeEvent.
Parameters:
Name Type Description Default
internal_debugger InternalDebugger The internal debugger instance. required
lambda_function callable[[], None] The blocking lambda function to wait on.
required Source code inlibdebug/data/gdb_resume_event.py def __init__(\n self: GdbResumeEvent,\n internal_debugger: InternalDebugger,\n lambda_function: callable[[], None],\n) -> None:\n \"\"\"Initializes the GdbResumeEvent.\n\n Args:\n internal_debugger (InternalDebugger): The internal debugger instance.\n lambda_function (callable[[], None]): The blocking lambda function to wait on.\n \"\"\"\n self._internal_debugger = internal_debugger\n self._lambda_function = lambda_function\n self._joined = False\n"},{"location":"from_pydoc/generated/data/gdb_resume_event/#libdebug.data.gdb_resume_event.GdbResumeEvent.join","title":"join()","text":"Resumes the debugging session, blocking the script until GDB terminate and libdebug reattaches.
Source code inlibdebug/data/gdb_resume_event.py def join(self: GdbResumeEvent) -> None:\n \"\"\"Resumes the debugging session, blocking the script until GDB terminate and libdebug reattaches.\"\"\"\n if self._joined:\n raise RuntimeError(\"GdbResumeEvent already joined\")\n\n self._lambda_function()\n self._internal_debugger._resume_from_gdb()\n self._joined = True\n"},{"location":"from_pydoc/generated/data/memory_map/","title":"libdebug.data.memory_map","text":""},{"location":"from_pydoc/generated/data/memory_map/#libdebug.data.memory_map.MemoryMap","title":"MemoryMap dataclass","text":"A memory map of the target process.
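GdbResumeEvent.join runs the blocking callable exactly once and refuses a second join. That guard can be sketched independently of the debugger machinery (class name hypothetical; the debugger reattach step is stubbed out):

```python
# Join-once guard used by GdbResumeEvent.join(): the blocking callable runs
# a single time, and a second join() raises. Standalone sketch.

class ResumeEvent:
    def __init__(self, blocking_fn) -> None:
        self._fn = blocking_fn
        self._joined = False

    def join(self) -> None:
        if self._joined:
            raise RuntimeError("ResumeEvent already joined")
        self._fn()           # wait for the external tool to exit
        self._joined = True  # ...then mark the event consumed


calls = []
event = ResumeEvent(lambda: calls.append("waited"))
event.join()
print(calls)  # ['waited']
```

The flag is set only after the callable returns, so an exception inside the wait leaves the event joinable again, which seems to match the ordering in the source above.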
Attributes:
Name Type Description
start int The start address of the memory map. You can access it also with the 'base' attribute.
end int The end address of the memory map.
permissions str The permissions of the memory map.
size int The size of the memory map.
offset int The relative offset of the memory map.
backing_file str The backing file of the memory map, or the symbolic name of the memory map.
Source code inlibdebug/data/memory_map.py @dataclass\nclass MemoryMap:\n \"\"\"A memory map of the target process.\n\n Attributes:\n start (int): The start address of the memory map. You can access it also with the 'base' attribute.\n end (int): The end address of the memory map.\n permissions (str): The permissions of the memory map.\n size (int): The size of the memory map.\n offset (int): The relative offset of the memory map.\n backing_file (str): The backing file of the memory map, or the symbolic name of the memory map.\n \"\"\"\n\n start: int = 0\n end: int = 0\n permissions: str = \"\"\n size: int = 0\n\n offset: int = 0\n \"\"\"The relative offset of the memory map inside the backing file, if any.\"\"\"\n\n backing_file: str = \"\"\n \"\"\"The backing file of the memory map, such as 'libc.so.6', or the symbolic name of the memory map, such as '[stack]'.\"\"\"\n\n @staticmethod\n def parse(vmap: str) -> MemoryMap:\n \"\"\"Parses a memory map from a /proc/pid/maps string representation.\n\n Args:\n vmap (str): The string containing the memory map.\n\n Returns:\n MemoryMap: The parsed memory map.\n \"\"\"\n try:\n address, permissions, offset, *_, backing_file = vmap.split(\" \", 6)\n start = int(address.split(\"-\")[0], 16)\n end = int(address.split(\"-\")[1], 16)\n size = end - start\n int_offset = int(offset, 16)\n backing_file = backing_file.strip()\n if not backing_file:\n backing_file = f\"anon_{start:x}\"\n except ValueError as e:\n raise ValueError(\n f\"Invalid memory map: {vmap}. 
Please specify a valid memory map.\",\n ) from e\n\n return MemoryMap(start, end, permissions, size, int_offset, backing_file)\n\n @property\n def base(self: MemoryMap) -> int:\n \"\"\"Alias for the start address of the memory map.\"\"\"\n return self.start\n\n def __repr__(self: MemoryMap) -> str:\n \"\"\"Return the string representation of the memory map.\"\"\"\n return f\"MemoryMap(start={hex(self.start)}, end={hex(self.end)}, permissions={self.permissions}, size={hex(self.size)}, offset={hex(self.offset)}, backing_file={self.backing_file})\"\n"},{"location":"from_pydoc/generated/data/memory_map/#libdebug.data.memory_map.MemoryMap.backing_file","title":"backing_file = '' class-attribute instance-attribute","text":"The backing file of the memory map, such as 'libc.so.6', or the symbolic name of the memory map, such as '[stack]'.
"},{"location":"from_pydoc/generated/data/memory_map/#libdebug.data.memory_map.MemoryMap.base","title":"base property","text":"Alias for the start address of the memory map.
"},{"location":"from_pydoc/generated/data/memory_map/#libdebug.data.memory_map.MemoryMap.offset","title":"offset = 0 class-attribute instance-attribute","text":"The relative offset of the memory map inside the backing file, if any.
"},{"location":"from_pydoc/generated/data/memory_map/#libdebug.data.memory_map.MemoryMap.__repr__","title":"__repr__()","text":"Return the string representation of the memory map.
Source code inlibdebug/data/memory_map.py def __repr__(self: MemoryMap) -> str:\n \"\"\"Return the string representation of the memory map.\"\"\"\n return f\"MemoryMap(start={hex(self.start)}, end={hex(self.end)}, permissions={self.permissions}, size={hex(self.size)}, offset={hex(self.offset)}, backing_file={self.backing_file})\"\n"},{"location":"from_pydoc/generated/data/memory_map/#libdebug.data.memory_map.MemoryMap.parse","title":"parse(vmap) staticmethod","text":"Parses a memory map from a /proc/pid/maps string representation.
Parameters:
Name Type Description Default
vmap str The string containing the memory map. required
Returns:
Name Type Description
MemoryMap MemoryMap The parsed memory map.
Source code inlibdebug/data/memory_map.py @staticmethod\ndef parse(vmap: str) -> MemoryMap:\n \"\"\"Parses a memory map from a /proc/pid/maps string representation.\n\n Args:\n vmap (str): The string containing the memory map.\n\n Returns:\n MemoryMap: The parsed memory map.\n \"\"\"\n try:\n address, permissions, offset, *_, backing_file = vmap.split(\" \", 6)\n start = int(address.split(\"-\")[0], 16)\n end = int(address.split(\"-\")[1], 16)\n size = end - start\n int_offset = int(offset, 16)\n backing_file = backing_file.strip()\n if not backing_file:\n backing_file = f\"anon_{start:x}\"\n except ValueError as e:\n raise ValueError(\n f\"Invalid memory map: {vmap}. Please specify a valid memory map.\",\n ) from e\n\n return MemoryMap(start, end, permissions, size, int_offset, backing_file)\n"},{"location":"from_pydoc/generated/data/memory_map_list/","title":"libdebug.data.memory_map_list","text":""},{"location":"from_pydoc/generated/data/memory_map_list/#libdebug.data.memory_map_list.MemoryMapList","title":"MemoryMapList","text":" Bases: list
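The parse steps shown above can be exercised in isolation. The following standalone function mirrors the documented logic (split the maps line, derive start/end/size, fall back to an anon_<start> name for anonymous mappings) without importing libdebug; it is a sketch, not the library's class:

```python
# Standalone re-implementation of the parsing rule documented for
# MemoryMap.parse(), returning a plain dict instead of a MemoryMap.

def parse_map(vmap: str) -> dict:
    # maxsplit=6 keeps any spaces inside the pathname intact; the dev and
    # inode columns are discarded via *_.
    address, permissions, offset, *_, backing_file = vmap.split(" ", 6)
    start, end = (int(x, 16) for x in address.split("-"))
    backing_file = backing_file.strip() or f"anon_{start:x}"
    return {
        "start": start,
        "end": end,
        "permissions": permissions,
        "size": end - start,
        "offset": int(offset, 16),
        "backing_file": backing_file,
    }


m = parse_map("7f0000000000-7f0000021000 r--p 00000000 08:02 4242 /usr/lib/libc.so.6")
print(hex(m["size"]), m["backing_file"])  # 0x21000 /usr/lib/libc.so.6
```

Real /proc/<pid>/maps lines pad the pathname column with spaces, so an anonymous mapping yields an empty, whitespace-only tail that the strip-or-fallback line turns into the synthetic anon_<start> name.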
A list of memory maps of the target process.
Source code inlibdebug/data/memory_map_list.py class MemoryMapList(list):\n \"\"\"A list of memory maps of the target process.\"\"\"\n\n def __init__(self: MemoryMapList, memory_maps: list[MemoryMap]) -> None:\n \"\"\"Initializes the MemoryMapList.\"\"\"\n super().__init__(memory_maps)\n self._internal_debugger = provide_internal_debugger(self)\n\n def _search_by_address(self: MemoryMapList, address: int) -> list[MemoryMap]:\n for vmap in self:\n if vmap.start <= address < vmap.end:\n return [vmap]\n return []\n\n def _search_by_backing_file(self: MemoryMapList, backing_file: str) -> list[MemoryMap]:\n if backing_file in [\"binary\", self._internal_debugger._process_name]:\n backing_file = self._internal_debugger._process_full_path\n\n filtered_maps = []\n unique_files = set()\n\n for vmap in self:\n if backing_file in vmap.backing_file:\n filtered_maps.append(vmap)\n unique_files.add(vmap.backing_file)\n\n if len(unique_files) > 1:\n liblog.warning(\n f\"The substring {backing_file} is present in multiple, different backing files. The address resolution cannot be accurate. The matching backing files are: {', '.join(unique_files)}.\",\n )\n\n return filtered_maps\n\n def filter(self: MemoryMapList, value: int | str) -> MemoryMapList[MemoryMap]:\n \"\"\"Filters the memory maps according to the specified value.\n\n If the value is an integer, it is treated as an address.\n If the value is a string, it is treated as a backing file.\n\n Args:\n value (int | str): The value to search for.\n\n Returns:\n MemoryMapList[MemoryMap]: The memory maps matching the specified value.\n \"\"\"\n if isinstance(value, int):\n filtered_maps = self._search_by_address(value)\n elif isinstance(value, str):\n filtered_maps = self._search_by_backing_file(value)\n else:\n raise TypeError(\"The value must be an integer or a string.\")\n\n with extend_internal_debugger(self._internal_debugger):\n return MemoryMapList(filtered_maps)\n\n def __hash__(self) -> int:\n \"\"\"Return the hash of the memory map list.\"\"\"\n return hash(id(self))\n\n def __eq__(self, other: object) -> bool:\n \"\"\"Check if the memory map list is equal to another object.\"\"\"\n return super().__eq__(other)\n\n def __repr__(self) -> str:\n \"\"\"Return the string representation of the memory map list.\"\"\"\n return f\"MemoryMapList({super().__repr__()})\"\n"},{"location":"from_pydoc/generated/data/memory_map_list/#libdebug.data.memory_map_list.MemoryMapList.__eq__","title":"__eq__(other)","text":"Check if the memory map list is equal to another object.
Source code inlibdebug/data/memory_map_list.py def __eq__(self, other: object) -> bool:\n \"\"\"Check if the memory map list is equal to another object.\"\"\"\n return super().__eq__(other)\n"},{"location":"from_pydoc/generated/data/memory_map_list/#libdebug.data.memory_map_list.MemoryMapList.__hash__","title":"__hash__()","text":"Return the hash of the memory map list.
Source code inlibdebug/data/memory_map_list.py def __hash__(self) -> int:\n \"\"\"Return the hash of the memory map list.\"\"\"\n return hash(id(self))\n"},{"location":"from_pydoc/generated/data/memory_map_list/#libdebug.data.memory_map_list.MemoryMapList.__init__","title":"__init__(memory_maps)","text":"Initializes the MemoryMapList.
Source code inlibdebug/data/memory_map_list.py def __init__(self: MemoryMapList, memory_maps: list[MemoryMap]) -> None:\n \"\"\"Initializes the MemoryMapList.\"\"\"\n super().__init__(memory_maps)\n self._internal_debugger = provide_internal_debugger(self)\n"},{"location":"from_pydoc/generated/data/memory_map_list/#libdebug.data.memory_map_list.MemoryMapList.__repr__","title":"__repr__()","text":"Return the string representation of the memory map list.
Source code inlibdebug/data/memory_map_list.py def __repr__(self) -> str:\n \"\"\"Return the string representation of the memory map list.\"\"\"\n return f\"MemoryMapList({super().__repr__()})\"\n"},{"location":"from_pydoc/generated/data/memory_map_list/#libdebug.data.memory_map_list.MemoryMapList.filter","title":"filter(value)","text":"Filters the memory maps according to the specified value.
If the value is an integer, it is treated as an address. If the value is a string, it is treated as a backing file.
Parameters:
Name Type Description Default
value int | str The value to search for. required
Returns:
Type Description
MemoryMapList[MemoryMap] MemoryMapList[MemoryMap]: The memory maps matching the specified value.
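The address case reduces to a half-open containment test per map. A standalone sketch of that check, with made-up maps rather than real libdebug objects:

```python
# Sketch of filtering memory maps by address: a map matches when
# start <= address < end, so the `end` boundary is excluded.
maps = [
    {"start": 0x1000, "end": 0x2000, "backing_file": "binary"},
    {"start": 0x2000, "end": 0x3000, "backing_file": "libc"},
]

def filter_by_address(maps, address):
    return [m for m in maps if m["start"] <= address < m["end"]]

hit_first = filter_by_address(maps, 0x1FFF)   # last byte of the first map
hit_second = filter_by_address(maps, 0x2000)  # boundary belongs to the second map
```

Because the interval is half-open, an address equal to a map's `end` falls into the next map, never both.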
Source code inlibdebug/data/memory_map_list.py def filter(self: MemoryMapList, value: int | str) -> MemoryMapList[MemoryMap]:\n \"\"\"Filters the memory maps according to the specified value.\n\n If the value is an integer, it is treated as an address.\n If the value is a string, it is treated as a backing file.\n\n Args:\n value (int | str): The value to search for.\n\n Returns:\n MemoryMapList[MemoryMap]: The memory maps matching the specified value.\n \"\"\"\n if isinstance(value, int):\n filtered_maps = self._search_by_address(value)\n elif isinstance(value, str):\n filtered_maps = self._search_by_backing_file(value)\n else:\n raise TypeError(\"The value must be an integer or a string.\")\n\n with extend_internal_debugger(self._internal_debugger):\n return MemoryMapList(filtered_maps)\n"},{"location":"from_pydoc/generated/data/register_holder/","title":"libdebug.data.register_holder","text":""},{"location":"from_pydoc/generated/data/register_holder/#libdebug.data.register_holder.RegisterHolder","title":"RegisterHolder","text":" Bases: ABC
An abstract class that holds the state of the registers of a process, providing setters and getters for them.
Source code inlibdebug/data/register_holder.py class RegisterHolder(ABC):\n \"\"\"An abstract class that holds the state of the registers of a process, providing setters and getters for them.\"\"\"\n\n @abstractmethod\n def apply_on_thread(self: RegisterHolder, target: ThreadContext, target_class: type) -> None:\n \"\"\"Applies the current register values to the specified thread target.\n\n Args:\n target (ThreadContext): The object to which the register values should be applied.\n target_class (type): The class of the target object, needed to set the attributes.\n \"\"\"\n\n @abstractmethod\n def apply_on_regs(self: RegisterHolder, target: object, target_class: type) -> None:\n \"\"\"Applies the current register values to the specified regs target.\n\n Args:\n target (object): The object to which the register values should be applied.\n target_class (type): The class of the target object, needed to set the attributes.\n \"\"\"\n\n @abstractmethod\n def poll(self: RegisterHolder, target: ThreadContext) -> None:\n \"\"\"Polls the register values from the specified target.\n\n Args:\n target (ThreadContext): The object from which the register values should be polled.\n \"\"\"\n\n @abstractmethod\n def flush(self: RegisterHolder, source: ThreadContext) -> None:\n \"\"\"Flushes the register values from the specified source.\n\n Args:\n source (ThreadContext): The object from which the register values should be flushed.\n \"\"\"\n\n @abstractmethod\n def provide_regs(self: RegisterHolder) -> list[str]:\n \"\"\"Provide the list of registers, excluding the vector and fp registers.\"\"\"\n\n @abstractmethod\n def provide_vector_fp_regs(self: RegisterHolder) -> list[tuple[str]]:\n \"\"\"Provide the list of vector and floating point registers.\"\"\"\n\n @abstractmethod\n def provide_special_regs(self: RegisterHolder) -> list[str]:\n \"\"\"Provide the list of special registers, which are not intended for general-purpose use.\"\"\"\n\n @abstractmethod\n def cleanup(self: RegisterHolder) -> None:\n \"\"\"Clean up the register accessors from the class.\"\"\"\n"},{"location":"from_pydoc/generated/data/register_holder/#libdebug.data.register_holder.RegisterHolder.apply_on_regs","title":"apply_on_regs(target, target_class) abstractmethod","text":"Applies the current register values to the specified regs target.
Parameters:
Name Type Description Default
target object The object to which the register values should be applied. required
target_class type The class of the target object, needed to set the attributes.
required Source code inlibdebug/data/register_holder.py @abstractmethod\ndef apply_on_regs(self: RegisterHolder, target: object, target_class: type) -> None:\n \"\"\"Applies the current register values to the specified regs target.\n\n Args:\n target (object): The object to which the register values should be applied.\n target_class (type): The class of the target object, needed to set the attributes.\n \"\"\"\n"},{"location":"from_pydoc/generated/data/register_holder/#libdebug.data.register_holder.RegisterHolder.apply_on_thread","title":"apply_on_thread(target, target_class) abstractmethod","text":"Applies the current register values to the specified thread target.
Parameters:
Name Type Description Default
target ThreadContext The object to which the register values should be applied. required
target_class type The class of the target object, needed to set the attributes.
required Source code inlibdebug/data/register_holder.py @abstractmethod\ndef apply_on_thread(self: RegisterHolder, target: ThreadContext, target_class: type) -> None:\n \"\"\"Applies the current register values to the specified thread target.\n\n Args:\n target (ThreadContext): The object to which the register values should be applied.\n target_class (type): The class of the target object, needed to set the attributes.\n \"\"\"\n"},{"location":"from_pydoc/generated/data/register_holder/#libdebug.data.register_holder.RegisterHolder.cleanup","title":"cleanup() abstractmethod","text":"Clean up the register accessors from the class.
Source code inlibdebug/data/register_holder.py @abstractmethod\ndef cleanup(self: RegisterHolder) -> None:\n \"\"\"Clean up the register accessors from the class.\"\"\"\n"},{"location":"from_pydoc/generated/data/register_holder/#libdebug.data.register_holder.RegisterHolder.flush","title":"flush(source) abstractmethod","text":"Flushes the register values from the specified source.
Parameters:
Name Type Description Default
source ThreadContext The object from which the register values should be flushed.
required Source code inlibdebug/data/register_holder.py @abstractmethod\ndef flush(self: RegisterHolder, source: ThreadContext) -> None:\n \"\"\"Flushes the register values from the specified source.\n\n Args:\n source (ThreadContext): The object from which the register values should be flushed.\n \"\"\"\n"},{"location":"from_pydoc/generated/data/register_holder/#libdebug.data.register_holder.RegisterHolder.poll","title":"poll(target) abstractmethod","text":"Polls the register values from the specified target.
Parameters:
Name Type Description Default
target ThreadContext The object from which the register values should be polled.
required Source code inlibdebug/data/register_holder.py @abstractmethod\ndef poll(self: RegisterHolder, target: ThreadContext) -> None:\n \"\"\"Polls the register values from the specified target.\n\n Args:\n target (ThreadContext): The object from which the register values should be polled.\n \"\"\"\n"},{"location":"from_pydoc/generated/data/register_holder/#libdebug.data.register_holder.RegisterHolder.provide_regs","title":"provide_regs() abstractmethod","text":"Provide the list of registers, excluding the vector and fp registers.
Source code inlibdebug/data/register_holder.py @abstractmethod\ndef provide_regs(self: RegisterHolder) -> list[str]:\n \"\"\"Provide the list of registers, excluding the vector and fp registers.\"\"\"\n"},{"location":"from_pydoc/generated/data/register_holder/#libdebug.data.register_holder.RegisterHolder.provide_special_regs","title":"provide_special_regs() abstractmethod","text":"Provide the list of special registers, which are not intended for general-purpose use.
Source code inlibdebug/data/register_holder.py @abstractmethod\ndef provide_special_regs(self: RegisterHolder) -> list[str]:\n \"\"\"Provide the list of special registers, which are not intended for general-purpose use.\"\"\"\n"},{"location":"from_pydoc/generated/data/register_holder/#libdebug.data.register_holder.RegisterHolder.provide_vector_fp_regs","title":"provide_vector_fp_regs() abstractmethod","text":"Provide the list of vector and floating point registers.
Source code inlibdebug/data/register_holder.py @abstractmethod\ndef provide_vector_fp_regs(self: RegisterHolder) -> list[tuple[str]]:\n \"\"\"Provide the list of vector and floating point registers.\"\"\"\n"},{"location":"from_pydoc/generated/data/registers/","title":"libdebug.data.registers","text":""},{"location":"from_pydoc/generated/data/registers/#libdebug.data.registers.Registers","title":"Registers","text":"Abstract class that holds the state of the architecture-dependent registers of a process.
Source code inlibdebug/data/registers.py class Registers:\n \"\"\"Abstract class that holds the state of the architecture-dependent registers of a process.\"\"\"\n\n def __init__(self: Registers, thread_id: int, generic_regs: list[str]) -> None:\n \"\"\"Initializes the Registers object.\"\"\"\n self._internal_debugger = get_global_internal_debugger()\n self._thread_id = thread_id\n self._generic_regs = generic_regs\n\n def __repr__(self: Registers) -> str:\n \"\"\"Returns a string representation of the object.\"\"\"\n repr_str = f\"Registers(thread_id={self._thread_id})\"\n\n attributes = self._generic_regs\n max_len = max(len(attr) for attr in attributes) + 1\n\n repr_str += \"\".join(f\"\\n {attr + ':':<{max_len}} {getattr(self, attr):#x}\" for attr in attributes)\n\n return repr_str\n\n def filter(self: Registers, value: float) -> list[str]:\n \"\"\"Filters the registers by value.\n\n Args:\n value (float): The value to search for.\n\n Returns:\n list[str]: A list of names of the registers containing the value.\n \"\"\"\n attributes = self.__class__.__dict__\n return [attr for attr in attributes if getattr(self, attr) == value]\n"},{"location":"from_pydoc/generated/data/registers/#libdebug.data.registers.Registers.__init__","title":"__init__(thread_id, generic_regs)","text":"Initializes the Registers object.
Source code inlibdebug/data/registers.py def __init__(self: Registers, thread_id: int, generic_regs: list[str]) -> None:\n \"\"\"Initializes the Registers object.\"\"\"\n self._internal_debugger = get_global_internal_debugger()\n self._thread_id = thread_id\n self._generic_regs = generic_regs\n"},{"location":"from_pydoc/generated/data/registers/#libdebug.data.registers.Registers.__repr__","title":"__repr__()","text":"Returns a string representation of the object.
Source code inlibdebug/data/registers.py def __repr__(self: Registers) -> str:\n \"\"\"Returns a string representation of the object.\"\"\"\n repr_str = f\"Registers(thread_id={self._thread_id})\"\n\n attributes = self._generic_regs\n max_len = max(len(attr) for attr in attributes) + 1\n\n repr_str += \"\".join(f\"\\n {attr + ':':<{max_len}} {getattr(self, attr):#x}\" for attr in attributes)\n\n return repr_str\n"},{"location":"from_pydoc/generated/data/registers/#libdebug.data.registers.Registers.filter","title":"filter(value)","text":"Filters the registers by value.
Parameters:
Name Type Description Default
value float The value to search for. required
Returns:
Type Description
list[str] list[str]: A list of names of the registers containing the value.
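The idea of value-based register filtering can be shown with a plain stand-in object; `FakeRegs` and its register values are hypothetical and not part of libdebug:

```python
# Sketch of filtering registers by value: collect the names of all
# attributes whose current value equals the one searched for.
class FakeRegs:
    def __init__(self):
        self.rax = 0xDEADBEEF
        self.rbx = 0x0
        self.rcx = 0xDEADBEEF

    def filter(self, value):
        # vars() preserves insertion order, so results follow definition order
        return [name for name, val in vars(self).items() if val == value]

matches = FakeRegs().filter(0xDEADBEEF)  # ['rax', 'rcx']
```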
Source code inlibdebug/data/registers.py def filter(self: Registers, value: float) -> list[str]:\n \"\"\"Filters the registers by value.\n\n Args:\n value (float): The value to search for.\n\n Returns:\n list[str]: A list of names of the registers containing the value.\n \"\"\"\n attributes = self.__class__.__dict__\n return [attr for attr in attributes if getattr(self, attr) == value]\n"},{"location":"from_pydoc/generated/data/signal_catcher/","title":"libdebug.data.signal_catcher","text":""},{"location":"from_pydoc/generated/data/signal_catcher/#libdebug.data.signal_catcher.SignalCatcher","title":"SignalCatcher dataclass","text":"Catch a signal raised by the target process.
Attributes:
Name Type Description
signal_number int The signal number to catch.
callback Callable[[ThreadContext, CaughtSignal], None] The callback defined by the user to execute when the signal is caught.
recursive bool Whether, when the signal is hijacked with another one, the signal catcher associated with the new signal should be considered as well. Defaults to True.
enabled bool Whether the signal will be caught or not.
hit_count int The number of times the signal has been caught.
Source code inlibdebug/data/signal_catcher.py @dataclass\nclass SignalCatcher:\n \"\"\"Catch a signal raised by the target process.\n\n Attributes:\n signal_number (int): The signal number to catch.\n callback (Callable[[ThreadContext, CaughtSignal], None]): The callback defined by the user to execute when the signal is caught.\n recursive (bool): Whether, when the signal is hijacked with another one, the signal catcher associated with the new signal should be considered as well. Defaults to False.\n enabled (bool): Whether the signal will be caught or not.\n hit_count (int): The number of times the signal has been caught.\n \"\"\"\n\n signal_number: int\n callback: Callable[[ThreadContext, SignalCatcher], None]\n recursive: bool = True\n enabled: bool = True\n hit_count: int = 0\n\n def enable(self: SignalCatcher) -> None:\n \"\"\"Enable the signal catcher.\"\"\"\n provide_internal_debugger(self)._ensure_process_stopped()\n self.enabled = True\n\n def disable(self: SignalCatcher) -> None:\n \"\"\"Disable the signal catcher.\"\"\"\n provide_internal_debugger(self)._ensure_process_stopped()\n self.enabled = False\n\n def hit_on(self: SignalCatcher, thread_context: ThreadContext) -> bool:\n \"\"\"Returns whether the signal catcher has been hit on the given thread context.\"\"\"\n internal_debugger = provide_internal_debugger(self)\n internal_debugger._ensure_process_stopped()\n return self.enabled and thread_context.signal_number == self.signal_number\n\n def __hash__(self: SignalCatcher) -> int:\n \"\"\"Hash the signal catcher object by its memory address, so that it can be used in sets and dicts correctly.\"\"\"\n return hash(id(self))\n\n def __eq__(self: SignalCatcher, other: object) -> bool:\n \"\"\"Check if two catchers are equal.\"\"\"\n return id(self) == id(other)\n"},{"location":"from_pydoc/generated/data/signal_catcher/#libdebug.data.signal_catcher.SignalCatcher.__eq__","title":"__eq__(other)","text":"Check if two catchers are equal.
Source code inlibdebug/data/signal_catcher.py def __eq__(self: SignalCatcher, other: object) -> bool:\n \"\"\"Check if two catchers are equal.\"\"\"\n return id(self) == id(other)\n"},{"location":"from_pydoc/generated/data/signal_catcher/#libdebug.data.signal_catcher.SignalCatcher.__hash__","title":"__hash__()","text":"Hash the signal catcher object by its memory address, so that it can be used in sets and dicts correctly.
Source code inlibdebug/data/signal_catcher.py def __hash__(self: SignalCatcher) -> int:\n \"\"\"Hash the signal catcher object by its memory address, so that it can be used in sets and dicts correctly.\"\"\"\n return hash(id(self))\n"},{"location":"from_pydoc/generated/data/signal_catcher/#libdebug.data.signal_catcher.SignalCatcher.disable","title":"disable()","text":"Disable the signal catcher.
Source code inlibdebug/data/signal_catcher.py def disable(self: SignalCatcher) -> None:\n \"\"\"Disable the signal catcher.\"\"\"\n provide_internal_debugger(self)._ensure_process_stopped()\n self.enabled = False\n"},{"location":"from_pydoc/generated/data/signal_catcher/#libdebug.data.signal_catcher.SignalCatcher.enable","title":"enable()","text":"Enable the signal catcher.
Source code inlibdebug/data/signal_catcher.py def enable(self: SignalCatcher) -> None:\n \"\"\"Enable the signal catcher.\"\"\"\n provide_internal_debugger(self)._ensure_process_stopped()\n self.enabled = True\n"},{"location":"from_pydoc/generated/data/signal_catcher/#libdebug.data.signal_catcher.SignalCatcher.hit_on","title":"hit_on(thread_context)","text":"Returns whether the signal catcher has been hit on the given thread context.
Source code inlibdebug/data/signal_catcher.py def hit_on(self: SignalCatcher, thread_context: ThreadContext) -> bool:\n \"\"\"Returns whether the signal catcher has been hit on the given thread context.\"\"\"\n internal_debugger = provide_internal_debugger(self)\n internal_debugger._ensure_process_stopped()\n return self.enabled and thread_context.signal_number == self.signal_number\n"},{"location":"from_pydoc/generated/data/symbol/","title":"libdebug.data.symbol","text":""},{"location":"from_pydoc/generated/data/symbol/#libdebug.data.symbol.Symbol","title":"Symbol dataclass","text":"A symbol in the target process.
start (int): The start address of the symbol in the target process. end (int): The end address of the symbol in the target process. name (str): The name of the symbol in the target process. backing_file (str): The backing file of the symbol in the target process.
Source code inlibdebug/data/symbol.py @dataclass\nclass Symbol:\n \"\"\"A symbol in the target process.\n\n start (int): The start address of the symbol in the target process.\n end (int): The end address of the symbol in the target process.\n name (str): The name of the symbol in the target process.\n backing_file (str): The backing file of the symbol in the target process.\n \"\"\"\n\n start: int\n end: int\n name: str\n backing_file: str\n\n def __hash__(self: Symbol) -> int:\n \"\"\"Returns the hash of the symbol.\"\"\"\n return hash((self.start, self.end, self.name, self.backing_file))\n\n def __repr__(self: Symbol) -> str:\n \"\"\"Returns the string representation of the symbol.\"\"\"\n return f\"Symbol(start={self.start:#x}, end={self.end:#x}, name={self.name}, backing_file={self.backing_file})\"\n"},{"location":"from_pydoc/generated/data/symbol/#libdebug.data.symbol.Symbol.__hash__","title":"__hash__()","text":"Returns the hash of the symbol.
Source code inlibdebug/data/symbol.py def __hash__(self: Symbol) -> int:\n \"\"\"Returns the hash of the symbol.\"\"\"\n return hash((self.start, self.end, self.name, self.backing_file))\n"},{"location":"from_pydoc/generated/data/symbol/#libdebug.data.symbol.Symbol.__repr__","title":"__repr__()","text":"Returns the string representation of the symbol.
Source code inlibdebug/data/symbol.py def __repr__(self: Symbol) -> str:\n \"\"\"Returns the string representation of the symbol.\"\"\"\n return f\"Symbol(start={self.start:#x}, end={self.end:#x}, name={self.name}, backing_file={self.backing_file})\"\n"},{"location":"from_pydoc/generated/data/symbol_list/","title":"libdebug.data.symbol_list","text":""},{"location":"from_pydoc/generated/data/symbol_list/#libdebug.data.symbol_list.SymbolList","title":"SymbolList","text":" Bases: list
A list of symbols in the target process.
Source code inlibdebug/data/symbol_list.py class SymbolList(list):\n \"\"\"A list of symbols in the target process.\"\"\"\n\n def __init__(self: SymbolList, symbols: list[Symbol], maps_source: InternalDebugger | Snapshot) -> None:\n \"\"\"Initializes the SymbolList.\"\"\"\n super().__init__(symbols)\n\n self._maps_source = maps_source\n\n def _search_by_address(self: SymbolList, address: int) -> list[Symbol]:\n \"\"\"Searches for a symbol by address.\n\n Args:\n address (int): The address of the symbol to search for.\n\n Returns:\n list[Symbol]: The list of symbols that match the specified address.\n \"\"\"\n # Find the memory map that contains the address\n if maps := self._maps_source.maps.filter(address):\n address -= maps[0].start\n else:\n raise ValueError(\n f\"Address {address:#x} does not belong to any memory map. You must specify an absolute address.\",\n )\n return [symbol for symbol in self if symbol.start <= address < symbol.end]\n\n def _search_by_name(self: SymbolList, name: str) -> list[Symbol]:\n \"\"\"Searches for a symbol by name.\n\n Args:\n name (str): The name of the symbol to search for.\n\n Returns:\n list[Symbol]: The list of symbols that match the specified name.\n \"\"\"\n exact_match = []\n no_exact_match = []\n # We first want to list the symbols that exactly match the name\n for symbol in self:\n if symbol.name == name:\n exact_match.append(symbol)\n elif name in symbol.name:\n no_exact_match.append(symbol)\n return exact_match + no_exact_match\n\n def filter(self: SymbolList, value: int | str) -> SymbolList[Symbol]:\n \"\"\"Filters the symbols according to the specified value.\n\n If the value is an integer, it is treated as an address.\n If the value is a string, it is treated as a symbol name.\n\n Args:\n value (int | str): The address or name of the symbol to find.\n\n Returns:\n SymbolList[Symbol]: The symbols matching the specified value.\n \"\"\"\n if isinstance(value, int):\n filtered_symbols = self._search_by_address(value)\n elif isinstance(value, str):\n filtered_symbols = self._search_by_name(value)\n else:\n raise TypeError(\"The value must be an integer or a string.\")\n\n return SymbolList(filtered_symbols, self._maps_source)\n\n def __getitem__(self: SymbolList, key: str | int) -> SymbolList[Symbol] | Symbol:\n \"\"\"Returns the symbol with the specified name.\n\n Args:\n key (str, int): The name of the symbol to return, or the index of the symbol in the list.\n\n Returns:\n Symbol | SymbolList[Symbol]: The symbol at the specified index, or the SymbolList of symbols with the specified name.\n \"\"\"\n if not isinstance(key, str):\n return super().__getitem__(key)\n\n symbols = [symbol for symbol in self if symbol.name == key]\n if not symbols:\n raise KeyError(f\"Symbol '{key}' not found.\")\n return SymbolList(symbols, self._maps_source)\n\n def __hash__(self) -> int:\n \"\"\"Return the hash of the symbol list.\"\"\"\n return hash(id(self))\n\n def __eq__(self, other: object) -> bool:\n \"\"\"Check if the symbol list is equal to another object.\"\"\"\n return super().__eq__(other)\n\n def __repr__(self: SymbolList) -> str:\n \"\"\"Returns the string representation of the SymbolList.\"\"\"\n return f\"SymbolList({super().__repr__()})\"\n"},{"location":"from_pydoc/generated/data/symbol_list/#libdebug.data.symbol_list.SymbolList.__eq__","title":"__eq__(other)","text":"Check if the symbol list is equal to another object.
Source code inlibdebug/data/symbol_list.py def __eq__(self, other: object) -> bool:\n \"\"\"Check if the symbol list is equal to another object.\"\"\"\n return super().__eq__(other)\n"},{"location":"from_pydoc/generated/data/symbol_list/#libdebug.data.symbol_list.SymbolList.__getitem__","title":"__getitem__(key)","text":"Returns the symbol with the specified name.
Parameters:
Name Type Description Default
key (str, int) The name of the symbol to return, or the index of the symbol in the list. required
Returns:
Type Description
SymbolList[Symbol] | Symbol Symbol | SymbolList[Symbol]: The symbol at the specified index, or the SymbolList of symbols with the specified name.
Source code inlibdebug/data/symbol_list.py def __getitem__(self: SymbolList, key: str | int) -> SymbolList[Symbol] | Symbol:\n \"\"\"Returns the symbol with the specified name.\n\n Args:\n key (str, int): The name of the symbol to return, or the index of the symbol in the list.\n\n Returns:\n Symbol | SymbolList[Symbol]: The symbol at the specified index, or the SymbolList of symbols with the specified name.\n \"\"\"\n if not isinstance(key, str):\n return super().__getitem__(key)\n\n symbols = [symbol for symbol in self if symbol.name == key]\n if not symbols:\n raise KeyError(f\"Symbol '{key}' not found.\")\n return SymbolList(symbols, self._maps_source)\n"},{"location":"from_pydoc/generated/data/symbol_list/#libdebug.data.symbol_list.SymbolList.__hash__","title":"__hash__()","text":"Return the hash of the symbol list.
Source code inlibdebug/data/symbol_list.py def __hash__(self) -> int:\n \"\"\"Return the hash of the symbol list.\"\"\"\n return hash(id(self))\n"},{"location":"from_pydoc/generated/data/symbol_list/#libdebug.data.symbol_list.SymbolList.__init__","title":"__init__(symbols, maps_source)","text":"Initializes the SymbolList.
Source code inlibdebug/data/symbol_list.py def __init__(self: SymbolList, symbols: list[Symbol], maps_source: InternalDebugger | Snapshot) -> None:\n \"\"\"Initializes the SymbolList.\"\"\"\n super().__init__(symbols)\n\n self._maps_source = maps_source\n"},{"location":"from_pydoc/generated/data/symbol_list/#libdebug.data.symbol_list.SymbolList.__repr__","title":"__repr__()","text":"Returns the string representation of the SymbolList.
Source code inlibdebug/data/symbol_list.py def __repr__(self: SymbolList) -> str:\n \"\"\"Returns the string representation of the SymbolList.\"\"\"\n return f\"SymbolList({super().__repr__()})\"\n"},{"location":"from_pydoc/generated/data/symbol_list/#libdebug.data.symbol_list.SymbolList._search_by_address","title":"_search_by_address(address)","text":"Searches for a symbol by address.
Parameters:
Name Type Description Default
address int The address of the symbol to search for. required
Returns:
Type Description
list[Symbol] list[Symbol]: The list of symbols that match the specified address.
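The key step is the rebasing: the absolute address is made relative to the start of its containing map before being compared against symbol ranges. A standalone sketch with a hypothetical load base and symbol table:

```python
# Sketch of address-based symbol lookup: rebase the absolute address to
# the containing map's start, then test file-relative symbol ranges.
map_start = 0x555555554000  # hypothetical load base of the binary
symbols = [{"name": "main", "start": 0x1130, "end": 0x1180}]

def search_by_address(address):
    rel = address - map_start  # absolute -> file-relative offset
    return [s for s in symbols if s["start"] <= rel < s["end"]]

inside = search_by_address(0x555555555150)  # offset 0x1150, inside main
outside = search_by_address(map_start)      # offset 0x0, no symbol there
```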
Source code inlibdebug/data/symbol_list.py def _search_by_address(self: SymbolList, address: int) -> list[Symbol]:\n \"\"\"Searches for a symbol by address.\n\n Args:\n address (int): The address of the symbol to search for.\n\n Returns:\n list[Symbol]: The list of symbols that match the specified address.\n \"\"\"\n # Find the memory map that contains the address\n if maps := self._maps_source.maps.filter(address):\n address -= maps[0].start\n else:\n raise ValueError(\n f\"Address {address:#x} does not belong to any memory map. You must specify an absolute address.\",\n )\n return [symbol for symbol in self if symbol.start <= address < symbol.end]\n"},{"location":"from_pydoc/generated/data/symbol_list/#libdebug.data.symbol_list.SymbolList._search_by_name","title":"_search_by_name(name)","text":"Searches for a symbol by name.
Parameters:
Name Type Description Default
name str The name of the symbol to search for. required
Returns:
Type Description
list[Symbol] list[Symbol]: The list of symbols that match the specified name.
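Exact matches are returned before substring matches. That ordering can be sketched on plain strings (the symbol names below are illustrative, not from a real binary):

```python
# Sketch of name-based symbol search: exact name matches come first,
# followed by symbols whose name merely contains the query.
names = ["printf", "fprintf", "sprintf", "puts"]

def search_by_name(name):
    exact = [n for n in names if n == name]
    partial = [n for n in names if name in n and n != name]
    return exact + partial

result = search_by_name("printf")  # ['printf', 'fprintf', 'sprintf']
```

This keeps a query like "printf" from being drowned out by its substring matches while still surfacing them.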
Source code inlibdebug/data/symbol_list.py def _search_by_name(self: SymbolList, name: str) -> list[Symbol]:\n \"\"\"Searches for a symbol by name.\n\n Args:\n name (str): The name of the symbol to search for.\n\n Returns:\n list[Symbol]: The list of symbols that match the specified name.\n \"\"\"\n exact_match = []\n no_exact_match = []\n # We first want to list the symbols that exactly match the name\n for symbol in self:\n if symbol.name == name:\n exact_match.append(symbol)\n elif name in symbol.name:\n no_exact_match.append(symbol)\n return exact_match + no_exact_match\n"},{"location":"from_pydoc/generated/data/symbol_list/#libdebug.data.symbol_list.SymbolList.filter","title":"filter(value)","text":"Filters the symbols according to the specified value.
If the value is an integer, it is treated as an address. If the value is a string, it is treated as a symbol name.
Parameters:
Name Type Description Default
value int | str The address or name of the symbol to find. required
Returns:
Type Description
SymbolList[Symbol] SymbolList[Symbol]: The symbols matching the specified value.
Source code inlibdebug/data/symbol_list.py def filter(self: SymbolList, value: int | str) -> SymbolList[Symbol]:\n \"\"\"Filters the symbols according to the specified value.\n\n If the value is an integer, it is treated as an address.\n If the value is a string, it is treated as a symbol name.\n\n Args:\n value (int | str): The address or name of the symbol to find.\n\n Returns:\n SymbolList[Symbol]: The symbols matching the specified value.\n \"\"\"\n if isinstance(value, int):\n filtered_symbols = self._search_by_address(value)\n elif isinstance(value, str):\n filtered_symbols = self._search_by_name(value)\n else:\n raise TypeError(\"The value must be an integer or a string.\")\n\n return SymbolList(filtered_symbols, self._maps_source)\n"},{"location":"from_pydoc/generated/data/syscall_handler/","title":"libdebug.data.syscall_handler","text":""},{"location":"from_pydoc/generated/data/syscall_handler/#libdebug.data.syscall_handler.SyscallHandler","title":"SyscallHandler dataclass","text":"Handle a syscall executed by the target process.
Attributes:
Name Type Description
syscall_number int The syscall number to handle.
on_enter_user Callable[[ThreadContext, int], None] The callback defined by the user to execute when the syscall is entered.
on_exit_user Callable[[ThreadContext, int], None] The callback defined by the user to execute when the syscall is exited.
on_enter_pprint Callable[[ThreadContext, int], None] The callback defined by the pretty print to execute when the syscall is entered.
on_exit_pprint Callable[[ThreadContext, int], None] The callback defined by the pretty print to execute when the syscall is exited.
recursive bool Whether, when the syscall is hijacked with another one, the syscall handler associated with the new syscall should be considered as well. Defaults to False.
enabled bool Whether the syscall will be handled or not.
hit_count int The number of times the syscall has been handled.
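The enter/exit bookkeeping above can be sketched in isolation. `MiniSyscallHandler` is a hypothetical simplification with no ptrace plumbing: in the real library the `_has_entered` flag is flipped by the debugger's event loop, so here it is toggled by hand to show how `hit_on_enter` and `hit_on_exit` read it:

```python
from dataclasses import dataclass


@dataclass
class MiniSyscallHandler:
    """Illustrative stand-in for SyscallHandler (no ptrace plumbing)."""

    syscall_number: int
    enabled: bool = True
    hit_count: int = 0
    _has_entered: bool = False

    def enable(self):
        self.enabled = True
        self._has_entered = False

    def disable(self):
        self.enabled = False
        self._has_entered = False

    def hit_on_enter(self, current_syscall):
        # Entry hit: handler enabled, numbers match, and the flag marks
        # that the syscall has been entered.
        return (
            self.enabled
            and current_syscall == self.syscall_number
            and self._has_entered
        )

    def hit_on_exit(self, current_syscall):
        # Exit hit: same checks, but the entry flag has been cleared.
        return (
            self.enabled
            and current_syscall == self.syscall_number
            and not self._has_entered
        )
```

Note how `disable()` also clears `_has_entered`, so a handler disabled mid-syscall cannot report a stale exit hit once re-enabled.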
Source code inlibdebug/data/syscall_handler.py @dataclass\nclass SyscallHandler:\n \"\"\"Handle a syscall executed by the target process.\n\n Attributes:\n syscall_number (int): The syscall number to handle.\n on_enter_user (Callable[[ThreadContext, int], None]): The callback defined by the user to execute when the syscall is entered.\n on_exit_user (Callable[[ThreadContext, int], None]): The callback defined by the user to execute when the syscall is exited.\n on_enter_pprint (Callable[[ThreadContext, int], None]): The callback defined by the pretty print to execute when the syscall is entered.\n on_exit_pprint (Callable[[ThreadContext, int], None]): The callback defined by the pretty print to execute when the syscall is exited.\n recursive (bool): Whether, when the syscall is hijacked with another one, the syscall handler associated with the new syscall should be considered as well. Defaults to False.\n enabled (bool): Whether the syscall will be handled or not.\n hit_count (int): The number of times the syscall has been handled.\n \"\"\"\n\n syscall_number: int\n on_enter_user: Callable[[ThreadContext, int], None]\n on_exit_user: Callable[[ThreadContext, int], None]\n on_enter_pprint: Callable[[ThreadContext, int, Any], None]\n on_exit_pprint: Callable[[int | tuple[int, int]], None]\n recursive: bool = False\n enabled: bool = True\n hit_count: int = 0\n\n _has_entered: bool = False\n _skip_exit: bool = False\n\n def enable(self: SyscallHandler) -> None:\n \"\"\"Handle the syscall.\"\"\"\n provide_internal_debugger(self)._ensure_process_stopped()\n self.enabled = True\n self._has_entered = False\n\n def disable(self: SyscallHandler) -> None:\n \"\"\"Unhandle the syscall.\"\"\"\n provide_internal_debugger(self)._ensure_process_stopped()\n self.enabled = False\n self._has_entered = False\n\n def hit_on(self: SyscallHandler, thread_context: ThreadContext) -> bool:\n \"\"\"Returns whether the syscall handler has been hit on the given thread context.\"\"\"\n 
internal_debugger = provide_internal_debugger(self)\n internal_debugger._ensure_process_stopped()\n return self.enabled and thread_context.syscall_number == self.syscall_number\n\n def hit_on_enter(self: SyscallHandler, thread_context: ThreadContext) -> bool:\n \"\"\"Returns whether the syscall handler has been hit during the syscall entry on the given thread context.\"\"\"\n internal_debugger = provide_internal_debugger(self)\n internal_debugger._ensure_process_stopped()\n return self.enabled and thread_context.syscall_number == self.syscall_number and self._has_entered\n\n def hit_on_exit(self: SyscallHandler, thread_context: ThreadContext) -> bool:\n \"\"\"Returns whether the syscall handler has been hit during the syscall exit on the given thread context.\"\"\"\n internal_debugger = provide_internal_debugger(self)\n internal_debugger._ensure_process_stopped()\n return self.enabled and thread_context.syscall_number == self.syscall_number and not self._has_entered\n\n def __hash__(self: SyscallHandler) -> int:\n \"\"\"Hash the syscall handler object by its memory address, so that it can be used in sets and dicts correctly.\"\"\"\n return hash(id(self))\n\n def __eq__(self: SyscallHandler, other: object) -> bool:\n \"\"\"Check if two handlers are equal.\"\"\"\n return id(self) == id(other)\n"},{"location":"from_pydoc/generated/data/syscall_handler/#libdebug.data.syscall_handler.SyscallHandler.__eq__","title":"__eq__(other)","text":"Check if two handlers are equal.
Source code inlibdebug/data/syscall_handler.py def __eq__(self: SyscallHandler, other: object) -> bool:\n \"\"\"Check if two handlers are equal.\"\"\"\n return id(self) == id(other)\n"},{"location":"from_pydoc/generated/data/syscall_handler/#libdebug.data.syscall_handler.SyscallHandler.__hash__","title":"__hash__()","text":"Hash the syscall handler object by its memory address, so that it can be used in sets and dicts correctly.
Source code inlibdebug/data/syscall_handler.py def __hash__(self: SyscallHandler) -> int:\n \"\"\"Hash the syscall handler object by its memory address, so that it can be used in sets and dicts correctly.\"\"\"\n return hash(id(self))\n"},{"location":"from_pydoc/generated/data/syscall_handler/#libdebug.data.syscall_handler.SyscallHandler.disable","title":"disable()","text":"Unhandle the syscall.
Source code inlibdebug/data/syscall_handler.py def disable(self: SyscallHandler) -> None:\n \"\"\"Unhandle the syscall.\"\"\"\n provide_internal_debugger(self)._ensure_process_stopped()\n self.enabled = False\n self._has_entered = False\n"},{"location":"from_pydoc/generated/data/syscall_handler/#libdebug.data.syscall_handler.SyscallHandler.enable","title":"enable()","text":"Handle the syscall.
Source code inlibdebug/data/syscall_handler.py def enable(self: SyscallHandler) -> None:\n \"\"\"Handle the syscall.\"\"\"\n provide_internal_debugger(self)._ensure_process_stopped()\n self.enabled = True\n self._has_entered = False\n"},{"location":"from_pydoc/generated/data/syscall_handler/#libdebug.data.syscall_handler.SyscallHandler.hit_on","title":"hit_on(thread_context)","text":"Returns whether the syscall handler has been hit on the given thread context.
Source code inlibdebug/data/syscall_handler.py def hit_on(self: SyscallHandler, thread_context: ThreadContext) -> bool:\n \"\"\"Returns whether the syscall handler has been hit on the given thread context.\"\"\"\n internal_debugger = provide_internal_debugger(self)\n internal_debugger._ensure_process_stopped()\n return self.enabled and thread_context.syscall_number == self.syscall_number\n"},{"location":"from_pydoc/generated/data/syscall_handler/#libdebug.data.syscall_handler.SyscallHandler.hit_on_enter","title":"hit_on_enter(thread_context)","text":"Returns whether the syscall handler has been hit during the syscall entry on the given thread context.
Source code inlibdebug/data/syscall_handler.py def hit_on_enter(self: SyscallHandler, thread_context: ThreadContext) -> bool:\n \"\"\"Returns whether the syscall handler has been hit during the syscall entry on the given thread context.\"\"\"\n internal_debugger = provide_internal_debugger(self)\n internal_debugger._ensure_process_stopped()\n return self.enabled and thread_context.syscall_number == self.syscall_number and self._has_entered\n"},{"location":"from_pydoc/generated/data/syscall_handler/#libdebug.data.syscall_handler.SyscallHandler.hit_on_exit","title":"hit_on_exit(thread_context)","text":"Returns whether the syscall handler has been hit during the syscall exit on the given thread context.
Source code inlibdebug/data/syscall_handler.py def hit_on_exit(self: SyscallHandler, thread_context: ThreadContext) -> bool:\n \"\"\"Returns whether the syscall handler has been hit during the syscall exit on the given thread context.\"\"\"\n internal_debugger = provide_internal_debugger(self)\n internal_debugger._ensure_process_stopped()\n return self.enabled and thread_context.syscall_number == self.syscall_number and not self._has_entered\n"},{"location":"from_pydoc/generated/data/terminals/","title":"libdebug.data.terminals","text":""},{"location":"from_pydoc/generated/data/terminals/#libdebug.data.terminals.TerminalTypes","title":"TerminalTypes dataclass","text":"Terminal class for launching terminal emulators with predefined commands.
Source code inlibdebug/data/terminals.py @dataclass\nclass TerminalTypes:\n \"\"\"Terminal class for launching terminal emulators with predefined commands.\"\"\"\n\n terminals: ClassVar[dict[str, list[str]]] = {\n \"gnome-terminal-server\": [\"gnome-terminal\", \"--tab\", \"--\"],\n \"konsole\": [\"konsole\", \"--new-tab\", \"-e\"],\n \"xterm\": [\"xterm\", \"-e\"],\n \"lxterminal\": [\"lxterminal\", \"-e\"],\n \"mate-terminal\": [\"mate-terminal\", \"--tab\", \"-e\"],\n \"tilix\": [\"tilix\", \"--action=app-new-session\", \"-e\"],\n \"kgx\": [\"kgx\", \"--tab\", \"-e\"],\n \"alacritty\": [\"alacritty\", \"-e\"],\n \"kitty\": [\"kitty\", \"-e\"],\n \"urxvt\": [\"urxvt\", \"-e\"],\n \"tmux: server\": [\"tmux\", \"split-window\", \"-h\"],\n \"xfce4-terminal\": [\"xfce4-terminal\", \"--tab\", \"-e\"],\n \"terminator\": [\"terminator\", \"--new-tab\", \"-e\"],\n \"ptyxis-agent\": [\"ptyxis\", \"--tab\", \"-x\"],\n }\n\n @staticmethod\n def get_command(terminal_name: str) -> list[str]:\n \"\"\"Retrieve the command list for a given terminal emulator name.\n\n Args:\n terminal_name (str): the name of the terminal emulator.\n\n Returns:\n list[str]: the command list for the terminal emulator, or an empty list if not found.\n \"\"\"\n return TerminalTypes.terminals.get(terminal_name, [])\n"},{"location":"from_pydoc/generated/data/terminals/#libdebug.data.terminals.TerminalTypes.get_command","title":"get_command(terminal_name) staticmethod","text":"Retrieve the command list for a given terminal emulator name.
Args: terminal_name (str): the name of the terminal emulator.
Returns: list[str]: the command list for the terminal emulator, or an empty list if not found.
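The lookup is a plain dictionary read with an empty-list fallback. A minimal sketch, reproducing only a few entries from the documented table (the real class carries the full mapping as a `ClassVar`):

```python
# Mapping from terminal emulator process name to its launch command prefix.
TERMINALS = {
    "xterm": ["xterm", "-e"],
    "konsole": ["konsole", "--new-tab", "-e"],
    "gnome-terminal-server": ["gnome-terminal", "--tab", "--"],
}


def get_command(terminal_name: str) -> list[str]:
    # dict.get with a default returns [] for unknown emulators
    # instead of raising KeyError.
    return TERMINALS.get(terminal_name, [])
```

An empty result therefore signals "terminal not recognized" without an exception, which the caller can test with a simple truthiness check.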
Source code inlibdebug/data/terminals.py @staticmethod\ndef get_command(terminal_name: str) -> list[str]:\n \"\"\"Retrieve the command list for a given terminal emulator name.\n\n Args:\n terminal_name (str): the name of the terminal emulator.\n\n Returns:\n list[str]: the command list for the terminal emulator, or an empty list if not found.\n \"\"\"\n return TerminalTypes.terminals.get(terminal_name, [])\n"},{"location":"from_pydoc/generated/debugger/debugger/","title":"libdebug.debugger.debugger","text":""},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger","title":"Debugger","text":"The Debugger class is the main class of libdebug. It contains all the methods needed to run and interact with the process.
libdebug/debugger/debugger.py class Debugger:\n \"\"\"The Debugger class is the main class of `libdebug`. It contains all the methods needed to run and interact with the process.\"\"\"\n\n _sentinel: object = object()\n \"\"\"A sentinel object.\"\"\"\n\n _internal_debugger: InternalDebugger\n \"\"\"The internal debugger object.\"\"\"\n\n def __init__(self: Debugger) -> None:\n pass\n\n def post_init_(self: Debugger, internal_debugger: InternalDebugger) -> None:\n \"\"\"Do not use this constructor directly. Use the `debugger` function instead.\"\"\"\n self._internal_debugger = internal_debugger\n self._internal_debugger.start_up()\n\n def run(self: Debugger, redirect_pipes: bool = True) -> PipeManager | None:\n \"\"\"Starts the process and waits for it to stop.\n\n Args:\n redirect_pipes (bool): Whether to hook and redirect the pipes of the process to a PipeManager.\n \"\"\"\n return self._internal_debugger.run(redirect_pipes)\n\n def attach(self: Debugger, pid: int) -> None:\n \"\"\"Attaches to an existing process.\"\"\"\n self._internal_debugger.attach(pid)\n\n def detach(self: Debugger) -> None:\n \"\"\"Detaches from the process.\"\"\"\n self._internal_debugger.detach()\n\n def kill(self: Debugger) -> None:\n \"\"\"Kills the process.\"\"\"\n self._internal_debugger.kill()\n\n def terminate(self: Debugger) -> None:\n \"\"\"Interrupts the process, kills it and then terminates the background thread.\n\n The debugger object will not be usable after this method is called.\n This method should only be called to free up resources when the debugger object is no longer needed.\n \"\"\"\n self._internal_debugger.terminate()\n\n def cont(self: Debugger) -> None:\n \"\"\"Continues the process.\"\"\"\n self._internal_debugger.cont()\n\n def interrupt(self: Debugger) -> None:\n \"\"\"Interrupts the process.\"\"\"\n self._internal_debugger.interrupt()\n\n def wait(self: Debugger) -> None:\n \"\"\"Waits for the process to stop.\"\"\"\n self._internal_debugger.wait()\n\n def 
print_maps(self: Debugger) -> None:\n \"\"\"Prints the memory maps of the process.\"\"\"\n liblog.warning(\"The `print_maps` method is deprecated. Use `d.pprint_maps` instead.\")\n self._internal_debugger.pprint_maps()\n\n def pprint_maps(self: Debugger) -> None:\n \"\"\"Prints the memory maps of the process.\"\"\"\n self._internal_debugger.pprint_maps()\n\n def resolve_symbol(self: Debugger, symbol: str, file: str = \"binary\") -> int:\n \"\"\"Resolves the address of the specified symbol.\n\n Args:\n symbol (str): The symbol to resolve.\n file (str): The backing file to resolve the symbol in. Defaults to \"binary\"\n\n Returns:\n int: The address of the symbol.\n \"\"\"\n return self._internal_debugger.resolve_symbol(symbol, file)\n\n @property\n def symbols(self: Debugger) -> SymbolList[Symbol]:\n \"\"\"Get the symbols of the process.\"\"\"\n return self._internal_debugger.symbols\n\n def breakpoint(\n self: Debugger,\n position: int | str,\n hardware: bool = False,\n condition: str = \"x\",\n length: int = 1,\n callback: None | bool | Callable[[ThreadContext, Breakpoint], None] = None,\n file: str = \"hybrid\",\n ) -> Breakpoint:\n \"\"\"Sets a breakpoint at the specified location.\n\n Args:\n position (int | bytes): The location of the breakpoint.\n hardware (bool, optional): Whether the breakpoint should be hardware-assisted or purely software. Defaults to False.\n condition (str, optional): The trigger condition for the breakpoint. Defaults to None.\n length (int, optional): The length of the breakpoint. Only for watchpoints. Defaults to 1.\n callback (None | bool | Callable[[ThreadContext, Breakpoint], None], optional): A callback to be called when the breakpoint is hit. If True, an empty callback will be set. Defaults to None.\n file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. 
the \"binary\" map file).\n \"\"\"\n return self._internal_debugger.breakpoint(position, hardware, condition, length, callback, file)\n\n def watchpoint(\n self: Debugger,\n position: int | str,\n condition: str = \"w\",\n length: int = 1,\n callback: None | bool | Callable[[ThreadContext, Breakpoint], None] = None,\n file: str = \"hybrid\",\n ) -> Breakpoint:\n \"\"\"Sets a watchpoint at the specified location. Internally, watchpoints are implemented as breakpoints.\n\n Args:\n position (int | bytes): The location of the breakpoint.\n condition (str, optional): The trigger condition for the watchpoint (either \"w\", \"rw\" or \"x\"). Defaults to \"w\".\n length (int, optional): The size of the word in being watched (1, 2, 4 or 8). Defaults to 1.\n callback (None | bool | Callable[[ThreadContext, Breakpoint], None], optional): A callback to be called when the watchpoint is hit. If True, an empty callback will be set. Defaults to None.\n file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).\n \"\"\"\n return self._internal_debugger.breakpoint(\n position,\n hardware=True,\n condition=condition,\n length=length,\n callback=callback,\n file=file,\n )\n\n def catch_signal(\n self: Debugger,\n signal: int | str,\n callback: None | bool | Callable[[ThreadContext, SignalCatcher], None] = None,\n recursive: bool = False,\n ) -> SignalCatcher:\n \"\"\"Catch a signal in the target process.\n\n Args:\n signal (int | str): The signal to catch. If \"*\", \"ALL\", \"all\" or -1 is passed, all signals will be caught.\n callback (None | bool | Callable[[ThreadContext, SignalCatcher], None], optional): A callback to be called when the signal is caught. If True, an empty callback will be set. 
Defaults to None.\n recursive (bool, optional): Whether, when the signal is hijacked with another one, the signal catcher associated with the new signal should be considered as well. Defaults to False.\n\n Returns:\n SignalCatcher: The SignalCatcher object.\n \"\"\"\n return self._internal_debugger.catch_signal(signal, callback, recursive)\n\n def hijack_signal(\n self: Debugger,\n original_signal: int | str,\n new_signal: int | str,\n recursive: bool = False,\n ) -> SyscallHandler:\n \"\"\"Hijack a signal in the target process.\n\n Args:\n original_signal (int | str): The signal to hijack. If \"*\", \"ALL\", \"all\" or -1 is passed, all signals will be hijacked.\n new_signal (int | str): The signal to hijack the original signal with.\n recursive (bool, optional): Whether, when the signal is hijacked with another one, the signal catcher associated with the new signal should be considered as well. Defaults to False.\n\n Returns:\n SignalCatcher: The SignalCatcher object.\n \"\"\"\n return self._internal_debugger.hijack_signal(original_signal, new_signal, recursive)\n\n def handle_syscall(\n self: Debugger,\n syscall: int | str,\n on_enter: None | bool | Callable[[ThreadContext, SyscallHandler], None] = None,\n on_exit: None | bool | Callable[[ThreadContext, SyscallHandler], None] = None,\n recursive: bool = False,\n ) -> SyscallHandler:\n \"\"\"Handle a syscall in the target process.\n\n Args:\n syscall (int | str): The syscall name or number to handle. If \"*\", \"ALL\", \"all\" or -1 is passed, all syscalls will be handled.\n on_enter (None | bool |Callable[[ThreadContext, SyscallHandler], None], optional): The callback to execute when the syscall is entered. If True, an empty callback will be set. Defaults to None.\n on_exit (None | bool | Callable[[ThreadContext, SyscallHandler], None], optional): The callback to execute when the syscall is exited. If True, an empty callback will be set. 
Defaults to None.\n recursive (bool, optional): Whether, when the syscall is hijacked with another one, the syscall handler associated with the new syscall should be considered as well. Defaults to False.\n\n Returns:\n SyscallHandler: The SyscallHandler object.\n \"\"\"\n return self._internal_debugger.handle_syscall(syscall, on_enter, on_exit, recursive)\n\n def hijack_syscall(\n self: Debugger,\n original_syscall: int | str,\n new_syscall: int | str,\n recursive: bool = False,\n **kwargs: int,\n ) -> SyscallHandler:\n \"\"\"Hijacks a syscall in the target process.\n\n Args:\n original_syscall (int | str): The syscall name or number to hijack. If \"*\", \"ALL\", \"all\" or -1 is passed, all syscalls will be hijacked.\n new_syscall (int | str): The syscall name or number to hijack the original syscall with.\n recursive (bool, optional): Whether, when the syscall is hijacked with another one, the syscall handler associated with the new syscall should be considered as well. Defaults to False.\n **kwargs: (int, optional): The arguments to pass to the new syscall.\n\n Returns:\n SyscallHandler: The SyscallHandler object.\n \"\"\"\n return self._internal_debugger.hijack_syscall(original_syscall, new_syscall, recursive, **kwargs)\n\n def gdb(\n self: Debugger,\n migrate_breakpoints: bool = True,\n open_in_new_process: bool = True,\n blocking: bool = True,\n ) -> GdbResumeEvent:\n \"\"\"Migrates the current debugging session to GDB.\n\n Args:\n migrate_breakpoints (bool): Whether to migrate over the breakpoints set in libdebug to GDB.\n open_in_new_process (bool): Whether to attempt to open GDB in a new process instead of the current one.\n blocking (bool): Whether to block the script until GDB is closed.\n \"\"\"\n return self._internal_debugger.gdb(migrate_breakpoints, open_in_new_process, blocking)\n\n def r(self: Debugger, redirect_pipes: bool = True) -> PipeManager | None:\n \"\"\"Alias for the `run` method.\n\n Starts the process and waits for it to stop.\n\n 
Args:\n redirect_pipes (bool): Whether to hook and redirect the pipes of the process to a PipeManager.\n \"\"\"\n return self._internal_debugger.run(redirect_pipes)\n\n def c(self: Debugger) -> None:\n \"\"\"Alias for the `cont` method.\n\n Continues the process.\n \"\"\"\n self._internal_debugger.cont()\n\n def int(self: Debugger) -> None:\n \"\"\"Alias for the `interrupt` method.\n\n Interrupts the process.\n \"\"\"\n self._internal_debugger.interrupt()\n\n def w(self: Debugger) -> None:\n \"\"\"Alias for the `wait` method.\n\n Waits for the process to stop.\n \"\"\"\n self._internal_debugger.wait()\n\n def bp(\n self: Debugger,\n position: int | str,\n hardware: bool = False,\n condition: str = \"x\",\n length: int = 1,\n callback: None | Callable[[ThreadContext, Breakpoint], None] = None,\n file: str = \"hybrid\",\n ) -> Breakpoint:\n \"\"\"Alias for the `breakpoint` method.\n\n Args:\n position (int | bytes): The location of the breakpoint.\n hardware (bool, optional): Whether the breakpoint should be hardware-assisted or purely software. Defaults to False.\n condition (str, optional): The trigger condition for the breakpoint. Defaults to None.\n length (int, optional): The length of the breakpoint. Only for watchpoints. Defaults to 1.\n callback (Callable[[ThreadContext, Breakpoint], None], optional): A callback to be called when the breakpoint is hit. Defaults to None.\n file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. 
the \"binary\" map file).\n \"\"\"\n return self._internal_debugger.breakpoint(position, hardware, condition, length, callback, file)\n\n def wp(\n self: Debugger,\n position: int | str,\n condition: str = \"w\",\n length: int = 1,\n callback: None | Callable[[ThreadContext, Breakpoint], None] = None,\n file: str = \"hybrid\",\n ) -> Breakpoint:\n \"\"\"Alias for the `watchpoint` method.\n\n Sets a watchpoint at the specified location. Internally, watchpoints are implemented as breakpoints.\n\n Args:\n position (int | bytes): The location of the breakpoint.\n condition (str, optional): The trigger condition for the watchpoint (either \"w\", \"rw\" or \"x\"). Defaults to \"w\".\n length (int, optional): The size of the word in being watched (1, 2, 4 or 8). Defaults to 1.\n callback (Callable[[ThreadContext, Breakpoint], None], optional): A callback to be called when the watchpoint is hit. Defaults to None.\n file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. 
the \"binary\" map file).\n \"\"\"\n return self._internal_debugger.breakpoint(\n position,\n hardware=True,\n condition=condition,\n length=length,\n callback=callback,\n file=file,\n )\n\n @property\n def arch(self: Debugger) -> str:\n \"\"\"Get the architecture of the process.\"\"\"\n return self._internal_debugger.arch\n\n @arch.setter\n def arch(self: Debugger, value: str) -> None:\n \"\"\"Set the architecture of the process.\"\"\"\n self._internal_debugger.arch = map_arch(value)\n\n @property\n def kill_on_exit(self: Debugger) -> bool:\n \"\"\"Get whether the process will be killed when the debugger exits.\"\"\"\n return self._internal_debugger.kill_on_exit\n\n @kill_on_exit.setter\n def kill_on_exit(self: Debugger, value: bool) -> None:\n if not isinstance(value, bool):\n raise TypeError(\"kill_on_exit must be a boolean\")\n\n self._internal_debugger.kill_on_exit = value\n\n @property\n def threads(self: Debugger) -> list[ThreadContext]:\n \"\"\"Get the list of threads in the process.\"\"\"\n return self._internal_debugger.threads\n\n @property\n def breakpoints(self: Debugger) -> dict[int, Breakpoint]:\n \"\"\"Get the breakpoints set on the process.\"\"\"\n return self._internal_debugger.breakpoints\n\n @property\n def children(self: Debugger) -> list[Debugger]:\n \"\"\"Get the list of child debuggers.\"\"\"\n return self._internal_debugger.children\n\n @property\n def handled_syscalls(self: InternalDebugger) -> dict[int, SyscallHandler]:\n \"\"\"Get the handled syscalls dictionary.\n\n Returns:\n dict[int, SyscallHandler]: the handled syscalls dictionary.\n \"\"\"\n return self._internal_debugger.handled_syscalls\n\n @property\n def caught_signals(self: InternalDebugger) -> dict[int, SignalCatcher]:\n \"\"\"Get the caught signals dictionary.\n\n Returns:\n dict[int, SignalCatcher]: the caught signals dictionary.\n \"\"\"\n return self._internal_debugger.caught_signals\n\n @property\n def maps(self: Debugger) -> MemoryMapList[MemoryMap]:\n \"\"\"Get the 
memory maps of the process.\"\"\"\n return self._internal_debugger.maps\n\n @property\n def pprint_syscalls(self: Debugger) -> bool:\n \"\"\"Get the state of the pprint_syscalls flag.\n\n Returns:\n bool: True if the debugger should pretty print syscalls, False otherwise.\n \"\"\"\n return self._internal_debugger.pprint_syscalls\n\n @pprint_syscalls.setter\n def pprint_syscalls(self: Debugger, value: bool) -> None:\n \"\"\"Set the state of the pprint_syscalls flag.\n\n Args:\n value (bool): the value to set.\n \"\"\"\n if not isinstance(value, bool):\n raise TypeError(\"pprint_syscalls must be a boolean\")\n if value:\n self._internal_debugger.enable_pretty_print()\n else:\n self._internal_debugger.disable_pretty_print()\n\n self._internal_debugger.pprint_syscalls = value\n\n @contextmanager\n def pprint_syscalls_context(self: Debugger, value: bool) -> ...:\n \"\"\"A context manager to temporarily change the state of the pprint_syscalls flag.\n\n Args:\n value (bool): the value to set.\n \"\"\"\n old_value = self.pprint_syscalls\n self.pprint_syscalls = value\n yield\n self.pprint_syscalls = old_value\n\n @property\n def syscalls_to_pprint(self: Debugger) -> list[str] | None:\n \"\"\"Get the syscalls to pretty print.\n\n Returns:\n list[str]: The syscalls to pretty print.\n \"\"\"\n if self._internal_debugger.syscalls_to_pprint is None:\n return None\n else:\n return [\n resolve_syscall_name(self._internal_debugger.arch, v)\n for v in self._internal_debugger.syscalls_to_pprint\n ]\n\n @syscalls_to_pprint.setter\n def syscalls_to_pprint(self: Debugger, value: list[int | str] | None) -> None:\n \"\"\"Get the syscalls to pretty print.\n\n Args:\n value (list[int | str] | None): The syscalls to pretty print.\n \"\"\"\n if value is None:\n self._internal_debugger.syscalls_to_pprint = None\n elif isinstance(value, list):\n self._internal_debugger.syscalls_to_pprint = [\n v if isinstance(v, int) else resolve_syscall_number(self._internal_debugger.arch, v) for v in value\n 
]\n else:\n raise ValueError(\n \"syscalls_to_pprint must be a list of integers or strings or None.\",\n )\n if self._internal_debugger.pprint_syscalls:\n self._internal_debugger.enable_pretty_print()\n\n @property\n def syscalls_to_not_pprint(self: Debugger) -> list[str] | None:\n \"\"\"Get the syscalls to not pretty print.\n\n Returns:\n list[str]: The syscalls to not pretty print.\n \"\"\"\n if self._internal_debugger.syscalls_to_not_pprint is None:\n return None\n else:\n return [\n resolve_syscall_name(self._internal_debugger.arch, v)\n for v in self._internal_debugger.syscalls_to_not_pprint\n ]\n\n @syscalls_to_not_pprint.setter\n def syscalls_to_not_pprint(self: Debugger, value: list[int | str] | None) -> None:\n \"\"\"Get the syscalls to not pretty print.\n\n Args:\n value (list[int | str] | None): The syscalls to not pretty print.\n \"\"\"\n if value is None:\n self._internal_debugger.syscalls_to_not_pprint = None\n elif isinstance(value, list):\n self._internal_debugger.syscalls_to_not_pprint = [\n v if isinstance(v, int) else resolve_syscall_number(self._internal_debugger.arch, v) for v in value\n ]\n else:\n raise ValueError(\n \"syscalls_to_not_pprint must be a list of integers or strings or None.\",\n )\n if self._internal_debugger.pprint_syscalls:\n self._internal_debugger.enable_pretty_print()\n\n @property\n def signals_to_block(self: Debugger) -> list[str]:\n \"\"\"Get the signals to not forward to the process.\n\n Returns:\n list[str]: The signals to block.\n \"\"\"\n return [resolve_signal_name(v) for v in self._internal_debugger.signals_to_block]\n\n @signals_to_block.setter\n def signals_to_block(self: Debugger, signals: list[int | str]) -> None:\n \"\"\"Set the signal to not forward to the process.\n\n Args:\n signals (list[int | str]): The signals to block.\n \"\"\"\n if not isinstance(signals, list):\n raise TypeError(\"signals_to_block must be a list of integers or strings\")\n\n signals = [v if isinstance(v, int) else 
resolve_signal_number(v) for v in signals]\n\n if not set(signals).issubset(get_all_signal_numbers()):\n raise ValueError(\"Invalid signal number.\")\n\n self._internal_debugger.signals_to_block = signals\n\n @property\n def fast_memory(self: Debugger) -> bool:\n \"\"\"Get the state of the fast_memory flag.\n\n It is used to determine if the debugger should use a faster memory access method.\n\n Returns:\n bool: True if the debugger should use a faster memory access method, False otherwise.\n \"\"\"\n return self._internal_debugger.fast_memory\n\n @fast_memory.setter\n def fast_memory(self: Debugger, value: bool) -> None:\n \"\"\"Set the state of the fast_memory flag.\n\n It is used to determine if the debugger should use a faster memory access method.\n\n Args:\n value (bool): the value to set.\n \"\"\"\n if not isinstance(value, bool):\n raise TypeError(\"fast_memory must be a boolean\")\n self._internal_debugger.fast_memory = value\n\n @property\n def instruction_pointer(self: Debugger) -> int:\n \"\"\"Get the main thread's instruction pointer.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n return self.threads[0].instruction_pointer\n\n @instruction_pointer.setter\n def instruction_pointer(self: Debugger, value: int) -> None:\n \"\"\"Set the main thread's instruction pointer.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n self.threads[0].instruction_pointer = value\n\n @property\n def syscall_arg0(self: Debugger) -> int:\n \"\"\"Get the main thread's syscall argument 0.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n return self.threads[0].syscall_arg0\n\n @syscall_arg0.setter\n def syscall_arg0(self: Debugger, value: int) -> None:\n \"\"\"Set the main thread's syscall argument 0.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. 
Did you call `run` or `attach`?\")\n self.threads[0].syscall_arg0 = value\n\n @property\n def syscall_arg1(self: Debugger) -> int:\n \"\"\"Get the main thread's syscall argument 1.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n return self.threads[0].syscall_arg1\n\n @syscall_arg1.setter\n def syscall_arg1(self: Debugger, value: int) -> None:\n \"\"\"Set the main thread's syscall argument 1.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n self.threads[0].syscall_arg1 = value\n\n @property\n def syscall_arg2(self: Debugger) -> int:\n \"\"\"Get the main thread's syscall argument 2.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n return self.threads[0].syscall_arg2\n\n @syscall_arg2.setter\n def syscall_arg2(self: Debugger, value: int) -> None:\n \"\"\"Set the main thread's syscall argument 2.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n self.threads[0].syscall_arg2 = value\n\n @property\n def syscall_arg3(self: Debugger) -> int:\n \"\"\"Get the main thread's syscall argument 3.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n return self.threads[0].syscall_arg3\n\n @syscall_arg3.setter\n def syscall_arg3(self: Debugger, value: int) -> None:\n \"\"\"Set the main thread's syscall argument 3.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n self.threads[0].syscall_arg3 = value\n\n @property\n def syscall_arg4(self: Debugger) -> int:\n \"\"\"Get the main thread's syscall argument 4.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. 
Did you call `run` or `attach`?\")\n return self.threads[0].syscall_arg4\n\n @syscall_arg4.setter\n def syscall_arg4(self: Debugger, value: int) -> None:\n \"\"\"Set the main thread's syscall argument 4.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n self.threads[0].syscall_arg4 = value\n\n @property\n def syscall_arg5(self: Debugger) -> int:\n \"\"\"Get the main thread's syscall argument 5.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n return self.threads[0].syscall_arg5\n\n @syscall_arg5.setter\n def syscall_arg5(self: Debugger, value: int) -> None:\n \"\"\"Set the main thread's syscall argument 5.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n self.threads[0].syscall_arg5 = value\n\n @property\n def syscall_number(self: Debugger) -> int:\n \"\"\"Get the main thread's syscall number.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n return self.threads[0].syscall_number\n\n @syscall_number.setter\n def syscall_number(self: Debugger, value: int) -> None:\n \"\"\"Set the main thread's syscall number.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n self.threads[0].syscall_number = value\n\n @property\n def syscall_return(self: Debugger) -> int:\n \"\"\"Get the main thread's syscall return value.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n return self.threads[0].syscall_return\n\n @syscall_return.setter\n def syscall_return(self: Debugger, value: int) -> None:\n \"\"\"Set the main thread's syscall return value.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. 
Did you call `run` or `attach`?\")\n self.threads[0].syscall_return = value\n\n @property\n def regs(self: Debugger) -> Registers:\n \"\"\"Get the main thread's registers.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n self._internal_debugger._ensure_process_stopped_regs()\n return self.threads[0].regs\n\n @property\n def dead(self: Debugger) -> bool:\n \"\"\"Whether the process is dead.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n return self.threads[0].dead\n\n @property\n def zombie(self: Debugger) -> bool:\n \"\"\"Whether the main thread is a zombie.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n return self.threads[0].zombie\n\n @property\n def memory(self: Debugger) -> AbstractMemoryView:\n \"\"\"The memory view of the process.\"\"\"\n return self._internal_debugger.memory\n\n @property\n def mem(self: Debugger) -> AbstractMemoryView:\n \"\"\"Alias for the `memory` property.\n\n Get the memory view of the process.\n \"\"\"\n return self._internal_debugger.memory\n\n @property\n def process_id(self: Debugger) -> int:\n \"\"\"The process ID.\"\"\"\n return self._internal_debugger.process_id\n\n @property\n def pid(self: Debugger) -> int:\n \"\"\"Alias for `process_id` property.\n\n The process ID.\n \"\"\"\n return self._internal_debugger.process_id\n\n @property\n def thread_id(self: Debugger) -> int:\n \"\"\"The thread ID of the main thread.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. 
Did you call `run` or `attach`?\")\n return self.threads[0].tid\n\n @property\n def tid(self: Debugger) -> int:\n \"\"\"Alias for `thread_id` property.\n\n The thread ID of the main thread.\n \"\"\"\n return self._thread_id\n\n @property\n def running(self: Debugger) -> bool:\n \"\"\"Whether the process is running.\"\"\"\n return self._internal_debugger.running\n\n @property\n def saved_ip(self: Debugger) -> int:\n \"\"\"Get the saved instruction pointer of the main thread.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n return self.threads[0].saved_ip\n\n @property\n def exit_code(self: Debugger) -> int | None:\n \"\"\"The main thread's exit code.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n return self.threads[0].exit_code\n\n @property\n def exit_signal(self: Debugger) -> str | None:\n \"\"\"The main thread's exit signal.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n return self.threads[0].exit_signal\n\n @property\n def signal(self: Debugger) -> str | None:\n \"\"\"The signal to be forwarded to the main thread.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n return self.threads[0].signal\n\n @signal.setter\n def signal(self: Debugger, signal: str | int) -> None:\n \"\"\"Set the signal to forward to the main thread.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n self.threads[0].signal = signal\n\n @property\n def signal_number(self: Debugger) -> int | None:\n \"\"\"The signal number to be forwarded to the main thread.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. 
Did you call `run` or `attach`?\")\n return self.threads[0].signal_number\n\n def backtrace(self: Debugger, as_symbols: bool = False) -> list:\n \"\"\"Returns the current backtrace of the main thread.\n\n Args:\n as_symbols (bool, optional): Whether to return the backtrace as symbols\n \"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n return self.threads[0].backtrace(as_symbols)\n\n def pprint_backtrace(self: Debugger) -> None:\n \"\"\"Pretty prints the current backtrace of the main thread.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n self.threads[0].pprint_backtrace()\n\n def pprint_registers(self: Debugger) -> None:\n \"\"\"Pretty prints the main thread's registers.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n self.threads[0].pprint_registers()\n\n def pprint_regs(self: Debugger) -> None:\n \"\"\"Alias for the `pprint_registers` method.\n\n Pretty prints the main thread's registers.\n \"\"\"\n self.pprint_registers()\n\n def pprint_registers_all(self: Debugger) -> None:\n \"\"\"Pretty prints all the main thread's registers.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. 
Did you call `run` or `attach`?\")\n self.threads[0].pprint_registers_all()\n\n def pprint_regs_all(self: Debugger) -> None:\n \"\"\"Alias for the `pprint_registers_all` method.\n\n Pretty prints all the main thread's registers.\n \"\"\"\n self.pprint_registers_all()\n\n def pprint_memory(\n self: Debugger,\n start: int,\n end: int,\n file: str = \"hybrid\",\n override_word_size: int | None = None,\n integer_mode: bool = False,\n ) -> None:\n \"\"\"Pretty prints the memory contents of the process.\n\n Args:\n start (int): The start address of the memory region.\n end (int): The end address of the memory region.\n file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).\n override_word_size (int, optional): The word size to use for the memory dump. Defaults to None.\n integer_mode (bool, optional): Whether to print the memory contents as integers. Defaults to False.\n \"\"\"\n self._internal_debugger.pprint_memory(start, end, file, override_word_size, integer_mode)\n\n def step(self: Debugger) -> None:\n \"\"\"Executes a single instruction of the process.\"\"\"\n self._internal_debugger.step(self)\n\n def step_until(\n self: Debugger,\n position: int | str,\n max_steps: int = -1,\n file: str = \"hybrid\",\n ) -> None:\n \"\"\"Executes instructions of the process until the specified location is reached.\n\n Args:\n position (int | str): The location to reach.\n max_steps (int, optional): The maximum number of steps to execute. Defaults to -1.\n file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. 
the \"binary\" map file).\n \"\"\"\n self._internal_debugger.step_until(self, position, max_steps, file)\n\n def finish(self: Debugger, heuristic: str = \"backtrace\") -> None:\n \"\"\"Continues execution until the current function returns or the process stops.\n\n The command requires a heuristic to determine the end of the function. The available heuristics are:\n - `backtrace`: The debugger will place a breakpoint on the saved return address found on the stack and continue execution on all threads.\n - `step-mode`: The debugger will step on the specified thread until the current function returns. This will be slower.\n\n Args:\n heuristic (str, optional): The heuristic to use. Defaults to \"backtrace\".\n \"\"\"\n self._internal_debugger.finish(self, heuristic=heuristic)\n\n def next(self: Debugger) -> None:\n \"\"\"Executes the next instruction of the process. If the instruction is a call, the debugger will continue until the called function returns.\"\"\"\n self._internal_debugger.next(self)\n\n def si(self: Debugger) -> None:\n \"\"\"Alias for the `step` method.\n\n Executes a single instruction of the process.\n \"\"\"\n self._internal_debugger.step(self)\n\n def su(\n self: Debugger,\n position: int | str,\n max_steps: int = -1,\n ) -> None:\n \"\"\"Alias for the `step_until` method.\n\n Executes instructions of the process until the specified location is reached.\n\n Args:\n position (int | str): The location to reach.\n max_steps (int, optional): The maximum number of steps to execute. Defaults to -1.\n \"\"\"\n self._internal_debugger.step_until(self, position, max_steps)\n\n def fin(self: Debugger, heuristic: str = \"backtrace\") -> None:\n \"\"\"Alias for the `finish` method. Continues execution until the current function returns or the process stops.\n\n The command requires a heuristic to determine the end of the function. 
The available heuristics are:\n - `backtrace`: The debugger will place a breakpoint on the saved return address found on the stack and continue execution on all threads.\n - `step-mode`: The debugger will step on the specified thread until the current function returns. This will be slower.\n\n Args:\n heuristic (str, optional): The heuristic to use. Defaults to \"backtrace\".\n \"\"\"\n self._internal_debugger.finish(self, heuristic)\n\n def ni(self: Debugger) -> None:\n \"\"\"Alias for the `next` method. Executes the next instruction of the process. If the instruction is a call, the debugger will continue until the called function returns.\"\"\"\n self._internal_debugger.next(self)\n\n def __repr__(self: Debugger) -> str:\n \"\"\"Return the string representation of the `Debugger` object.\"\"\"\n repr_str = \"Debugger(\"\n repr_str += f\"argv = {self._internal_debugger.argv}, \"\n repr_str += f\"aslr = {self._internal_debugger.aslr_enabled}, \"\n repr_str += f\"env = {self._internal_debugger.env}, \"\n repr_str += f\"escape_antidebug = {self._internal_debugger.escape_antidebug}, \"\n repr_str += f\"continue_to_binary_entrypoint = {self._internal_debugger.autoreach_entrypoint}, \"\n repr_str += f\"auto_interrupt_on_command = {self._internal_debugger.auto_interrupt_on_command}, \"\n repr_str += f\"fast_memory = {self._internal_debugger.fast_memory}, \"\n repr_str += f\"kill_on_exit = {self._internal_debugger.kill_on_exit}, \"\n repr_str += f\"follow_children = {self._internal_debugger.follow_children})\\n\"\n repr_str += f\" Architecture: {self.arch}\\n\"\n repr_str += \" Threads:\"\n for thread in self.threads:\n repr_str += f\"\\n ({thread.tid}, {'dead' if thread.dead else 'alive'}) \"\n repr_str += f\"ip: {thread.instruction_pointer:#x}\"\n return repr_str\n\n def create_snapshot(self: Debugger, level: str = \"base\", name: str | None = None) -> ProcessSnapshot:\n \"\"\"Create a snapshot of the current process state.\n\n Snapshot levels:\n - base: Registers\n - 
writable: Registers, writable memory contents\n - full: Registers, all memory contents\n\n Args:\n level (str): The level of the snapshot.\n name (str, optional): The name of the snapshot. Defaults to None.\n\n Returns:\n ProcessSnapshot: The created snapshot.\n \"\"\"\n return self._internal_debugger.create_snapshot(level, name)\n\n def load_snapshot(self: Debugger, file_path: str) -> Snapshot:\n \"\"\"Load a snapshot of the thread / process state.\n\n Args:\n file_path (str): The path to the snapshot file.\n \"\"\"\n return self._internal_debugger.load_snapshot(file_path)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger._internal_debugger","title":"_internal_debugger instance-attribute","text":"The internal debugger object.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger._sentinel","title":"_sentinel = object() class-attribute instance-attribute","text":"A sentinel object.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.arch","title":"arch property writable","text":"Get the architecture of the process.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.breakpoints","title":"breakpoints property","text":"Get the breakpoints set on the process.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.caught_signals","title":"caught_signals property","text":"Get the caught signals dictionary.
Returns:
Type Descriptiondict[int, SignalCatcher] dict[int, SignalCatcher]: the caught signals dictionary.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.children","title":"children property","text":"Get the list of child debuggers.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.dead","title":"dead property","text":"Whether the process is dead.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.exit_code","title":"exit_code property","text":"The main thread's exit code.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.exit_signal","title":"exit_signal property","text":"The main thread's exit signal.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.fast_memory","title":"fast_memory property writable","text":"Get the state of the fast_memory flag.
It is used to determine if the debugger should use a faster memory access method.
Returns:
Name Type Descriptionbool bool True if the debugger should use a faster memory access method, False otherwise.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.handled_syscalls","title":"handled_syscalls property","text":"Get the handled syscalls dictionary.
Returns:
Type Descriptiondict[int, SyscallHandler] dict[int, SyscallHandler]: the handled syscalls dictionary.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.instruction_pointer","title":"instruction_pointer property writable","text":"Get the main thread's instruction pointer.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.kill_on_exit","title":"kill_on_exit property writable","text":"Get whether the process will be killed when the debugger exits.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.maps","title":"maps property","text":"Get the memory maps of the process.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.mem","title":"mem property","text":"Alias for the memory property.
Get the memory view of the process.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.memory","title":"memory property","text":"The memory view of the process.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.pid","title":"pid property","text":"Alias for process_id property.
The process ID.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.pprint_syscalls","title":"pprint_syscalls property writable","text":"Get the state of the pprint_syscalls flag.
Returns:
Name Type Descriptionbool bool True if the debugger should pretty print syscalls, False otherwise.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.process_id","title":"process_id property","text":"The process ID.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.regs","title":"regs property","text":"Get the main thread's registers.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.running","title":"running property","text":"Whether the process is running.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.saved_ip","title":"saved_ip property","text":"Get the saved instruction pointer of the main thread.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.signal","title":"signal property writable","text":"The signal to be forwarded to the main thread.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.signal_number","title":"signal_number property","text":"The signal number to be forwarded to the main thread.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.signals_to_block","title":"signals_to_block property writable","text":"Get the signals to not forward to the process.
Returns:
Type Descriptionlist[str] list[str]: The signals to block.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.symbols","title":"symbols property","text":"Get the symbols of the process.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.syscall_arg0","title":"syscall_arg0 property writable","text":"Get the main thread's syscall argument 0.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.syscall_arg1","title":"syscall_arg1 property writable","text":"Get the main thread's syscall argument 1.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.syscall_arg2","title":"syscall_arg2 property writable","text":"Get the main thread's syscall argument 2.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.syscall_arg3","title":"syscall_arg3 property writable","text":"Get the main thread's syscall argument 3.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.syscall_arg4","title":"syscall_arg4 property writable","text":"Get the main thread's syscall argument 4.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.syscall_arg5","title":"syscall_arg5 property writable","text":"Get the main thread's syscall argument 5.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.syscall_number","title":"syscall_number property writable","text":"Get the main thread's syscall number.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.syscall_return","title":"syscall_return property writable","text":"Get the main thread's syscall return value.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.syscalls_to_not_pprint","title":"syscalls_to_not_pprint property writable","text":"Get the syscalls to not pretty print.
Returns:
Type Descriptionlist[str] | None list[str]: The syscalls to not pretty print.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.syscalls_to_pprint","title":"syscalls_to_pprint property writable","text":"Get the syscalls to pretty print.
Returns:
Type Descriptionlist[str] | None list[str]: The syscalls to pretty print.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.thread_id","title":"thread_id property","text":"The thread ID of the main thread.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.threads","title":"threads property","text":"Get the list of threads in the process.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.tid","title":"tid property","text":"Alias for thread_id property.
The thread ID of the main thread.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.zombie","title":"zombie property","text":"Whether the main thread is a zombie.
"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.__repr__","title":"__repr__()","text":"Return the string representation of the Debugger object.
libdebug/debugger/debugger.py def __repr__(self: Debugger) -> str:\n \"\"\"Return the string representation of the `Debugger` object.\"\"\"\n repr_str = \"Debugger(\"\n repr_str += f\"argv = {self._internal_debugger.argv}, \"\n repr_str += f\"aslr = {self._internal_debugger.aslr_enabled}, \"\n repr_str += f\"env = {self._internal_debugger.env}, \"\n repr_str += f\"escape_antidebug = {self._internal_debugger.escape_antidebug}, \"\n repr_str += f\"continue_to_binary_entrypoint = {self._internal_debugger.autoreach_entrypoint}, \"\n repr_str += f\"auto_interrupt_on_command = {self._internal_debugger.auto_interrupt_on_command}, \"\n repr_str += f\"fast_memory = {self._internal_debugger.fast_memory}, \"\n repr_str += f\"kill_on_exit = {self._internal_debugger.kill_on_exit}, \"\n repr_str += f\"follow_children = {self._internal_debugger.follow_children})\\n\"\n repr_str += f\" Architecture: {self.arch}\\n\"\n repr_str += \" Threads:\"\n for thread in self.threads:\n repr_str += f\"\\n ({thread.tid}, {'dead' if thread.dead else 'alive'}) \"\n repr_str += f\"ip: {thread.instruction_pointer:#x}\"\n return repr_str\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.attach","title":"attach(pid)","text":"Attaches to an existing process.
Source code inlibdebug/debugger/debugger.py def attach(self: Debugger, pid: int) -> None:\n \"\"\"Attaches to an existing process.\"\"\"\n self._internal_debugger.attach(pid)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.backtrace","title":"backtrace(as_symbols=False)","text":"Returns the current backtrace of the main thread.
Parameters:
Name Type Description Defaultas_symbols bool Whether to return the backtrace as symbols
False Source code in libdebug/debugger/debugger.py def backtrace(self: Debugger, as_symbols: bool = False) -> list:\n \"\"\"Returns the current backtrace of the main thread.\n\n Args:\n as_symbols (bool, optional): Whether to return the backtrace as symbols\n \"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n return self.threads[0].backtrace(as_symbols)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.bp","title":"bp(position, hardware=False, condition='x', length=1, callback=None, file='hybrid')","text":"Alias for the breakpoint method.
Parameters:
Name Type Description Defaultposition int | str The location of the breakpoint.
requiredhardware bool Whether the breakpoint should be hardware-assisted or purely software. Defaults to False.
False condition str The trigger condition for the breakpoint. Defaults to \"x\".
'x' length int The length of the breakpoint. Only for watchpoints. Defaults to 1.
1 callback Callable[[ThreadContext, Breakpoint], None] A callback to be called when the breakpoint is hit. Defaults to None.
None file str The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).
'hybrid' Source code in libdebug/debugger/debugger.py def bp(\n self: Debugger,\n position: int | str,\n hardware: bool = False,\n condition: str = \"x\",\n length: int = 1,\n callback: None | Callable[[ThreadContext, Breakpoint], None] = None,\n file: str = \"hybrid\",\n) -> Breakpoint:\n \"\"\"Alias for the `breakpoint` method.\n\n Args:\n position (int | str): The location of the breakpoint.\n hardware (bool, optional): Whether the breakpoint should be hardware-assisted or purely software. Defaults to False.\n condition (str, optional): The trigger condition for the breakpoint. Defaults to \"x\".\n length (int, optional): The length of the breakpoint. Only for watchpoints. Defaults to 1.\n callback (Callable[[ThreadContext, Breakpoint], None], optional): A callback to be called when the breakpoint is hit. Defaults to None.\n file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).\n \"\"\"\n return self._internal_debugger.breakpoint(position, hardware, condition, length, callback, file)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.breakpoint","title":"breakpoint(position, hardware=False, condition='x', length=1, callback=None, file='hybrid')","text":"Sets a breakpoint at the specified location.
Parameters:
Name Type Description Defaultposition int | bytes The location of the breakpoint.
requiredhardware bool Whether the breakpoint should be hardware-assisted or purely software. Defaults to False.
False condition str The trigger condition for the breakpoint. Defaults to None.
'x' length int The length of the breakpoint. Only for watchpoints. Defaults to 1.
1 callback None | bool | Callable[[ThreadContext, Breakpoint], None] A callback to be called when the breakpoint is hit. If True, an empty callback will be set. Defaults to None.
None file str The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).
'hybrid' Source code in libdebug/debugger/debugger.py def breakpoint(\n self: Debugger,\n position: int | str,\n hardware: bool = False,\n condition: str = \"x\",\n length: int = 1,\n callback: None | bool | Callable[[ThreadContext, Breakpoint], None] = None,\n file: str = \"hybrid\",\n) -> Breakpoint:\n \"\"\"Sets a breakpoint at the specified location.\n\n Args:\n position (int | str): The location of the breakpoint.\n hardware (bool, optional): Whether the breakpoint should be hardware-assisted or purely software. Defaults to False.\n condition (str, optional): The trigger condition for the breakpoint. Defaults to \"x\".\n length (int, optional): The length of the breakpoint. Only for watchpoints. Defaults to 1.\n callback (None | bool | Callable[[ThreadContext, Breakpoint], None], optional): A callback to be called when the breakpoint is hit. If True, an empty callback will be set. Defaults to None.\n file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).\n \"\"\"\n return self._internal_debugger.breakpoint(position, hardware, condition, length, callback, file)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.c","title":"c()","text":"Alias for the cont method.
Continues the process.
Source code inlibdebug/debugger/debugger.py def c(self: Debugger) -> None:\n \"\"\"Alias for the `cont` method.\n\n Continues the process.\n \"\"\"\n self._internal_debugger.cont()\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.catch_signal","title":"catch_signal(signal, callback=None, recursive=False)","text":"Catch a signal in the target process.
Parameters:
Name Type Description Defaultsignal int | str The signal to catch. If \"*\", \"ALL\", \"all\" or -1 is passed, all signals will be caught.
requiredcallback None | bool | Callable[[ThreadContext, SignalCatcher], None] A callback to be called when the signal is caught. If True, an empty callback will be set. Defaults to None.
None recursive bool Whether, when the signal is hijacked with another one, the signal catcher associated with the new signal should be considered as well. Defaults to False.
False Returns:
Name Type DescriptionSignalCatcher SignalCatcher The SignalCatcher object.
Source code inlibdebug/debugger/debugger.py def catch_signal(\n self: Debugger,\n signal: int | str,\n callback: None | bool | Callable[[ThreadContext, SignalCatcher], None] = None,\n recursive: bool = False,\n) -> SignalCatcher:\n \"\"\"Catch a signal in the target process.\n\n Args:\n signal (int | str): The signal to catch. If \"*\", \"ALL\", \"all\" or -1 is passed, all signals will be caught.\n callback (None | bool | Callable[[ThreadContext, SignalCatcher], None], optional): A callback to be called when the signal is caught. If True, an empty callback will be set. Defaults to None.\n recursive (bool, optional): Whether, when the signal is hijacked with another one, the signal catcher associated with the new signal should be considered as well. Defaults to False.\n\n Returns:\n SignalCatcher: The SignalCatcher object.\n \"\"\"\n return self._internal_debugger.catch_signal(signal, callback, recursive)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.cont","title":"cont()","text":"Continues the process.
Source code inlibdebug/debugger/debugger.py def cont(self: Debugger) -> None:\n \"\"\"Continues the process.\"\"\"\n self._internal_debugger.cont()\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.create_snapshot","title":"create_snapshot(level='base', name=None)","text":"Create a snapshot of the current process state.
Snapshot levels: - base: Registers - writable: Registers, writable memory contents - full: Registers, all memory contents
Parameters:
Name Type Description Defaultlevel str The level of the snapshot.
'base' name str The name of the snapshot. Defaults to None.
None Returns:
Name Type DescriptionProcessSnapshot ProcessSnapshot The created snapshot.
Source code inlibdebug/debugger/debugger.py def create_snapshot(self: Debugger, level: str = \"base\", name: str | None = None) -> ProcessSnapshot:\n \"\"\"Create a snapshot of the current process state.\n\n Snapshot levels:\n - base: Registers\n - writable: Registers, writable memory contents\n - full: Registers, all memory contents\n\n Args:\n level (str): The level of the snapshot.\n name (str, optional): The name of the snapshot. Defaults to None.\n\n Returns:\n ProcessSnapshot: The created snapshot.\n \"\"\"\n return self._internal_debugger.create_snapshot(level, name)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.detach","title":"detach()","text":"Detaches from the process.
Source code inlibdebug/debugger/debugger.py def detach(self: Debugger) -> None:\n \"\"\"Detaches from the process.\"\"\"\n self._internal_debugger.detach()\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.fin","title":"fin(heuristic='backtrace')","text":"Alias for the finish method. Continues execution until the current function returns or the process stops.
The command requires a heuristic to determine the end of the function. The available heuristics are: - backtrace: The debugger will place a breakpoint on the saved return address found on the stack and continue execution on all threads. - step-mode: The debugger will step on the specified thread until the current function returns. This will be slower.
Parameters:
Name Type Description Default heuristic str The heuristic to use. Defaults to \"backtrace\".
'backtrace' Source code in libdebug/debugger/debugger.py def fin(self: Debugger, heuristic: str = \"backtrace\") -> None:\n \"\"\"Alias for the `finish` method. Continues execution until the current function returns or the process stops.\n\n The command requires a heuristic to determine the end of the function. The available heuristics are:\n - `backtrace`: The debugger will place a breakpoint on the saved return address found on the stack and continue execution on all threads.\n - `step-mode`: The debugger will step on the specified thread until the current function returns. This will be slower.\n\n Args:\n heuristic (str, optional): The heuristic to use. Defaults to \"backtrace\".\n \"\"\"\n self._internal_debugger.finish(self, heuristic)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.finish","title":"finish(heuristic='backtrace')","text":"Continues execution until the current function returns or the process stops.
The command requires a heuristic to determine the end of the function. The available heuristics are: - backtrace: The debugger will place a breakpoint on the saved return address found on the stack and continue execution on all threads. - step-mode: The debugger will step on the specified thread until the current function returns. This will be slower.
Parameters:
Name Type Description Default heuristic str The heuristic to use. Defaults to \"backtrace\".
'backtrace' Source code in libdebug/debugger/debugger.py def finish(self: Debugger, heuristic: str = \"backtrace\") -> None:\n \"\"\"Continues execution until the current function returns or the process stops.\n\n The command requires a heuristic to determine the end of the function. The available heuristics are:\n - `backtrace`: The debugger will place a breakpoint on the saved return address found on the stack and continue execution on all threads.\n - `step-mode`: The debugger will step on the specified thread until the current function returns. This will be slower.\n\n Args:\n heuristic (str, optional): The heuristic to use. Defaults to \"backtrace\".\n \"\"\"\n self._internal_debugger.finish(self, heuristic=heuristic)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.gdb","title":"gdb(migrate_breakpoints=True, open_in_new_process=True, blocking=True)","text":"Migrates the current debugging session to GDB.
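The `step-mode` heuristic described above can be pictured as single-stepping while tracking call depth until the current frame returns. The sketch below runs that logic over a recorded toy trace; the trace format is an assumption made purely for illustration, not a libdebug data structure:

```python
# Toy sketch of the `step-mode` finish heuristic: single-step, tracking
# call depth, until the current function returns. The trace format is
# an assumption made for illustration only.
def finish_step_mode(trace):
    """trace: list of ("call" | "ret" | "insn", pc). Returns pc of the
    instruction where the current function finally returns, or None."""
    depth = 0
    for kind, pc in trace:
        if kind == "call":
            depth += 1
        elif kind == "ret":
            if depth == 0:
                return pc  # current function returned
            depth -= 1
    return None  # process stopped before returning

trace = [
    ("insn", 0x1000),
    ("call", 0x1004),  # nested call: its ret must not stop the search
    ("ret", 0x2008),
    ("insn", 0x1008),
    ("ret", 0x100C),   # the current function's own return
]
```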
Parameters:
Name Type Description Default migrate_breakpoints bool Whether to migrate over the breakpoints set in libdebug to GDB.
True open_in_new_process bool Whether to attempt to open GDB in a new process instead of the current one.
True blocking bool Whether to block the script until GDB is closed.
True Source code in libdebug/debugger/debugger.py def gdb(\n self: Debugger,\n migrate_breakpoints: bool = True,\n open_in_new_process: bool = True,\n blocking: bool = True,\n) -> GdbResumeEvent:\n \"\"\"Migrates the current debugging session to GDB.\n\n Args:\n migrate_breakpoints (bool): Whether to migrate over the breakpoints set in libdebug to GDB.\n open_in_new_process (bool): Whether to attempt to open GDB in a new process instead of the current one.\n blocking (bool): Whether to block the script until GDB is closed.\n \"\"\"\n return self._internal_debugger.gdb(migrate_breakpoints, open_in_new_process, blocking)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.handle_syscall","title":"handle_syscall(syscall, on_enter=None, on_exit=None, recursive=False)","text":"Handle a syscall in the target process.
Parameters:
Name Type Description Default syscall int | str The syscall name or number to handle. If \"*\", \"ALL\", \"all\" or -1 is passed, all syscalls will be handled.
required on_enter None | bool | Callable[[ThreadContext, SyscallHandler], None] The callback to execute when the syscall is entered. If True, an empty callback will be set. Defaults to None.
None on_exit None | bool | Callable[[ThreadContext, SyscallHandler], None] The callback to execute when the syscall is exited. If True, an empty callback will be set. Defaults to None.
None recursive bool Whether, when the syscall is hijacked with another one, the syscall handler associated with the new syscall should be considered as well. Defaults to False.
False Returns:
Name Type Description SyscallHandler SyscallHandler The SyscallHandler object.
Source code inlibdebug/debugger/debugger.py def handle_syscall(\n self: Debugger,\n syscall: int | str,\n on_enter: None | bool | Callable[[ThreadContext, SyscallHandler], None] = None,\n on_exit: None | bool | Callable[[ThreadContext, SyscallHandler], None] = None,\n recursive: bool = False,\n) -> SyscallHandler:\n \"\"\"Handle a syscall in the target process.\n\n Args:\n syscall (int | str): The syscall name or number to handle. If \"*\", \"ALL\", \"all\" or -1 is passed, all syscalls will be handled.\n on_enter (None | bool |Callable[[ThreadContext, SyscallHandler], None], optional): The callback to execute when the syscall is entered. If True, an empty callback will be set. Defaults to None.\n on_exit (None | bool | Callable[[ThreadContext, SyscallHandler], None], optional): The callback to execute when the syscall is exited. If True, an empty callback will be set. Defaults to None.\n recursive (bool, optional): Whether, when the syscall is hijacked with another one, the syscall handler associated with the new syscall should be considered as well. Defaults to False.\n\n Returns:\n SyscallHandler: The SyscallHandler object.\n \"\"\"\n return self._internal_debugger.handle_syscall(syscall, on_enter, on_exit, recursive)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.hijack_signal","title":"hijack_signal(original_signal, new_signal, recursive=False)","text":"Hijack a signal in the target process.
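The `on_enter`/`on_exit` callback pair of `handle_syscall` fires around the syscall: once before it executes and once after it returns. The `SyscallTracer` below is a hypothetical stand-in illustrating that ordering, not the real libdebug machinery:

```python
# Toy illustration of the on_enter/on_exit callback pair documented for
# handle_syscall; the SyscallTracer class is hypothetical, not libdebug.
class SyscallTracer:
    def __init__(self):
        self.handlers = {}  # syscall number -> (on_enter, on_exit)

    def handle_syscall(self, syscall, on_enter=None, on_exit=None):
        self.handlers[syscall] = (on_enter, on_exit)

    def run_syscall(self, syscall, thread):
        on_enter, on_exit = self.handlers.get(syscall, (None, None))
        if on_enter:
            on_enter(thread, syscall)  # fires before the syscall runs
        # ... the real syscall would execute here ...
        if on_exit:
            on_exit(thread, syscall)   # fires after it returns

events = []
t = SyscallTracer()
t.handle_syscall(
    1,  # write(2) on x86-64 Linux
    on_enter=lambda th, n: events.append(("enter", n)),
    on_exit=lambda th, n: events.append(("exit", n)),
)
t.run_syscall(1, thread=None)
```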
Parameters:
Name Type Description Default original_signal int | str The signal to hijack. If \"*\", \"ALL\", \"all\" or -1 is passed, all signals will be hijacked.
required new_signal int | str The signal to hijack the original signal with.
required recursive bool Whether, when the signal is hijacked with another one, the signal catcher associated with the new signal should be considered as well. Defaults to False.
False Returns:
Name Type Description SignalCatcher SyscallHandler The SignalCatcher object.
Source code inlibdebug/debugger/debugger.py def hijack_signal(\n self: Debugger,\n original_signal: int | str,\n new_signal: int | str,\n recursive: bool = False,\n) -> SyscallHandler:\n \"\"\"Hijack a signal in the target process.\n\n Args:\n original_signal (int | str): The signal to hijack. If \"*\", \"ALL\", \"all\" or -1 is passed, all signals will be hijacked.\n new_signal (int | str): The signal to hijack the original signal with.\n recursive (bool, optional): Whether, when the signal is hijacked with another one, the signal catcher associated with the new signal should be considered as well. Defaults to False.\n\n Returns:\n SignalCatcher: The SignalCatcher object.\n \"\"\"\n return self._internal_debugger.hijack_signal(original_signal, new_signal, recursive)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.hijack_syscall","title":"hijack_syscall(original_syscall, new_syscall, recursive=False, **kwargs)","text":"Hijacks a syscall in the target process.
Parameters:
Name Type Description Default original_syscall int | str The syscall name or number to hijack. If \"*\", \"ALL\", \"all\" or -1 is passed, all syscalls will be hijacked.
required new_syscall int | str The syscall name or number to hijack the original syscall with.
required recursive bool Whether, when the syscall is hijacked with another one, the syscall handler associated with the new syscall should be considered as well. Defaults to False.
False **kwargs int The arguments to pass to the new syscall.
{} Returns:
Name Type Description SyscallHandler SyscallHandler The SyscallHandler object.
Source code inlibdebug/debugger/debugger.py def hijack_syscall(\n self: Debugger,\n original_syscall: int | str,\n new_syscall: int | str,\n recursive: bool = False,\n **kwargs: int,\n) -> SyscallHandler:\n \"\"\"Hijacks a syscall in the target process.\n\n Args:\n original_syscall (int | str): The syscall name or number to hijack. If \"*\", \"ALL\", \"all\" or -1 is passed, all syscalls will be hijacked.\n new_syscall (int | str): The syscall name or number to hijack the original syscall with.\n recursive (bool, optional): Whether, when the syscall is hijacked with another one, the syscall handler associated with the new syscall should be considered as well. Defaults to False.\n **kwargs: (int, optional): The arguments to pass to the new syscall.\n\n Returns:\n SyscallHandler: The SyscallHandler object.\n \"\"\"\n return self._internal_debugger.hijack_syscall(original_syscall, new_syscall, recursive, **kwargs)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.int","title":"int()","text":"Alias for the interrupt method.
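The effect of the `recursive` flag can be reduced to one question: when syscall A is hijacked into syscall B, does the handler registered for B also fire? A minimal sketch under that reading of the docstring, with hypothetical names throughout:

```python
# Toy sketch of the `recursive` flag on hijack_syscall: when syscall A is
# hijacked to B, the handler registered on B fires only if recursive=True.
# All names here are hypothetical stand-ins for illustration.
def handlers_fired(original, hijacks, handlers, recursive):
    """Return the handler names that would run when `original` executes."""
    fired = []
    if original in handlers:
        fired.append(handlers[original])  # handler on the original syscall
    target = hijacks.get(original)
    if target is not None and recursive and target in handlers:
        fired.append(handlers[target])    # handler on the hijack target
    return fired

hijacks = {0: 1}              # hijack syscall 0 into syscall 1 (toy numbers)
handlers = {1: "log_write"}   # a handler registered on the *target* syscall

no_rec = handlers_fired(0, hijacks, handlers, recursive=False)
rec = handlers_fired(0, hijacks, handlers, recursive=True)
```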
Interrupts the process.
Source code in libdebug/debugger/debugger.py def int(self: Debugger) -> None:\n \"\"\"Alias for the `interrupt` method.\n\n Interrupts the process.\n \"\"\"\n self._internal_debugger.interrupt()\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.interrupt","title":"interrupt()","text":"Interrupts the process.
Source code in libdebug/debugger/debugger.py def interrupt(self: Debugger) -> None:\n \"\"\"Interrupts the process.\"\"\"\n self._internal_debugger.interrupt()\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.kill","title":"kill()","text":"Kills the process.
Source code in libdebug/debugger/debugger.py def kill(self: Debugger) -> None:\n \"\"\"Kills the process.\"\"\"\n self._internal_debugger.kill()\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.load_snapshot","title":"load_snapshot(file_path)","text":"Load a snapshot of the thread / process state.
Parameters:
Name Type Description Default file_path str The path to the snapshot file.
required Source code in libdebug/debugger/debugger.py def load_snapshot(self: Debugger, file_path: str) -> Snapshot:\n \"\"\"Load a snapshot of the thread / process state.\n\n Args:\n file_path (str): The path to the snapshot file.\n \"\"\"\n return self._internal_debugger.load_snapshot(file_path)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.next","title":"next()","text":"Executes the next instruction of the process. If the instruction is a call, the debugger will continue until the called function returns.
Source code in libdebug/debugger/debugger.py def next(self: Debugger) -> None:\n \"\"\"Executes the next instruction of the process. If the instruction is a call, the debugger will continue until the called function returns.\"\"\"\n self._internal_debugger.next(self)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.ni","title":"ni()","text":"Alias for the next method. Executes the next instruction of the process. If the instruction is a call, the debugger will continue until the called function returns.
Source code in libdebug/debugger/debugger.py def ni(self: Debugger) -> None:\n \"\"\"Alias for the `next` method. Executes the next instruction of the process. If the instruction is a call, the debugger will continue until the called function returns.\"\"\"\n self._internal_debugger.next(self)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.post_init_","title":"post_init_(internal_debugger)","text":"Do not use this constructor directly. Use the debugger function instead.
Source code in libdebug/debugger/debugger.py def post_init_(self: Debugger, internal_debugger: InternalDebugger) -> None:\n \"\"\"Do not use this constructor directly. Use the `debugger` function instead.\"\"\"\n self._internal_debugger = internal_debugger\n self._internal_debugger.start_up()\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.pprint_backtrace","title":"pprint_backtrace()","text":"Pretty prints the current backtrace of the main thread.
Source code in libdebug/debugger/debugger.py def pprint_backtrace(self: Debugger) -> None:\n \"\"\"Pretty prints the current backtrace of the main thread.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n self.threads[0].pprint_backtrace()\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.pprint_maps","title":"pprint_maps()","text":"Prints the memory maps of the process.
Source code in libdebug/debugger/debugger.py def pprint_maps(self: Debugger) -> None:\n \"\"\"Prints the memory maps of the process.\"\"\"\n self._internal_debugger.pprint_maps()\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.pprint_memory","title":"pprint_memory(start, end, file='hybrid', override_word_size=None, integer_mode=False)","text":"Pretty prints the memory contents of the process.
Parameters:
Name Type Description Default start int The start address of the memory region.
required end int The end address of the memory region.
required file str The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).
'hybrid' override_word_size int The word size to use for the memory dump. Defaults to None.
None integer_mode bool Whether to print the memory contents as integers. Defaults to False.
False Source code in libdebug/debugger/debugger.py def pprint_memory(\n self: Debugger,\n start: int,\n end: int,\n file: str = \"hybrid\",\n override_word_size: int | None = None,\n integer_mode: bool = False,\n) -> None:\n \"\"\"Pretty prints the memory contents of the process.\n\n Args:\n start (int): The start address of the memory region.\n end (int): The end address of the memory region.\n file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).\n override_word_size (int, optional): The word size to use for the memory dump. Defaults to None.\n integer_mode (bool, optional): Whether to print the memory contents as integers. Defaults to False.\n \"\"\"\n self._internal_debugger.pprint_memory(start, end, file, override_word_size, integer_mode)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.pprint_registers","title":"pprint_registers()","text":"Pretty prints the main thread's registers.
Source code in libdebug/debugger/debugger.py def pprint_registers(self: Debugger) -> None:\n \"\"\"Pretty prints the main thread's registers.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n self.threads[0].pprint_registers()\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.pprint_registers_all","title":"pprint_registers_all()","text":"Pretty prints all the main thread's registers.
Source code in libdebug/debugger/debugger.py def pprint_registers_all(self: Debugger) -> None:\n \"\"\"Pretty prints all the main thread's registers.\"\"\"\n if not self.threads:\n raise RuntimeError(\"No threads available. Did you call `run` or `attach`?\")\n self.threads[0].pprint_registers_all()\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.pprint_regs","title":"pprint_regs()","text":"Alias for the pprint_registers method.
Pretty prints the main thread's registers.
Source code in libdebug/debugger/debugger.py def pprint_regs(self: Debugger) -> None:\n \"\"\"Alias for the `pprint_registers` method.\n\n Pretty prints the main thread's registers.\n \"\"\"\n self.pprint_registers()\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.pprint_regs_all","title":"pprint_regs_all()","text":"Alias for the pprint_registers_all method.
Pretty prints all the main thread's registers.
Source code in libdebug/debugger/debugger.py def pprint_regs_all(self: Debugger) -> None:\n \"\"\"Alias for the `pprint_registers_all` method.\n\n Pretty prints all the main thread's registers.\n \"\"\"\n self.pprint_registers_all()\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.pprint_syscalls_context","title":"pprint_syscalls_context(value)","text":"A context manager to temporarily change the state of the pprint_syscalls flag.
Parameters:
Name Type Description Default value bool the value to set.
required Source code inlibdebug/debugger/debugger.py @contextmanager\ndef pprint_syscalls_context(self: Debugger, value: bool) -> ...:\n \"\"\"A context manager to temporarily change the state of the pprint_syscalls flag.\n\n Args:\n value (bool): the value to set.\n \"\"\"\n old_value = self.pprint_syscalls\n self.pprint_syscalls = value\n yield\n self.pprint_syscalls = old_value\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.print_maps","title":"print_maps()","text":"Prints the memory maps of the process.
Source code in libdebug/debugger/debugger.py def print_maps(self: Debugger) -> None:\n \"\"\"Prints the memory maps of the process.\"\"\"\n liblog.warning(\"The `print_maps` method is deprecated. Use `d.pprint_maps` instead.\")\n self._internal_debugger.pprint_maps()\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.r","title":"r(redirect_pipes=True)","text":"Alias for the run method.
Starts the process and waits for it to stop.
Parameters:
Name Type Description Default redirect_pipes bool Whether to hook and redirect the pipes of the process to a PipeManager.
True Source code in libdebug/debugger/debugger.py def r(self: Debugger, redirect_pipes: bool = True) -> PipeManager | None:\n \"\"\"Alias for the `run` method.\n\n Starts the process and waits for it to stop.\n\n Args:\n redirect_pipes (bool): Whether to hook and redirect the pipes of the process to a PipeManager.\n \"\"\"\n return self._internal_debugger.run(redirect_pipes)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.resolve_symbol","title":"resolve_symbol(symbol, file='binary')","text":"Resolves the address of the specified symbol.
Parameters:
Name Type Description Default symbol str The symbol to resolve.
required file str The backing file to resolve the symbol in. Defaults to \"binary\".
'binary' Returns:
Name Type Description int int The address of the symbol.
Source code in libdebug/debugger/debugger.py def resolve_symbol(self: Debugger, symbol: str, file: str = \"binary\") -> int:\n \"\"\"Resolves the address of the specified symbol.\n\n Args:\n symbol (str): The symbol to resolve.\n file (str): The backing file to resolve the symbol in. Defaults to \"binary\"\n\n Returns:\n int: The address of the symbol.\n \"\"\"\n return self._internal_debugger.resolve_symbol(symbol, file)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.run","title":"run(redirect_pipes=True)","text":"Starts the process and waits for it to stop.
Parameters:
Name Type Description Default redirect_pipes bool Whether to hook and redirect the pipes of the process to a PipeManager.
True Source code in libdebug/debugger/debugger.py def run(self: Debugger, redirect_pipes: bool = True) -> PipeManager | None:\n \"\"\"Starts the process and waits for it to stop.\n\n Args:\n redirect_pipes (bool): Whether to hook and redirect the pipes of the process to a PipeManager.\n \"\"\"\n return self._internal_debugger.run(redirect_pipes)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.si","title":"si()","text":"Alias for the step method.
Executes a single instruction of the process.
Source code in libdebug/debugger/debugger.py def si(self: Debugger) -> None:\n \"\"\"Alias for the `step` method.\n\n Executes a single instruction of the process.\n \"\"\"\n self._internal_debugger.step(self)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.step","title":"step()","text":"Executes a single instruction of the process.
Source code in libdebug/debugger/debugger.py def step(self: Debugger) -> None:\n \"\"\"Executes a single instruction of the process.\"\"\"\n self._internal_debugger.step(self)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.step_until","title":"step_until(position, max_steps=-1, file='hybrid')","text":"Executes instructions of the process until the specified location is reached.
Parameters:
Name Type Description Default position int | bytes The location to reach.
required max_steps int The maximum number of steps to execute. Defaults to -1.
-1 file str The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).
'hybrid' Source code in libdebug/debugger/debugger.py def step_until(\n self: Debugger,\n position: int | str,\n max_steps: int = -1,\n file: str = \"hybrid\",\n) -> None:\n \"\"\"Executes instructions of the process until the specified location is reached.\n\n Args:\n position (int | bytes): The location to reach.\n max_steps (int, optional): The maximum number of steps to execute. Defaults to -1.\n file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).\n \"\"\"\n self._internal_debugger.step_until(self, position, max_steps, file)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.su","title":"su(position, max_steps=-1)","text":"Alias for the step_until method.
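The `step_until` contract (single-step until `position` is reached, giving up after `max_steps`, with -1 meaning unlimited) can be sketched as a plain loop. `step` and `get_pc` below are hypothetical callbacks standing in for the debugger; the fixed 4-byte instruction size is an assumption of the toy:

```python
# Toy sketch of a step_until loop: single-step until the program counter
# reaches `position` or `max_steps` runs out (-1 means unlimited), echoing
# the documented parameters. `step` and `get_pc` are hypothetical callbacks.
def step_until(step, get_pc, position, max_steps=-1):
    steps = 0
    while get_pc() != position:
        if max_steps != -1 and steps >= max_steps:
            return False  # gave up before reaching the target
        step()
        steps += 1
    return True  # stopped exactly at `position`

pc = [0x1000]
advance = lambda: pc.__setitem__(0, pc[0] + 4)  # fixed 4-byte instructions

reached = step_until(advance, lambda: pc[0], position=0x1010)
capped = step_until(advance, lambda: pc[0], position=0x2000, max_steps=3)
```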
Executes instructions of the process until the specified location is reached.
Parameters:
Name Type Description Default position int | bytes The location to reach.
required max_steps int The maximum number of steps to execute. Defaults to -1.
-1 Source code in libdebug/debugger/debugger.py def su(\n self: Debugger,\n position: int | str,\n max_steps: int = -1,\n) -> None:\n \"\"\"Alias for the `step_until` method.\n\n Executes instructions of the process until the specified location is reached.\n\n Args:\n position (int | bytes): The location to reach.\n max_steps (int, optional): The maximum number of steps to execute. Defaults to -1.\n \"\"\"\n self._internal_debugger.step_until(self, position, max_steps)\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.terminate","title":"terminate()","text":"Interrupts the process, kills it and then terminates the background thread.
The debugger object will not be usable after this method is called. This method should only be called to free up resources when the debugger object is no longer needed.
Source code in libdebug/debugger/debugger.py def terminate(self: Debugger) -> None:\n \"\"\"Interrupts the process, kills it and then terminates the background thread.\n\n The debugger object will not be usable after this method is called.\n This method should only be called to free up resources when the debugger object is no longer needed.\n \"\"\"\n self._internal_debugger.terminate()\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.w","title":"w()","text":"Alias for the wait method.
Waits for the process to stop.
Source code in libdebug/debugger/debugger.py def w(self: Debugger) -> None:\n \"\"\"Alias for the `wait` method.\n\n Waits for the process to stop.\n \"\"\"\n self._internal_debugger.wait()\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.wait","title":"wait()","text":"Waits for the process to stop.
Source code in libdebug/debugger/debugger.py def wait(self: Debugger) -> None:\n \"\"\"Waits for the process to stop.\"\"\"\n self._internal_debugger.wait()\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.watchpoint","title":"watchpoint(position, condition='w', length=1, callback=None, file='hybrid')","text":"Sets a watchpoint at the specified location. Internally, watchpoints are implemented as breakpoints.
Parameters:
Name Type Description Default position int | bytes The location of the breakpoint.
required condition str The trigger condition for the watchpoint (either \"w\", \"rw\" or \"x\"). Defaults to \"w\".
'w' length int The size of the word being watched (1, 2, 4 or 8). Defaults to 1.
1 callback None | bool | Callable[[ThreadContext, Breakpoint], None] A callback to be called when the watchpoint is hit. If True, an empty callback will be set. Defaults to None.
None file str The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).
'hybrid' Source code in libdebug/debugger/debugger.py def watchpoint(\n self: Debugger,\n position: int | str,\n condition: str = \"w\",\n length: int = 1,\n callback: None | bool | Callable[[ThreadContext, Breakpoint], None] = None,\n file: str = \"hybrid\",\n) -> Breakpoint:\n \"\"\"Sets a watchpoint at the specified location. Internally, watchpoints are implemented as breakpoints.\n\n Args:\n position (int | bytes): The location of the breakpoint.\n condition (str, optional): The trigger condition for the watchpoint (either \"w\", \"rw\" or \"x\"). Defaults to \"w\".\n length (int, optional): The size of the word in being watched (1, 2, 4 or 8). Defaults to 1.\n callback (None | bool | Callable[[ThreadContext, Breakpoint], None], optional): A callback to be called when the watchpoint is hit. If True, an empty callback will be set. Defaults to None.\n file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).\n \"\"\"\n return self._internal_debugger.breakpoint(\n position,\n hardware=True,\n condition=condition,\n length=length,\n callback=callback,\n file=file,\n )\n"},{"location":"from_pydoc/generated/debugger/debugger/#libdebug.debugger.debugger.Debugger.wp","title":"wp(position, condition='w', length=1, callback=None, file='hybrid')","text":"Alias for the watchpoint method.
Sets a watchpoint at the specified location. Internally, watchpoints are implemented as breakpoints.
Parameters:
Name Type Description Default position int | bytes The location of the breakpoint.
required condition str The trigger condition for the watchpoint (either \"w\", \"rw\" or \"x\"). Defaults to \"w\".
'w' length int The size of the word being watched (1, 2, 4 or 8). Defaults to 1.
1 callback Callable[[ThreadContext, Breakpoint], None] A callback to be called when the watchpoint is hit. Defaults to None.
None file str The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).
'hybrid' Source code in libdebug/debugger/debugger.py def wp(\n self: Debugger,\n position: int | str,\n condition: str = \"w\",\n length: int = 1,\n callback: None | Callable[[ThreadContext, Breakpoint], None] = None,\n file: str = \"hybrid\",\n) -> Breakpoint:\n \"\"\"Alias for the `watchpoint` method.\n\n Sets a watchpoint at the specified location. Internally, watchpoints are implemented as breakpoints.\n\n Args:\n position (int | bytes): The location of the breakpoint.\n condition (str, optional): The trigger condition for the watchpoint (either \"w\", \"rw\" or \"x\"). Defaults to \"w\".\n length (int, optional): The size of the word in being watched (1, 2, 4 or 8). Defaults to 1.\n callback (Callable[[ThreadContext, Breakpoint], None], optional): A callback to be called when the watchpoint is hit. Defaults to None.\n file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).\n \"\"\"\n return self._internal_debugger.breakpoint(\n position,\n hardware=True,\n condition=condition,\n length=length,\n callback=callback,\n file=file,\n )\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/","title":"libdebug.debugger.internal_debugger","text":""},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger","title":"InternalDebugger","text":"A class that holds the global debugging state.
Source code inlibdebug/debugger/internal_debugger.py class InternalDebugger:\n \"\"\"A class that holds the global debugging state.\"\"\"\n\n aslr_enabled: bool\n \"\"\"A flag that indicates if ASLR is enabled or not.\"\"\"\n\n arch: str\n \"\"\"The architecture of the debugged process.\"\"\"\n\n argv: list[str]\n \"\"\"The command line arguments of the debugged process.\"\"\"\n\n env: dict[str, str] | None\n \"\"\"The environment variables of the debugged process.\"\"\"\n\n escape_antidebug: bool\n \"\"\"A flag that indicates if the debugger should escape anti-debugging techniques.\"\"\"\n\n fast_memory: bool\n \"\"\"A flag that indicates if the debugger should use a faster memory access method.\"\"\"\n\n autoreach_entrypoint: bool\n \"\"\"A flag that indicates if the debugger should automatically reach the entry point of the debugged process.\"\"\"\n\n auto_interrupt_on_command: bool\n \"\"\"A flag that indicates if the debugger should automatically interrupt the debugged process when a command is issued.\"\"\"\n\n follow_children: bool\n \"\"\"A flag that indicates if the debugger should follow child processes creating a new debugger for each one.\"\"\"\n\n breakpoints: dict[int, Breakpoint]\n \"\"\"A dictionary of all the breakpoints set on the process. Key: the address of the breakpoint.\"\"\"\n\n handled_syscalls: dict[int, SyscallHandler]\n \"\"\"A dictionary of all the syscall handled in the process. Key: the syscall number.\"\"\"\n\n caught_signals: dict[int, SignalCatcher]\n \"\"\"A dictionary of all the signals caught in the process. 
Key: the signal number.\"\"\"\n\n signals_to_block: list[int]\n \"\"\"The signals to not forward to the process.\"\"\"\n\n syscalls_to_pprint: list[int] | None\n \"\"\"The syscalls to pretty print.\"\"\"\n\n syscalls_to_not_pprint: list[int] | None\n \"\"\"The syscalls to not pretty print.\"\"\"\n\n kill_on_exit: bool\n \"\"\"A flag that indicates if the debugger should kill the debugged process when it exits.\"\"\"\n\n threads: list[ThreadContext]\n \"\"\"A list of all the threads of the debugged process.\"\"\"\n\n process_id: int\n \"\"\"The PID of the debugged process.\"\"\"\n\n pipe_manager: PipeManager\n \"\"\"The PipeManager used to communicate with the debugged process.\"\"\"\n\n memory: AbstractMemoryView\n \"\"\"The memory view of the debugged process.\"\"\"\n\n debugging_interface: DebuggingInterface\n \"\"\"The debugging interface used to communicate with the debugged process.\"\"\"\n\n instanced: bool = False\n \"\"\"Whether the process was started and has not been killed yet.\"\"\"\n\n is_debugging: bool = False\n \"\"\"Whether the debugger is currently debugging a process.\"\"\"\n\n children: list[Debugger]\n \"\"\"The list of child debuggers.\"\"\"\n\n pprint_syscalls: bool\n \"\"\"A flag that indicates if the debugger should pretty print syscalls.\"\"\"\n\n resume_context: ResumeContext\n \"\"\"Context that indicates if the debugger should resume the debugged process.\"\"\"\n\n debugger: Debugger\n \"\"\"The debugger object.\"\"\"\n\n stdin_settings_backup: list[Any]\n \"\"\"The backup of the stdin settings. 
Used to restore the original settings after possible conflicts due to the pipe manager interacactive mode.\"\"\"\n\n __polling_thread: Thread | None\n \"\"\"The background thread used to poll the process for state change.\"\"\"\n\n __polling_thread_command_queue: Queue | None\n \"\"\"The queue used to send commands to the background thread.\"\"\"\n\n __polling_thread_response_queue: Queue | None\n \"\"\"The queue used to receive responses from the background thread.\"\"\"\n\n _is_running: bool\n \"\"\"The overall state of the debugged process. True if the process is running, False otherwise.\"\"\"\n\n _is_migrated_to_gdb: bool\n \"\"\"A flag that indicates if the debuggee was migrated to GDB.\"\"\"\n\n _fast_memory: DirectMemoryView\n \"\"\"The memory view of the debugged process using the fast memory access method.\"\"\"\n\n _slow_memory: ChunkedMemoryView\n \"\"\"The memory view of the debugged process using the slow memory access method.\"\"\"\n\n _snapshot_count: int\n \"\"\"The counter used to assign an ID to each snapshot.\"\"\"\n\n def __init__(self: InternalDebugger) -> None:\n \"\"\"Initialize the context.\"\"\"\n # These must be reinitialized on every call to \"debugger\"\n self.aslr_enabled = False\n self.autoreach_entrypoint = True\n self.argv = []\n self.env = {}\n self.escape_antidebug = False\n self.breakpoints = {}\n self.handled_syscalls = {}\n self.caught_signals = {}\n self.syscalls_to_pprint = None\n self.syscalls_to_not_pprint = None\n self.signals_to_block = []\n self.pprint_syscalls = False\n self.pipe_manager = None\n self.process_id = 0\n self.threads = []\n self.instanced = False\n self.is_debugging = False\n self._is_running = False\n self._is_migrated_to_gdb = False\n self.resume_context = ResumeContext()\n self.stdin_settings_backup = []\n self.arch = map_arch(libcontext.platform)\n self.kill_on_exit = True\n self._process_memory_manager = ProcessMemoryManager()\n self.fast_memory = True\n self.__polling_thread_command_queue = Queue()\n 
self.__polling_thread_response_queue = Queue()\n self._snapshot_count = 0\n self.serialization_helper = SerializationHelper()\n self.children = []\n\n def clear(self: InternalDebugger) -> None:\n \"\"\"Reinitializes the context, so it is ready for a new run.\"\"\"\n # These must be reinitialized on every call to \"run\"\n self.breakpoints.clear()\n self.handled_syscalls.clear()\n self.caught_signals.clear()\n self.syscalls_to_pprint = None\n self.syscalls_to_not_pprint = None\n self.signals_to_block.clear()\n self.pprint_syscalls = False\n self.pipe_manager = None\n self.process_id = 0\n\n for t in self.threads:\n del t.regs.register_file\n del t.regs._fp_register_file\n\n self.threads.clear()\n self.instanced = False\n self.is_debugging = False\n self._is_running = False\n self.resume_context.clear()\n self.children.clear()\n\n def start_up(self: InternalDebugger) -> None:\n \"\"\"Starts up the context.\"\"\"\n # The context is linked to itself\n link_to_internal_debugger(self, self)\n\n self.start_processing_thread()\n with extend_internal_debugger(self):\n self.debugging_interface = provide_debugging_interface()\n self._fast_memory = DirectMemoryView(self._fast_read_memory, self._fast_write_memory)\n self._slow_memory = ChunkedMemoryView(\n self._peek_memory,\n self._poke_memory,\n unit_size=get_platform_gp_register_size(libcontext.platform),\n )\n\n def start_processing_thread(self: InternalDebugger) -> None:\n \"\"\"Starts the thread that will poll the traced process for state change.\"\"\"\n # Set as daemon so that the Python interpreter can exit even if the thread is still running\n self.__polling_thread = Thread(\n target=self.__polling_thread_function,\n name=\"libdebug__polling_thread\",\n daemon=True,\n )\n self.__polling_thread.start()\n\n def _background_invalid_call(self: InternalDebugger, *_: ..., **__: ...) 
-> None:\n \"\"\"Raises an error when an invalid call is made in background mode.\"\"\"\n raise RuntimeError(\"This method is not available in a callback.\")\n\n def run(self: InternalDebugger, redirect_pipes: bool = True) -> PipeManager | None:\n \"\"\"Starts the process and waits for it to stop.\n\n Args:\n redirect_pipes (bool): Whether to hook and redirect the pipes of the process to a PipeManager.\n \"\"\"\n if not self.argv:\n raise RuntimeError(\"No binary file specified.\")\n\n ensure_file_executable(self.argv[0])\n\n if self.is_debugging:\n liblog.debugger(\"Process already running, stopping it before restarting.\")\n self.kill()\n if self.threads:\n self.clear()\n\n self.debugging_interface.reset()\n\n self.instanced = True\n self.is_debugging = True\n\n if not self.__polling_thread_command_queue.empty():\n raise RuntimeError(\"Polling thread command queue not empty.\")\n\n self.__polling_thread_command_queue.put((self.__threaded_run, (redirect_pipes,)))\n\n self._join_and_check_status()\n\n if self.escape_antidebug:\n liblog.debugger(\"Enabling anti-debugging escape mechanism.\")\n self._enable_antidebug_escaping()\n\n if redirect_pipes and not self.pipe_manager:\n raise RuntimeError(\"Something went wrong during pipe initialization.\")\n\n self._process_memory_manager.open(self.process_id)\n\n return self.pipe_manager\n\n def attach(self: InternalDebugger, pid: int) -> None:\n \"\"\"Attaches to an existing process.\"\"\"\n if self.is_debugging:\n liblog.debugger(\"Process already running, stopping it before restarting.\")\n self.kill()\n if self.threads:\n self.clear()\n self.debugging_interface.reset()\n\n self.instanced = True\n self.is_debugging = True\n\n if not self.__polling_thread_command_queue.empty():\n raise RuntimeError(\"Polling thread command queue not empty.\")\n\n self.__polling_thread_command_queue.put((self.__threaded_attach, (pid,)))\n\n self._join_and_check_status()\n\n self._process_memory_manager.open(self.process_id)\n\n def 
detach(self: InternalDebugger) -> None:\n \"\"\"Detaches from the process.\"\"\"\n if not self.is_debugging:\n raise RuntimeError(\"Process not running, cannot detach.\")\n\n self._ensure_process_stopped()\n\n self.__polling_thread_command_queue.put((self.__threaded_detach, ()))\n\n self.is_debugging = False\n\n self._join_and_check_status()\n\n self._process_memory_manager.close()\n\n def set_child_debugger(self: InternalDebugger, child_pid: int) -> None:\n \"\"\"Sets the child debugger after a fork.\n\n Args:\n child_pid (int): The PID of the child process.\n \"\"\"\n # Create a new InternalDebugger instance for the child process with the same configuration\n # of the parent debugger\n child_internal_debugger = InternalDebugger()\n child_internal_debugger.argv = self.argv\n child_internal_debugger.env = self.env\n child_internal_debugger.aslr_enabled = self.aslr_enabled\n child_internal_debugger.autoreach_entrypoint = self.autoreach_entrypoint\n child_internal_debugger.auto_interrupt_on_command = self.auto_interrupt_on_command\n child_internal_debugger.escape_antidebug = self.escape_antidebug\n child_internal_debugger.fast_memory = self.fast_memory\n child_internal_debugger.kill_on_exit = self.kill_on_exit\n child_internal_debugger.follow_children = self.follow_children\n\n # Create the new Debugger instance for the child process\n child_debugger = Debugger()\n child_debugger.post_init_(child_internal_debugger)\n child_internal_debugger.debugger = child_debugger\n child_debugger.arch = self.arch\n\n # Attach to the child process with the new debugger\n child_internal_debugger.attach(child_pid)\n self.children.append(child_debugger)\n liblog.debugger(\n \"Child process with pid %d registered to the parent debugger (pid %d)\",\n child_pid,\n self.process_id,\n )\n\n @background_alias(_background_invalid_call)\n def kill(self: InternalDebugger) -> None:\n \"\"\"Kills the process.\"\"\"\n if not self.is_debugging:\n raise RuntimeError(\"No process currently debugged, 
cannot kill.\")\n try:\n self._ensure_process_stopped()\n except (OSError, RuntimeError):\n # This exception might occur if the process has already died\n liblog.debugger(\"OSError raised during kill\")\n\n self._process_memory_manager.close()\n\n self.__polling_thread_command_queue.put((self.__threaded_kill, ()))\n\n self.instanced = False\n self.is_debugging = False\n\n self.set_all_threads_as_dead()\n\n if self.pipe_manager:\n self.pipe_manager.close()\n\n self._join_and_check_status()\n\n def terminate(self: InternalDebugger) -> None:\n \"\"\"Interrupts the process, kills it and then terminates the background thread.\n\n The debugger object will not be usable after this method is called.\n This method should only be called to free up resources when the debugger object is no longer needed.\n \"\"\"\n if self.instanced and self.running:\n try:\n self.interrupt()\n except ProcessLookupError:\n # The process has already been killed by someone or something else\n liblog.debugger(\"Interrupting process failed: already terminated\")\n\n if self.instanced and self.is_debugging:\n try:\n self.kill()\n except ProcessLookupError:\n # The process has already been killed by someone or something else\n liblog.debugger(\"Killing process failed: already terminated\")\n\n self.instanced = False\n self.is_debugging = False\n\n if self.__polling_thread is not None:\n self.__polling_thread_command_queue.put((THREAD_TERMINATE, ()))\n self.__polling_thread.join()\n del self.__polling_thread\n self.__polling_thread = None\n\n # Remove elemement from internal_debugger_holder to avoid memleaks\n remove_internal_debugger_refs(self)\n\n # Clean up the register accessors\n for thread in self.threads:\n thread._register_holder.cleanup()\n\n @background_alias(_background_invalid_call)\n @change_state_function_process\n def cont(self: InternalDebugger) -> None:\n \"\"\"Continues the process.\n\n Args:\n auto_wait (bool, optional): Whether to automatically wait for the process to stop after 
continuing. Defaults to True.\n \"\"\"\n self.__polling_thread_command_queue.put((self.__threaded_cont, ()))\n\n self._join_and_check_status()\n\n self.__polling_thread_command_queue.put((self.__threaded_wait, ()))\n\n @background_alias(_background_invalid_call)\n def interrupt(self: InternalDebugger) -> None:\n \"\"\"Interrupts the process.\"\"\"\n if not self.is_debugging:\n raise RuntimeError(\"Process not running, cannot interrupt.\")\n\n # We have to ensure that at least one thread is alive before executing the method\n if self.threads[0].dead:\n raise RuntimeError(\"All threads are dead.\")\n\n if not self.running:\n return\n\n self.resume_context.force_interrupt = True\n os.kill(self.process_id, SIGSTOP)\n\n self.wait()\n\n @background_alias(_background_invalid_call)\n def wait(self: InternalDebugger) -> None:\n \"\"\"Waits for the process to stop.\"\"\"\n if not self.is_debugging:\n raise RuntimeError(\"Process not running, cannot wait.\")\n\n self._join_and_check_status()\n\n if self.threads[0].dead or not self.running:\n # Most of the time the function returns here, as there was a wait already\n # queued by the previous command\n return\n\n self.__polling_thread_command_queue.put((self.__threaded_wait, ()))\n\n self._join_and_check_status()\n\n @property\n @change_state_function_process\n def maps(self: InternalDebugger) -> MemoryMapList[MemoryMap]:\n \"\"\"Returns the memory maps of the process.\"\"\"\n self._ensure_process_stopped()\n return self.debugging_interface.get_maps()\n\n @property\n @change_state_function_process\n def memory(self: InternalDebugger) -> AbstractMemoryView:\n \"\"\"The memory view of the debugged process.\"\"\"\n return self._fast_memory if self.fast_memory else self._slow_memory\n\n def pprint_maps(self: InternalDebugger) -> None:\n \"\"\"Prints the memory maps of the process.\"\"\"\n self._ensure_process_stopped()\n pprint_maps_util(self.maps)\n\n def pprint_memory(\n self: InternalDebugger,\n start: int,\n end: int,\n file: 
str = \"hybrid\",\n override_word_size: int | None = None,\n integer_mode: bool = False,\n ) -> None:\n \"\"\"Pretty print the memory diff.\n\n Args:\n start (int): The start address of the memory diff.\n end (int): The end address of the memory diff.\n file (str, optional): The backing file for relative / absolute addressing. Defaults to \"hybrid\".\n override_word_size (int, optional): The word size to use for the diff in place of the ISA word size. Defaults to None.\n integer_mode (bool, optional): If True, the diff will be printed as hex integers (system endianness applies). Defaults to False.\n \"\"\"\n if start > end:\n tmp = start\n start = end\n end = tmp\n\n word_size = get_platform_gp_register_size(self.arch) if override_word_size is None else override_word_size\n\n # Resolve the address\n if file == \"absolute\":\n address_start = start\n elif file == \"hybrid\":\n try:\n # Try to resolve the address as absolute\n self.memory[start, 1, \"absolute\"]\n address_start = start\n except ValueError:\n # If the address is not in the maps, we use the binary file\n address_start = start + self.maps.filter(\"binary\")[0].start\n file = \"binary\"\n else:\n map_file = self.maps.filter(file)[0]\n address_start = start + map_file.base\n file = map_file.backing_file if file != \"binary\" else \"binary\"\n\n extract = self.memory[start:end, file]\n\n file_info = f\" (file: {file})\" if file not in (\"absolute\", \"hybrid\") else \"\"\n print(f\"Memory from {start:#x} to {end:#x}{file_info}:\")\n\n pprint_memory_util(\n address_start,\n extract,\n word_size,\n self.maps,\n integer_mode=integer_mode,\n )\n\n @background_alias(_background_invalid_call)\n @change_state_function_process\n def breakpoint(\n self: InternalDebugger,\n position: int | str,\n hardware: bool = False,\n condition: str = \"x\",\n length: int = 1,\n callback: None | bool | Callable[[ThreadContext, Breakpoint], None] = None,\n file: str = \"hybrid\",\n ) -> Breakpoint:\n \"\"\"Sets a breakpoint at 
the specified location.\n\n Args:\n position (int | bytes): The location of the breakpoint.\n hardware (bool, optional): Whether the breakpoint should be hardware-assisted or purely software. Defaults to False.\n condition (str, optional): The trigger condition for the breakpoint. Defaults to None.\n length (int, optional): The length of the breakpoint. Only for watchpoints. Defaults to 1.\n callback (None | bool | Callable[[ThreadContext, Breakpoint], None], optional): A callback to be called when the breakpoint is hit. If True, an empty callback will be set. Defaults to None.\n file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).\n \"\"\"\n if isinstance(position, str):\n address = self.resolve_symbol(position, file)\n else:\n address = self.resolve_address(position, file)\n position = hex(address)\n\n if condition != \"x\" and not hardware:\n raise ValueError(\"Breakpoint condition is supported only for hardware watchpoints.\")\n\n if callback is True:\n\n def callback(_: ThreadContext, __: Breakpoint) -> None:\n pass\n\n bp = Breakpoint(address, position, 0, hardware, callback, condition.lower(), length)\n\n if hardware:\n validate_hardware_breakpoint(self.arch, bp)\n\n link_to_internal_debugger(bp, self)\n\n self.__polling_thread_command_queue.put((self.__threaded_breakpoint, (bp,)))\n\n self._join_and_check_status()\n\n # the breakpoint should have been set by interface\n if address not in self.breakpoints:\n raise RuntimeError(\"Something went wrong while inserting the breakpoint.\")\n\n return bp\n\n @background_alias(_background_invalid_call)\n @change_state_function_process\n def catch_signal(\n self: InternalDebugger,\n signal: int | str,\n callback: None | bool | Callable[[ThreadContext, SignalCatcher], None] = None,\n recursive: bool = False,\n ) -> SignalCatcher:\n 
\"\"\"Catch a signal in the target process.\n\n Args:\n signal (int | str): The signal to catch. If \"*\", \"ALL\", \"all\" or -1 is passed, all signals will be caught.\n callback (None | bool | Callable[[ThreadContext, SignalCatcher], None], optional): A callback to be called when the signal is caught. If True, an empty callback will be set. Defaults to None.\n recursive (bool, optional): Whether, when the signal is hijacked with another one, the signal catcher associated with the new signal should be considered as well. Defaults to False.\n\n Returns:\n SignalCatcher: The SignalCatcher object.\n \"\"\"\n if isinstance(signal, str):\n signal_number = resolve_signal_number(signal)\n elif isinstance(signal, int):\n signal_number = signal\n else:\n raise TypeError(\"signal must be an int or a str\")\n\n match signal_number:\n case SIGKILL.value:\n raise ValueError(\n f\"Cannot catch SIGKILL ({signal_number}) as it cannot be caught or ignored. This is a kernel restriction.\",\n )\n case SIGSTOP.value:\n raise ValueError(\n f\"Cannot catch SIGSTOP ({signal_number}) as it is used by the debugger or ptrace for their internal operations.\",\n )\n case SIGTRAP.value:\n liblog.warning(\n f\"Catching SIGTRAP ({signal_number}) may interfere with libdebug operations as it is used by the debugger or ptrace for their internal operations. Use with care.\"\n )\n\n if signal_number in self.caught_signals:\n liblog.warning(\n f\"Signal {resolve_signal_name(signal_number)} ({signal_number}) has already been caught. 
Overriding it.\",\n )\n\n if not isinstance(recursive, bool):\n raise TypeError(\"recursive must be a boolean\")\n\n if callback is True:\n\n def callback(_: ThreadContext, __: SignalCatcher) -> None:\n pass\n\n catcher = SignalCatcher(signal_number, callback, recursive)\n\n link_to_internal_debugger(catcher, self)\n\n self.__polling_thread_command_queue.put((self.__threaded_catch_signal, (catcher,)))\n\n self._join_and_check_status()\n\n return catcher\n\n @background_alias(_background_invalid_call)\n @change_state_function_process\n def hijack_signal(\n self: InternalDebugger,\n original_signal: int | str,\n new_signal: int | str,\n recursive: bool = False,\n ) -> SignalCatcher:\n \"\"\"Hijack a signal in the target process.\n\n Args:\n original_signal (int | str): The signal to hijack. If \"*\", \"ALL\", \"all\" or -1 is passed, all signals will be hijacked.\n new_signal (int | str): The signal to hijack the original signal with.\n recursive (bool, optional): Whether, when the signal is hijacked with another one, the signal catcher associated with the new signal should be considered as well. 
Defaults to False.\n\n Returns:\n SignalCatcher: The SignalCatcher object.\n \"\"\"\n if isinstance(original_signal, str):\n original_signal_number = resolve_signal_number(original_signal)\n else:\n original_signal_number = original_signal\n\n new_signal_number = resolve_signal_number(new_signal) if isinstance(new_signal, str) else new_signal\n\n if new_signal_number == -1:\n raise ValueError(\"Cannot hijack a signal with the 'ALL' signal.\")\n\n if original_signal_number == new_signal_number:\n raise ValueError(\n \"The original signal and the new signal must be different during hijacking.\",\n )\n\n def callback(thread: ThreadContext, _: SignalCatcher) -> None:\n \"\"\"The callback to execute when the signal is received.\"\"\"\n thread.signal = new_signal_number\n\n return self.catch_signal(original_signal_number, callback, recursive)\n\n @background_alias(_background_invalid_call)\n @change_state_function_process\n def handle_syscall(\n self: InternalDebugger,\n syscall: int | str,\n on_enter: Callable[[ThreadContext, SyscallHandler], None] | None = None,\n on_exit: Callable[[ThreadContext, SyscallHandler], None] | None = None,\n recursive: bool = False,\n ) -> SyscallHandler:\n \"\"\"Handle a syscall in the target process.\n\n Args:\n syscall (int | str): The syscall name or number to handle. If \"*\", \"ALL\", \"all\", or -1 is passed, all syscalls will be handled.\n on_enter (None | bool |Callable[[ThreadContext, SyscallHandler], None], optional): The callback to execute when the syscall is entered. If True, an empty callback will be set. Defaults to None.\n on_exit (None | bool | Callable[[ThreadContext, SyscallHandler], None], optional): The callback to execute when the syscall is exited. If True, an empty callback will be set. Defaults to None.\n recursive (bool, optional): Whether, when the syscall is hijacked with another one, the syscall handler associated with the new syscall should be considered as well. 
Defaults to False.\n\n Returns:\n SyscallHandler: The SyscallHandler object.\n \"\"\"\n syscall_number = resolve_syscall_number(self.arch, syscall) if isinstance(syscall, str) else syscall\n\n if not isinstance(recursive, bool):\n raise TypeError(\"recursive must be a boolean\")\n\n if on_enter is True:\n\n def on_enter(_: ThreadContext, __: SyscallHandler) -> None:\n pass\n\n if on_exit is True:\n\n def on_exit(_: ThreadContext, __: SyscallHandler) -> None:\n pass\n\n # Check if the syscall is already handled (by the user or by the pretty print handler)\n if syscall_number in self.handled_syscalls:\n handler = self.handled_syscalls[syscall_number]\n if handler.on_enter_user or handler.on_exit_user:\n liblog.warning(\n f\"Syscall {resolve_syscall_name(self.arch, syscall_number)} is already handled by a user-defined handler. Overriding it.\",\n )\n handler.on_enter_user = on_enter\n handler.on_exit_user = on_exit\n handler.recursive = recursive\n handler.enabled = True\n else:\n handler = SyscallHandler(\n syscall_number,\n on_enter,\n on_exit,\n None,\n None,\n recursive,\n )\n\n link_to_internal_debugger(handler, self)\n\n self.__polling_thread_command_queue.put(\n (self.__threaded_handle_syscall, (handler,)),\n )\n\n self._join_and_check_status()\n\n return handler\n\n @background_alias(_background_invalid_call)\n @change_state_function_process\n def hijack_syscall(\n self: InternalDebugger,\n original_syscall: int | str,\n new_syscall: int | str,\n recursive: bool = True,\n **kwargs: int,\n ) -> SyscallHandler:\n \"\"\"Hijacks a syscall in the target process.\n\n Args:\n original_syscall (int | str): The syscall name or number to hijack. 
If \"*\", \"ALL\", \"all\" or -1 is passed, all syscalls will be hijacked.\n new_syscall (int | str): The syscall name or number to hijack the original syscall with.\n recursive (bool, optional): Whether, when the syscall is hijacked with another one, the syscall handler associated with the new syscall should be considered as well. Defaults to False.\n **kwargs: (int, optional): The arguments to pass to the new syscall.\n\n Returns:\n SyscallHandler: The SyscallHandler object.\n \"\"\"\n if set(kwargs) - SyscallHijacker.allowed_args:\n raise ValueError(\"Invalid keyword arguments in syscall hijack\")\n\n if isinstance(original_syscall, str):\n original_syscall_number = resolve_syscall_number(self.arch, original_syscall)\n else:\n original_syscall_number = original_syscall\n\n new_syscall_number = (\n resolve_syscall_number(self.arch, new_syscall) if isinstance(new_syscall, str) else new_syscall\n )\n\n if new_syscall_number == -1:\n raise ValueError(\"Cannot hijack a syscall with the 'ALL' syscall.\")\n\n if original_syscall_number == new_syscall_number:\n raise ValueError(\n \"The original syscall and the new syscall must be different during hijacking.\",\n )\n\n on_enter = SyscallHijacker().create_hijacker(\n new_syscall_number,\n **kwargs,\n )\n\n # Check if the syscall is already handled (by the user or by the pretty print handler)\n if original_syscall_number in self.handled_syscalls:\n handler = self.handled_syscalls[original_syscall_number]\n if handler.on_enter_user or handler.on_exit_user:\n liblog.warning(\n f\"Syscall {original_syscall_number} is already handled by a user-defined handler. 
Overriding it.\",\n )\n handler.on_enter_user = on_enter\n handler.on_exit_user = None\n handler.recursive = recursive\n handler.enabled = True\n else:\n handler = SyscallHandler(\n original_syscall_number,\n on_enter,\n None,\n None,\n None,\n recursive,\n )\n\n link_to_internal_debugger(handler, self)\n\n self.__polling_thread_command_queue.put(\n (self.__threaded_handle_syscall, (handler,)),\n )\n\n self._join_and_check_status()\n\n return handler\n\n @background_alias(_background_invalid_call)\n @change_state_function_process\n def gdb(\n self: InternalDebugger,\n migrate_breakpoints: bool = True,\n open_in_new_process: bool = True,\n blocking: bool = True,\n ) -> GdbResumeEvent:\n \"\"\"Migrates the current debugging session to GDB.\"\"\"\n # TODO: not needed?\n self.interrupt()\n\n # Create the command file\n command_file = self._craft_gdb_migration_file(migrate_breakpoints)\n\n if open_in_new_process and libcontext.terminal:\n lambda_fun = self._open_gdb_in_new_process(command_file)\n elif open_in_new_process:\n self._auto_detect_terminal()\n if not libcontext.terminal:\n liblog.warning(\n \"Cannot auto-detect terminal. Please configure the terminal in libcontext.terminal. 
Opening gdb in the current shell.\",\n )\n lambda_fun = self._open_gdb_in_shell(command_file)\n else:\n lambda_fun = self._open_gdb_in_new_process(command_file)\n else:\n lambda_fun = self._open_gdb_in_shell(command_file)\n\n resume_event = GdbResumeEvent(self, lambda_fun)\n\n self._is_migrated_to_gdb = True\n\n if blocking:\n resume_event.join()\n return None\n else:\n return resume_event\n\n def _auto_detect_terminal(self: InternalDebugger) -> None:\n \"\"\"Auto-detects the terminal.\"\"\"\n try:\n process = Process(self.process_id)\n while process:\n pname = process.name().lower()\n if terminal_command := TerminalTypes.get_command(pname):\n libcontext.terminal = terminal_command\n liblog.debugger(f\"Auto-detected terminal: {libcontext.terminal}\")\n process = process.parent()\n except Error:\n pass\n\n def _craft_gdb_migration_command(self: InternalDebugger, migrate_breakpoints: bool) -> str:\n \"\"\"Crafts the command to migrate to GDB.\n\n Args:\n migrate_breakpoints (bool): Whether to migrate the breakpoints.\n\n Returns:\n str: The command to migrate to GDB.\n \"\"\"\n gdb_command = f'/bin/gdb -q --pid {self.process_id} -ex \"source {GDB_GOBACK_LOCATION} \" -ex \"ni\" -ex \"ni\"'\n\n if not migrate_breakpoints:\n return gdb_command\n\n for bp in self.breakpoints.values():\n if bp.enabled:\n if bp.hardware and bp.condition == \"rw\":\n gdb_command += f' -ex \"awatch *(int{bp.length * 8}_t *) {bp.address:#x}\"'\n elif bp.hardware and bp.condition == \"w\":\n gdb_command += f' -ex \"watch *(int{bp.length * 8}_t *) {bp.address:#x}\"'\n elif bp.hardware:\n gdb_command += f' -ex \"hb *{bp.address:#x}\"'\n else:\n gdb_command += f' -ex \"b *{bp.address:#x}\"'\n\n if self.threads[0].instruction_pointer == bp.address and not bp.hardware:\n # We have to enqueue an additional continue\n gdb_command += ' -ex \"ni\"'\n\n return gdb_command\n\n def _craft_gdb_migration_file(self: InternalDebugger, migrate_breakpoints: bool) -> str:\n \"\"\"Crafts the file to migrate to 
GDB.\n\n Args:\n migrate_breakpoints (bool): Whether to migrate the breakpoints.\n\n Returns:\n str: The path to the file.\n \"\"\"\n # Different terminals accept what to run in different ways. To make this work with all terminals, we need to\n # create a temporary script that will run the command. This script will be executed by the terminal.\n command = self._craft_gdb_migration_command(migrate_breakpoints)\n with NamedTemporaryFile(delete=False, mode=\"w\", suffix=\".sh\") as temp_file:\n temp_file.write(\"#!/bin/bash\\n\")\n temp_file.write(command)\n script_path = temp_file.name\n\n # Make the script executable\n Path.chmod(Path(script_path), 0o755)\n return script_path\n\n def _open_gdb_in_new_process(self: InternalDebugger, script_path: str) -> None:\n \"\"\"Opens GDB in a new process following the configuration in libcontext.terminal.\n\n Args:\n script_path (str): The path to the script to run in the terminal.\n \"\"\"\n # Check if the terminal has been configured correctly\n try:\n check_call([*libcontext.terminal, \"uname\"], stderr=DEVNULL, stdout=DEVNULL)\n except (CalledProcessError, FileNotFoundError) as err:\n raise RuntimeError(\n \"Failed to open GDB in terminal. 
Check the terminal configuration in libcontext.terminal.\",\n ) from err\n\n self.__polling_thread_command_queue.put((self.__threaded_gdb, ()))\n self._join_and_check_status()\n\n # Create the command to open the terminal and run the script\n command = [*libcontext.terminal, script_path]\n\n # Open GDB in a new terminal\n terminal_pid = Popen(command).pid\n\n # This is the command line that we are looking for\n cmdline_target = [\"/bin/bash\", script_path]\n\n self._wait_for_gdb(terminal_pid, cmdline_target)\n\n def wait_for_termination() -> None:\n liblog.debugger(\"Waiting for GDB process to terminate...\")\n\n for proc in process_iter():\n try:\n cmdline = proc.cmdline()\n except ZombieProcess:\n # This is a zombie process, which psutil tracks but we cannot interact with\n continue\n\n if cmdline_target == cmdline:\n gdb_process = proc\n break\n else:\n raise RuntimeError(\"GDB process not found.\")\n\n while gdb_process.is_running() and gdb_process.status() != STATUS_ZOMBIE:\n # As the GDB process is in a different group, we do not have the authority to wait on it\n # So we must keep polling it until it is no longer running\n pass\n\n return wait_for_termination\n\n def _open_gdb_in_shell(self: InternalDebugger, script_path: str) -> None:\n \"\"\"Open GDB in the current shell.\n\n Args:\n script_path (str): The path to the script to run in the terminal.\n \"\"\"\n self.__polling_thread_command_queue.put((self.__threaded_gdb, ()))\n self._join_and_check_status()\n\n gdb_pid = os.fork()\n\n if gdb_pid == 0: # This is the child process.\n os.execv(\"/bin/bash\", [\"/bin/bash\", script_path])\n raise RuntimeError(\"Failed to execute GDB.\")\n\n # This is the parent process.\n # Parent ignores SIGINT, so only GDB (child) receives it\n signal.signal(signal.SIGINT, signal.SIG_IGN)\n\n def wait_for_termination() -> None:\n # Wait for the child process to finish\n os.waitpid(gdb_pid, 0)\n\n # Reset the SIGINT behavior to default handling after child exits\n 
signal.signal(signal.SIGINT, signal.SIG_DFL)\n\n return wait_for_termination\n\n def _wait_for_gdb(self: InternalDebugger, terminal_pid: int, cmdline_target: list[str]) -> None:\n \"\"\"Waits for GDB to open in the terminal.\n\n Args:\n terminal_pid (int): The PID of the terminal process.\n cmdline_target (list[str]): The command line that we are looking for.\n \"\"\"\n # We need to wait for GDB to open in the terminal. However, different terminals have different behaviors\n # so we need to manually check if the terminal is still alive and if GDB has opened\n waiting_for_gdb = True\n terminal_alive = False\n scan_after_terminal_death = 0\n scan_after_terminal_death_max = 3\n while waiting_for_gdb:\n terminal_alive = False\n for proc in process_iter():\n try:\n cmdline = proc.cmdline()\n if cmdline == cmdline_target:\n waiting_for_gdb = False\n elif proc.pid == terminal_pid:\n terminal_alive = True\n except ZombieProcess:\n # This is a zombie process, which psutil tracks but we cannot interact with\n continue\n if not terminal_alive and waiting_for_gdb and scan_after_terminal_death < scan_after_terminal_death_max:\n # If the terminal has died, we need to wait a bit before we can be sure that GDB will not open.\n # Indeed, some terminals take different steps to open GDB. We must be sure to refresh the list\n # of processes. 
One extra iteration should be enough, but we will iterate more just to be sure.\n scan_after_terminal_death += 1\n elif not terminal_alive and waiting_for_gdb:\n # If the terminal has died and GDB has not opened, we are sure that GDB will not open\n raise RuntimeError(\"Failed to open GDB in terminal.\")\n\n def _resume_from_gdb(self: InternalDebugger) -> None:\n \"\"\"Resumes the process after migrating from GDB.\"\"\"\n self.__polling_thread_command_queue.put((self.__threaded_migrate_from_gdb, ()))\n\n self._join_and_check_status()\n\n self._is_migrated_to_gdb = False\n\n def _background_step(self: InternalDebugger, thread: ThreadContext) -> None:\n \"\"\"Executes a single instruction of the process.\n\n Args:\n thread (ThreadContext): The thread to step. Defaults to None.\n \"\"\"\n self.__threaded_step(thread)\n self.__threaded_wait()\n\n # At this point, we need to continue the execution of the callback from which the step was called\n self.resume_context.resume = True\n\n @background_alias(_background_step)\n @change_state_function_thread\n def step(self: InternalDebugger, thread: ThreadContext) -> None:\n \"\"\"Executes a single instruction of the process.\n\n Args:\n thread (ThreadContext): The thread to step. Defaults to None.\n \"\"\"\n self._ensure_process_stopped()\n self.__polling_thread_command_queue.put((self.__threaded_step, (thread,)))\n self.__polling_thread_command_queue.put((self.__threaded_wait, ()))\n self._join_and_check_status()\n\n def _background_step_until(\n self: InternalDebugger,\n thread: ThreadContext,\n position: int | str,\n max_steps: int = -1,\n file: str = \"hybrid\",\n ) -> None:\n \"\"\"Executes instructions of the process until the specified location is reached.\n\n Args:\n thread (ThreadContext): The thread to step. Defaults to None.\n position (int | bytes): The location to reach.\n max_steps (int, optional): The maximum number of steps to execute. 
Defaults to -1.\n            file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to resolve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).\n        \"\"\"\n        if isinstance(position, str):\n            address = self.resolve_symbol(position, file)\n        else:\n            address = self.resolve_address(position, file)\n\n        self.__threaded_step_until(thread, address, max_steps)\n\n        # At this point, we need to continue the execution of the callback from which the step was called\n        self.resume_context.resume = True\n\n    @background_alias(_background_step_until)\n    @change_state_function_thread\n    def step_until(\n        self: InternalDebugger,\n        thread: ThreadContext,\n        position: int | str,\n        max_steps: int = -1,\n        file: str = \"hybrid\",\n    ) -> None:\n        \"\"\"Executes instructions of the process until the specified location is reached.\n\n        Args:\n            thread (ThreadContext): The thread to step.\n            position (int | str): The location to reach.\n            max_steps (int, optional): The maximum number of steps to execute. Defaults to -1.\n            file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to resolve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).\n        \"\"\"\n        if isinstance(position, str):\n            address = self.resolve_symbol(position, file)\n        else:\n            address = self.resolve_address(position, file)\n\n        arguments = (\n            thread,\n            address,\n            max_steps,\n        )\n\n        self.__polling_thread_command_queue.put((self.__threaded_step_until, arguments))\n\n        self._join_and_check_status()\n\n    def _background_finish(\n        self: InternalDebugger,\n        thread: ThreadContext,\n        heuristic: str = \"backtrace\",\n    ) -> None:\n        \"\"\"Continues execution until the current function returns or the process stops.\n\n        The command requires a heuristic to determine the end of the function. 
The available heuristics are:\n - `backtrace`: The debugger will place a breakpoint on the saved return address found on the stack and continue execution on all threads.\n - `step-mode`: The debugger will step on the specified thread until the current function returns. This will be slower.\n\n Args:\n thread (ThreadContext): The thread to finish.\n heuristic (str, optional): The heuristic to use. Defaults to \"backtrace\".\n \"\"\"\n self.__threaded_finish(thread, heuristic)\n\n # At this point, we need to continue the execution of the callback from which the step was called\n self.resume_context.resume = True\n\n @background_alias(_background_finish)\n @change_state_function_thread\n def finish(self: InternalDebugger, thread: ThreadContext, heuristic: str = \"backtrace\") -> None:\n \"\"\"Continues execution until the current function returns or the process stops.\n\n The command requires a heuristic to determine the end of the function. The available heuristics are:\n - `backtrace`: The debugger will place a breakpoint on the saved return address found on the stack and continue execution on all threads.\n - `step-mode`: The debugger will step on the specified thread until the current function returns. This will be slower.\n\n Args:\n thread (ThreadContext): The thread to finish.\n heuristic (str, optional): The heuristic to use. Defaults to \"backtrace\".\n \"\"\"\n self.__polling_thread_command_queue.put(\n (self.__threaded_finish, (thread, heuristic)),\n )\n\n self._join_and_check_status()\n\n def _background_next(\n self: InternalDebugger,\n thread: ThreadContext,\n ) -> None:\n \"\"\"Executes the next instruction of the process. 
If the instruction is a call, the debugger will continue until the called function returns.\"\"\"\n        self.__threaded_next(thread)\n\n        # At this point, we need to continue the execution of the callback from which the step was called\n        self.resume_context.resume = True\n\n    @background_alias(_background_next)\n    @change_state_function_thread\n    def next(self: InternalDebugger, thread: ThreadContext) -> None:\n        \"\"\"Executes the next instruction of the process. If the instruction is a call, the debugger will continue until the called function returns.\"\"\"\n        self._ensure_process_stopped()\n        self.__polling_thread_command_queue.put((self.__threaded_next, (thread,)))\n        self._join_and_check_status()\n\n    def enable_pretty_print(\n        self: InternalDebugger,\n    ) -> SyscallHandler:\n        \"\"\"Handles a syscall in the target process to pretty print its arguments and return value.\"\"\"\n        self._ensure_process_stopped()\n\n        syscall_numbers = get_all_syscall_numbers(self.arch)\n\n        for syscall_number in syscall_numbers:\n            # Check if the syscall is already handled (by the user or by the pretty print handler)\n            if syscall_number in self.handled_syscalls:\n                handler = self.handled_syscalls[syscall_number]\n                if syscall_number not in (self.syscalls_to_not_pprint or []) and syscall_number in (\n                    self.syscalls_to_pprint or syscall_numbers\n                ):\n                    handler.on_enter_pprint = pprint_on_enter\n                    handler.on_exit_pprint = pprint_on_exit\n                else:\n                    # Remove the pretty print handler from previous pretty print calls\n                    handler.on_enter_pprint = None\n                    handler.on_exit_pprint = None\n            elif syscall_number not in (self.syscalls_to_not_pprint or []) and syscall_number in (\n                self.syscalls_to_pprint or syscall_numbers\n            ):\n                handler = SyscallHandler(\n                    syscall_number,\n                    None,\n                    None,\n                    pprint_on_enter,\n                    pprint_on_exit,\n                )\n\n                link_to_internal_debugger(handler, self)\n\n                # We have to disable the handler since it is not user-defined\n                handler.disable()\n\n                self.__polling_thread_command_queue.put(\n                    
(self.__threaded_handle_syscall, (handler,)),\n )\n\n self._join_and_check_status()\n\n def disable_pretty_print(self: InternalDebugger) -> None:\n \"\"\"Disable the handler for all the syscalls that are pretty printed.\"\"\"\n self._ensure_process_stopped()\n\n installed_handlers = list(self.handled_syscalls.values())\n for handler in installed_handlers:\n if handler.on_enter_pprint or handler.on_exit_pprint:\n if handler.on_enter_user or handler.on_exit_user:\n handler.on_enter_pprint = None\n handler.on_exit_pprint = None\n else:\n self.__polling_thread_command_queue.put(\n (self.__threaded_unhandle_syscall, (handler,)),\n )\n\n self._join_and_check_status()\n\n def insert_new_thread(self: InternalDebugger, thread: ThreadContext) -> None:\n \"\"\"Insert a new thread in the context.\n\n Args:\n thread (ThreadContext): the thread to insert.\n \"\"\"\n if thread in self.threads:\n raise RuntimeError(\"Thread already registered.\")\n\n self.threads.append(thread)\n\n def set_thread_as_dead(\n self: InternalDebugger,\n thread_id: int,\n exit_code: int | None,\n exit_signal: int | None,\n ) -> None:\n \"\"\"Set a thread as dead and update its exit code and exit signal.\n\n Args:\n thread_id (int): the ID of the thread to set as dead.\n exit_code (int, optional): the exit code of the thread.\n exit_signal (int, optional): the exit signal of the thread.\n \"\"\"\n for thread in self.threads:\n if thread.thread_id == thread_id:\n thread.set_as_dead()\n thread._exit_code = exit_code\n thread._exit_signal = exit_signal\n break\n\n def set_all_threads_as_dead(self: InternalDebugger) -> None:\n \"\"\"Set all threads as dead.\"\"\"\n for thread in self.threads:\n thread.set_as_dead()\n\n def get_thread_by_id(self: InternalDebugger, thread_id: int) -> ThreadContext:\n \"\"\"Get a thread by its ID.\n\n Args:\n thread_id (int): the ID of the thread to get.\n\n Returns:\n ThreadContext: the thread with the specified ID.\n \"\"\"\n for thread in self.threads:\n if thread.thread_id 
== thread_id and not thread.dead:\n return thread\n\n return None\n\n def resolve_address(\n self: InternalDebugger,\n address: int,\n backing_file: str,\n skip_absolute_address_validation: bool = False,\n ) -> int:\n \"\"\"Normalizes and validates the specified address.\n\n Args:\n address (int): The address to normalize and validate.\n backing_file (str): The backing file to resolve the address in.\n skip_absolute_address_validation (bool, optional): Whether to skip bounds checking for absolute addresses. Defaults to False.\n\n Returns:\n int: The normalized and validated address.\n\n Raises:\n ValueError: If the substring `backing_file` is present in multiple backing files.\n \"\"\"\n if skip_absolute_address_validation and backing_file == \"absolute\":\n return address\n\n maps = self.maps\n\n if backing_file in [\"hybrid\", \"absolute\"]:\n if maps.filter(address):\n # If the address is absolute, we can return it directly\n return address\n elif backing_file == \"absolute\":\n # The address is explicitly an absolute address but we did not find it\n raise ValueError(\n \"The specified absolute address does not exist. Check the address or specify a backing file.\",\n )\n else:\n # If the address was not found and the backing file is not \"absolute\",\n # we have to assume it is in the main map\n backing_file = self._process_full_path\n liblog.warning(\n f\"No backing file specified and no corresponding absolute address found for {hex(address)}. 
Assuming `{backing_file}`.\",\n )\n\n filtered_maps = maps.filter(backing_file)\n\n return normalize_and_validate_address(address, filtered_maps)\n\n @change_state_function_process\n def resolve_symbol(self: InternalDebugger, symbol: str, backing_file: str) -> int:\n \"\"\"Resolves the address of the specified symbol.\n\n Args:\n symbol (str): The symbol to resolve.\n backing_file (str): The backing file to resolve the symbol in.\n\n Returns:\n int: The address of the symbol.\n \"\"\"\n if backing_file == \"absolute\":\n raise ValueError(\"Cannot use `absolute` backing file with symbols.\")\n\n if backing_file == \"hybrid\":\n # If no explicit backing file is specified, we try resolving the symbol in the main map\n filtered_maps = self.maps.filter(\"binary\")\n try:\n with extend_internal_debugger(self):\n return resolve_symbol_in_maps(symbol, filtered_maps)\n except ValueError:\n liblog.warning(\n f\"No backing file specified for the symbol `{symbol}`. Resolving the symbol in ALL the maps (slow!)\",\n )\n\n # Otherwise, we resolve the symbol in all the maps: as this can be slow,\n # we issue a warning with the file containing it\n maps = self.maps\n with extend_internal_debugger(self):\n address = resolve_symbol_in_maps(symbol, maps)\n\n filtered_maps = self.maps.filter(address)\n if len(filtered_maps) != 1:\n # Shouldn't happen, but you never know...\n raise RuntimeError(\n \"The symbol address is present in zero or multiple backing files. 
Please specify the correct backing file.\",\n )\n liblog.warning(\n f\"Symbol `{symbol}` found in `{filtered_maps[0].backing_file}`, \"\n f\"specify it manually as the backing file for better performance.\",\n )\n\n return address\n\n if backing_file in [\"binary\", self._process_name]:\n backing_file = self._process_full_path\n\n filtered_maps = self.maps.filter(backing_file)\n\n with extend_internal_debugger(self):\n return resolve_symbol_in_maps(symbol, filtered_maps)\n\n @property\n def symbols(self: InternalDebugger) -> SymbolList[Symbol]:\n \"\"\"Get the symbols of the process.\"\"\"\n self._ensure_process_stopped()\n backing_files = {vmap.backing_file for vmap in self.maps}\n with extend_internal_debugger(self):\n return get_all_symbols(backing_files)\n\n def _background_ensure_process_stopped(self: InternalDebugger) -> None:\n \"\"\"Validates the state of the process.\"\"\"\n # There is no case where this should ever happen, but...\n if self._is_migrated_to_gdb:\n raise RuntimeError(\"Cannot execute this command after migrating to GDB.\")\n\n @background_alias(_background_ensure_process_stopped)\n def _ensure_process_stopped(self: InternalDebugger) -> None:\n \"\"\"Validates the state of the process.\"\"\"\n if self._is_migrated_to_gdb:\n raise RuntimeError(\"Cannot execute this command after migrating to GDB.\")\n\n if not self.running:\n return\n\n if self.auto_interrupt_on_command:\n self.interrupt()\n\n self._join_and_check_status()\n\n @background_alias(_background_ensure_process_stopped)\n def _ensure_process_stopped_regs(self: InternalDebugger) -> None:\n \"\"\"Validates the state of the process. 
This is designed to be used by register-related commands.\"\"\"\n        if self._is_migrated_to_gdb:\n            raise RuntimeError(\"Cannot execute this command after migrating to GDB.\")\n\n        if not self.is_debugging and not self.threads[0].dead:\n            # The process is not being debugged, we cannot access registers\n            # We can still access registers if the process is dead to guarantee post-mortem analysis\n            raise RuntimeError(\"The process is not being debugged, cannot access registers. Check your script.\")\n\n        if not self.running:\n            return\n\n        if self.auto_interrupt_on_command:\n            self.interrupt()\n\n        self._join_and_check_status()\n\n    def _is_in_background(self: InternalDebugger) -> bool:\n        return current_thread() == self.__polling_thread\n\n    def __polling_thread_function(self: InternalDebugger) -> None:\n        \"\"\"This function is run in a thread. It is used to poll the process for state change.\"\"\"\n        while True:\n            # Wait for the main thread to signal a command to execute\n            command, args = self.__polling_thread_command_queue.get()\n\n            if command == THREAD_TERMINATE:\n                # Signal that the command has been executed\n                self.__polling_thread_command_queue.task_done()\n                return\n\n            # Execute the command\n            try:\n                return_value = command(*args)\n            except BaseException as e:\n                return_value = e\n\n            if return_value is not None:\n                self.__polling_thread_response_queue.put(return_value)\n\n            # Signal that the command has been executed\n            self.__polling_thread_command_queue.task_done()\n\n            if return_value is not None:\n                self.__polling_thread_response_queue.join()\n\n    def _join_and_check_status(self: InternalDebugger) -> None:\n        # Wait for the background thread to signal \"task done\" before returning\n        # We don't want any asynchronous behaviour here\n        self.__polling_thread_command_queue.join()\n\n        # Check for any exceptions raised by the background thread\n        if not self.__polling_thread_response_queue.empty():\n            response = self.__polling_thread_response_queue.get()\n            
self.__polling_thread_response_queue.task_done()\n if response is not None:\n raise response\n\n @functools.cached_property\n def _process_full_path(self: InternalDebugger) -> str:\n \"\"\"Get the full path of the process.\n\n Returns:\n str: the full path of the process.\n \"\"\"\n return str(Path(f\"/proc/{self.process_id}/exe\").readlink())\n\n @functools.cached_property\n def _process_name(self: InternalDebugger) -> str:\n \"\"\"Get the name of the process.\n\n Returns:\n str: the name of the process.\n \"\"\"\n with Path(f\"/proc/{self.process_id}/comm\").open() as f:\n return f.read().strip()\n\n def __threaded_run(self: InternalDebugger, redirect_pipes: bool) -> None:\n liblog.debugger(\"Starting process %s.\", self.argv[0])\n self.debugging_interface.run(redirect_pipes)\n\n self.set_stopped()\n\n def __threaded_attach(self: InternalDebugger, pid: int) -> None:\n liblog.debugger(\"Attaching to process %d.\", pid)\n self.debugging_interface.attach(pid)\n\n self.set_stopped()\n\n def __threaded_detach(self: InternalDebugger) -> None:\n liblog.debugger(\"Detaching from process %d.\", self.process_id)\n self.debugging_interface.detach()\n\n self.set_stopped()\n\n def __threaded_kill(self: InternalDebugger) -> None:\n if self.argv:\n liblog.debugger(\n \"Killing process %s (%d).\",\n self.argv[0],\n self.process_id,\n )\n else:\n liblog.debugger(\"Killing process %d.\", self.process_id)\n self.debugging_interface.kill()\n\n def __threaded_cont(self: InternalDebugger) -> None:\n if self.argv:\n liblog.debugger(\n \"Continuing process %s (%d).\",\n self.argv[0],\n self.process_id,\n )\n else:\n liblog.debugger(\"Continuing process %d.\", self.process_id)\n\n self.set_running()\n self.debugging_interface.cont()\n\n def __threaded_wait(self: InternalDebugger) -> None:\n if self.argv:\n liblog.debugger(\n \"Waiting for process %s (%d) to stop.\",\n self.argv[0],\n self.process_id,\n )\n else:\n liblog.debugger(\"Waiting for process %d to stop.\", self.process_id)\n\n 
while True:\n if self.threads[0].dead:\n # All threads are dead\n liblog.debugger(\"All threads dead\")\n break\n self.resume_context.resume = True\n self.debugging_interface.wait()\n if self.resume_context.resume:\n self.debugging_interface.cont()\n else:\n break\n self.set_stopped()\n\n def __threaded_breakpoint(self: InternalDebugger, bp: Breakpoint) -> None:\n liblog.debugger(\"Setting breakpoint at 0x%x.\", bp.address)\n self.debugging_interface.set_breakpoint(bp)\n\n def __threaded_catch_signal(self: InternalDebugger, catcher: SignalCatcher) -> None:\n liblog.debugger(\n f\"Setting the catcher for signal {resolve_signal_name(catcher.signal_number)} ({catcher.signal_number}).\",\n )\n self.debugging_interface.set_signal_catcher(catcher)\n\n def __threaded_handle_syscall(self: InternalDebugger, handler: SyscallHandler) -> None:\n liblog.debugger(f\"Setting the handler for syscall {handler.syscall_number}.\")\n self.debugging_interface.set_syscall_handler(handler)\n\n def __threaded_unhandle_syscall(self: InternalDebugger, handler: SyscallHandler) -> None:\n liblog.debugger(f\"Unsetting the handler for syscall {handler.syscall_number}.\")\n self.debugging_interface.unset_syscall_handler(handler)\n\n def __threaded_step(self: InternalDebugger, thread: ThreadContext) -> None:\n liblog.debugger(\"Stepping thread %s.\", thread.thread_id)\n self.debugging_interface.step(thread)\n self.set_running()\n\n def __threaded_step_until(\n self: InternalDebugger,\n thread: ThreadContext,\n address: int,\n max_steps: int,\n ) -> None:\n liblog.debugger(\"Stepping thread %s until 0x%x.\", thread.thread_id, address)\n self.debugging_interface.step_until(thread, address, max_steps)\n self.set_stopped()\n\n def __threaded_finish(self: InternalDebugger, thread: ThreadContext, heuristic: str) -> None:\n prefix = heuristic.capitalize()\n\n liblog.debugger(f\"{prefix} finish on thread %s\", thread.thread_id)\n self.debugging_interface.finish(thread, heuristic=heuristic)\n\n 
self.set_stopped()\n\n def __threaded_next(self: InternalDebugger, thread: ThreadContext) -> None:\n liblog.debugger(\"Next on thread %s.\", thread.thread_id)\n self.debugging_interface.next(thread)\n self.set_stopped()\n\n def __threaded_gdb(self: InternalDebugger) -> None:\n self.debugging_interface.migrate_to_gdb()\n\n def __threaded_migrate_from_gdb(self: InternalDebugger) -> None:\n self.debugging_interface.migrate_from_gdb()\n\n def __threaded_peek_memory(self: InternalDebugger, address: int) -> bytes | BaseException:\n value = self.debugging_interface.peek_memory(address)\n return value.to_bytes(get_platform_gp_register_size(libcontext.platform), sys.byteorder)\n\n def __threaded_poke_memory(self: InternalDebugger, address: int, data: bytes) -> None:\n int_data = int.from_bytes(data, sys.byteorder)\n self.debugging_interface.poke_memory(address, int_data)\n\n def __threaded_fetch_fp_registers(self: InternalDebugger, registers: Registers) -> None:\n self.debugging_interface.fetch_fp_registers(registers)\n\n def __threaded_flush_fp_registers(self: InternalDebugger, registers: Registers) -> None:\n self.debugging_interface.flush_fp_registers(registers)\n\n @background_alias(__threaded_peek_memory)\n def _peek_memory(self: InternalDebugger, address: int) -> bytes:\n \"\"\"Reads memory from the process.\"\"\"\n if not self.is_debugging:\n raise RuntimeError(\"Process not running, cannot access memory.\")\n\n if self.running:\n # Reading memory while the process is running could lead to concurrency issues\n # and corrupted values\n liblog.debugger(\n \"Process is running. 
Waiting for it to stop before reading memory.\",\n            )\n\n        self._ensure_process_stopped()\n\n        self.__polling_thread_command_queue.put(\n            (self.__threaded_peek_memory, (address,)),\n        )\n\n        # We cannot call _join_and_check_status here, as we need the return value which might not be an exception\n        self.__polling_thread_command_queue.join()\n\n        value = self.__polling_thread_response_queue.get()\n        self.__polling_thread_response_queue.task_done()\n\n        if isinstance(value, BaseException):\n            raise value\n\n        return value\n\n    def _fast_read_memory(self: InternalDebugger, address: int, size: int) -> bytes:\n        \"\"\"Reads memory from the process.\"\"\"\n        if not self.is_debugging:\n            raise RuntimeError(\"Process not running, cannot access memory.\")\n\n        if self.running:\n            # Reading memory while the process is running could lead to concurrency issues\n            # and corrupted values\n            liblog.debugger(\n                \"Process is running. Waiting for it to stop before reading memory.\",\n            )\n\n        self._ensure_process_stopped()\n\n        return self._process_memory_manager.read(address, size)\n\n    @background_alias(__threaded_poke_memory)\n    def _poke_memory(self: InternalDebugger, address: int, data: bytes) -> None:\n        \"\"\"Writes memory to the process.\"\"\"\n        if not self.is_debugging:\n            raise RuntimeError(\"Process not running, cannot access memory.\")\n\n        if self.running:\n            # Writing memory while the process is running could lead to concurrency issues\n            # and corrupted values\n            liblog.debugger(\n                \"Process is running. 
Waiting for it to stop before writing to memory.\",\n            )\n\n        self._ensure_process_stopped()\n\n        self.__polling_thread_command_queue.put(\n            (self.__threaded_poke_memory, (address, data)),\n        )\n\n        self._join_and_check_status()\n\n    def _fast_write_memory(self: InternalDebugger, address: int, data: bytes) -> None:\n        \"\"\"Writes memory to the process.\"\"\"\n        if not self.is_debugging:\n            raise RuntimeError(\"Process not running, cannot access memory.\")\n\n        if self.running:\n            # Writing memory while the process is running could lead to concurrency issues\n            # and corrupted values\n            liblog.debugger(\n                \"Process is running. Waiting for it to stop before writing to memory.\",\n            )\n\n        self._ensure_process_stopped()\n\n        self._process_memory_manager.write(address, data)\n\n    @background_alias(__threaded_fetch_fp_registers)\n    def _fetch_fp_registers(self: InternalDebugger, registers: Registers) -> None:\n        \"\"\"Fetches the floating point registers of a thread.\"\"\"\n        if not self.is_debugging:\n            raise RuntimeError(\"Process not running, cannot read floating-point registers.\")\n\n        self._ensure_process_stopped()\n\n        self.__polling_thread_command_queue.put(\n            (self.__threaded_fetch_fp_registers, (registers,)),\n        )\n\n        self._join_and_check_status()\n\n    @background_alias(__threaded_flush_fp_registers)\n    def _flush_fp_registers(self: InternalDebugger, registers: Registers) -> None:\n        \"\"\"Flushes the floating point registers of a thread.\"\"\"\n        if not self.is_debugging:\n            raise RuntimeError(\"Process not running, cannot write floating-point registers.\")\n\n        self._ensure_process_stopped()\n\n        self.__polling_thread_command_queue.put(\n            (self.__threaded_flush_fp_registers, (registers,)),\n        )\n\n        self._join_and_check_status()\n\n    def _enable_antidebug_escaping(self: InternalDebugger) -> None:\n        \"\"\"Enables the anti-debugging escape mechanism.\"\"\"\n        handler = SyscallHandler(\n            resolve_syscall_number(self.arch, \"ptrace\"),\n            on_enter_ptrace,\n            on_exit_ptrace,\n            None,\n            None,\n        )\n\n        
link_to_internal_debugger(handler, self)\n\n        self.__polling_thread_command_queue.put((self.__threaded_handle_syscall, (handler,)))\n\n        self._join_and_check_status()\n\n        # Set up hidden state for the handler\n        handler._traceme_called = False\n        handler._command = None\n\n    @property\n    def running(self: InternalDebugger) -> bool:\n        \"\"\"Get the state of the process.\n\n        Returns:\n            bool: True if the process is running, False otherwise.\n        \"\"\"\n        return self._is_running\n\n    def set_running(self: InternalDebugger) -> None:\n        \"\"\"Set the state of the process to running.\"\"\"\n        self._is_running = True\n\n    def set_stopped(self: InternalDebugger) -> None:\n        \"\"\"Set the state of the process to stopped.\"\"\"\n        self._is_running = False\n\n    @change_state_function_process\n    def create_snapshot(self: Debugger, level: str = \"base\", name: str | None = None) -> ProcessSnapshot:\n        \"\"\"Create a snapshot of the current process state.\n\n        Snapshot levels:\n        - base: Registers\n        - writable: Registers, writable memory contents\n        - full: Registers, all memory contents\n\n        Args:\n            level (str): The level of the snapshot.\n            name (str, optional): The name of the snapshot. 
Defaults to None.\n\n        Returns:\n            ProcessSnapshot: The created snapshot.\n        \"\"\"\n        self._ensure_process_stopped()\n        return ProcessSnapshot(self, level, name)\n\n    def load_snapshot(self: Debugger, file_path: str) -> Snapshot:\n        \"\"\"Load a snapshot of the thread / process state.\n\n        Args:\n            file_path (str): The path to the snapshot file.\n        \"\"\"\n        loaded_snap = self.serialization_helper.load(file_path)\n\n        # Log the loading of the snapshot\n        named_addition = \" named \" + loaded_snap.name if loaded_snap.name is not None else \"\"\n        liblog.debugger(\n            f\"Loaded {type(loaded_snap)} snapshot {loaded_snap.snapshot_id} of level {loaded_snap.level} from file {file_path}{named_addition}\"\n        )\n\n        return loaded_snap\n\n    def notify_snaphot_taken(self: InternalDebugger) -> None:\n        \"\"\"Notify the debugger that a snapshot has been taken.\"\"\"\n        self._snapshot_count += 1\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.__polling_thread","title":"__polling_thread instance-attribute","text":"The background thread used to poll the process for state change.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.__polling_thread_command_queue","title":"__polling_thread_command_queue = Queue() instance-attribute","text":"The queue used to send commands to the background thread.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.__polling_thread_response_queue","title":"__polling_thread_response_queue = Queue() instance-attribute","text":"The queue used to receive responses from the background thread.
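The two queues above implement a synchronous command/response handshake: the main thread enqueues a `(callable, args)` pair, the polling thread executes it and forwards any result or raised exception through the response queue, and the caller joins the command queue before checking for an error to re-raise. A minimal stand-alone sketch of this pattern, using only the standard library (names are illustrative, not libdebug's internals):

```python
from queue import Queue
from threading import Thread

THREAD_TERMINATE = object()  # sentinel command that stops the worker loop

command_queue: Queue = Queue()
response_queue: Queue = Queue()

def polling_thread_function() -> None:
    # Background loop: execute commands, ship results/exceptions back to the caller
    while True:
        command, args = command_queue.get()
        if command is THREAD_TERMINATE:
            command_queue.task_done()
            return
        try:
            return_value = command(*args)
        except BaseException as e:  # forwarded to the caller, not swallowed
            return_value = e
        if return_value is not None:
            response_queue.put(return_value)
        command_queue.task_done()

def join_and_check_status() -> None:
    # Wait synchronously for the command to finish, then re-raise any error
    command_queue.join()
    if not response_queue.empty():
        response = response_queue.get()
        response_queue.task_done()
        if isinstance(response, BaseException):
            raise response

worker = Thread(target=polling_thread_function, daemon=True)
worker.start()

results = []
command_queue.put((results.append, ("stepped",)))
join_and_check_status()
assert results == ["stepped"]

def boom() -> None:
    raise RuntimeError("command failed in background")

command_queue.put((boom, ()))
try:
    join_and_check_status()
except RuntimeError as e:
    print(f"re-raised: {e}")

command_queue.put((THREAD_TERMINATE, ()))
worker.join()
```

Because the worker calls `task_done()` only after enqueueing the response, the `join()` in `join_and_check_status` guarantees the result (or exception) is already available when the caller inspects the response queue.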
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._fast_memory","title":"_fast_memory instance-attribute","text":"The memory view of the debugged process using the fast memory access method.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._is_migrated_to_gdb","title":"_is_migrated_to_gdb = False instance-attribute","text":"A flag that indicates if the debuggee was migrated to GDB.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._is_running","title":"_is_running = False instance-attribute","text":"The overall state of the debugged process. True if the process is running, False otherwise.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._process_full_path","title":"_process_full_path cached property","text":"Get the full path of the process.
Returns:
str: the full path of the process.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._process_name","title":"_process_name cached property","text":"Get the name of the process.
Returns:
str: the name of the process.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._slow_memory","title":"_slow_memory instance-attribute","text":"The memory view of the debugged process using the slow memory access method.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._snapshot_count","title":"_snapshot_count = 0 instance-attribute","text":"The counter used to assign an ID to each snapshot.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.arch","title":"arch = map_arch(libcontext.platform) instance-attribute","text":"The architecture of the debugged process.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.argv","title":"argv = [] instance-attribute","text":"The command line arguments of the debugged process.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.aslr_enabled","title":"aslr_enabled = False instance-attribute","text":"A flag that indicates if ASLR is enabled or not.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.auto_interrupt_on_command","title":"auto_interrupt_on_command instance-attribute","text":"A flag that indicates if the debugger should automatically interrupt the debugged process when a command is issued.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.autoreach_entrypoint","title":"autoreach_entrypoint = True instance-attribute","text":"A flag that indicates if the debugger should automatically reach the entry point of the debugged process.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.breakpoints","title":"breakpoints = {} instance-attribute","text":"A dictionary of all the breakpoints set on the process. Key: the address of the breakpoint.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.caught_signals","title":"caught_signals = {} instance-attribute","text":"A dictionary of all the signals caught in the process. Key: the signal number.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.children","title":"children = [] instance-attribute","text":"The list of child debuggers.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.debugger","title":"debugger instance-attribute","text":"The debugger object.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.debugging_interface","title":"debugging_interface instance-attribute","text":"The debugging interface used to communicate with the debugged process.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.env","title":"env = {} instance-attribute","text":"The environment variables of the debugged process.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.escape_antidebug","title":"escape_antidebug = False instance-attribute","text":"A flag that indicates if the debugger should escape anti-debugging techniques.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.fast_memory","title":"fast_memory = True instance-attribute","text":"A flag that indicates if the debugger should use a faster memory access method.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.follow_children","title":"follow_children instance-attribute","text":"A flag that indicates if the debugger should follow child processes, creating a new debugger for each one.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.handled_syscalls","title":"handled_syscalls = {} instance-attribute","text":"A dictionary of all the syscalls handled in the process. Key: the syscall number.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.instanced","title":"instanced = False class-attribute instance-attribute","text":"Whether the process was started and has not been killed yet.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.is_debugging","title":"is_debugging = False class-attribute instance-attribute","text":"Whether the debugger is currently debugging a process.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.kill_on_exit","title":"kill_on_exit = True instance-attribute","text":"A flag that indicates if the debugger should kill the debugged process when it exits.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.maps","title":"maps property","text":"Returns the memory maps of the process.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.memory","title":"memory property","text":"The memory view of the debugged process.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.pipe_manager","title":"pipe_manager = None instance-attribute","text":"The PipeManager used to communicate with the debugged process.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.pprint_syscalls","title":"pprint_syscalls = False instance-attribute","text":"A flag that indicates if the debugger should pretty print syscalls.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.process_id","title":"process_id = 0 instance-attribute","text":"The PID of the debugged process.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.resume_context","title":"resume_context = ResumeContext() instance-attribute","text":"Context that indicates if the debugger should resume the debugged process.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.running","title":"running property","text":"Get the state of the process.
Returns:
Name Type Descriptionbool bool True if the process is running, False otherwise.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.signals_to_block","title":"signals_to_block = [] instance-attribute","text":"The signals to not forward to the process.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.stdin_settings_backup","title":"stdin_settings_backup = [] instance-attribute","text":"The backup of the stdin settings. Used to restore the original settings after possible conflicts due to the pipe manager interacactive mode.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.symbols","title":"symbols property","text":"Get the symbols of the process.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.syscalls_to_not_pprint","title":"syscalls_to_not_pprint = None instance-attribute","text":"The syscalls to not pretty print.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.syscalls_to_pprint","title":"syscalls_to_pprint = None instance-attribute","text":"The syscalls to pretty print.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.threads","title":"threads = [] instance-attribute","text":"A list of all the threads of the debugged process.
"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.__init__","title":"__init__()","text":"Initialize the context.
Source code inlibdebug/debugger/internal_debugger.py def __init__(self: InternalDebugger) -> None:\n \"\"\"Initialize the context.\"\"\"\n # These must be reinitialized on every call to \"debugger\"\n self.aslr_enabled = False\n self.autoreach_entrypoint = True\n self.argv = []\n self.env = {}\n self.escape_antidebug = False\n self.breakpoints = {}\n self.handled_syscalls = {}\n self.caught_signals = {}\n self.syscalls_to_pprint = None\n self.syscalls_to_not_pprint = None\n self.signals_to_block = []\n self.pprint_syscalls = False\n self.pipe_manager = None\n self.process_id = 0\n self.threads = []\n self.instanced = False\n self.is_debugging = False\n self._is_running = False\n self._is_migrated_to_gdb = False\n self.resume_context = ResumeContext()\n self.stdin_settings_backup = []\n self.arch = map_arch(libcontext.platform)\n self.kill_on_exit = True\n self._process_memory_manager = ProcessMemoryManager()\n self.fast_memory = True\n self.__polling_thread_command_queue = Queue()\n self.__polling_thread_response_queue = Queue()\n self._snapshot_count = 0\n self.serialization_helper = SerializationHelper()\n self.children = []\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.__polling_thread_function","title":"__polling_thread_function()","text":"This function is run in a thread. It is used to poll the process for state change.
Source code inlibdebug/debugger/internal_debugger.py def __polling_thread_function(self: InternalDebugger) -> None:\n \"\"\"This function is run in a thread. It is used to poll the process for state change.\"\"\"\n while True:\n # Wait for the main thread to signal a command to execute\n command, args = self.__polling_thread_command_queue.get()\n\n if command == THREAD_TERMINATE:\n # Signal that the command has been executed\n self.__polling_thread_command_queue.task_done()\n return\n\n # Execute the command\n try:\n return_value = command(*args)\n except BaseException as e:\n return_value = e\n\n if return_value is not None:\n self.__polling_thread_response_queue.put(return_value)\n\n # Signal that the command has been executed\n self.__polling_thread_command_queue.task_done()\n\n if return_value is not None:\n self.__polling_thread_response_queue.join()\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._auto_detect_terminal","title":"_auto_detect_terminal()","text":"Auto-detects the terminal.
Source code inlibdebug/debugger/internal_debugger.py def _auto_detect_terminal(self: InternalDebugger) -> None:\n \"\"\"Auto-detects the terminal.\"\"\"\n try:\n process = Process(self.process_id)\n while process:\n pname = process.name().lower()\n if terminal_command := TerminalTypes.get_command(pname):\n libcontext.terminal = terminal_command\n liblog.debugger(f\"Auto-detected terminal: {libcontext.terminal}\")\n process = process.parent()\n except Error:\n pass\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._background_ensure_process_stopped","title":"_background_ensure_process_stopped()","text":"Validates the state of the process.
Source code inlibdebug/debugger/internal_debugger.py def _background_ensure_process_stopped(self: InternalDebugger) -> None:\n \"\"\"Validates the state of the process.\"\"\"\n # There is no case where this should ever happen, but...\n if self._is_migrated_to_gdb:\n raise RuntimeError(\"Cannot execute this command after migrating to GDB.\")\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._background_finish","title":"_background_finish(thread, heuristic='backtrace')","text":"Continues execution until the current function returns or the process stops.
The command requires a heuristic to determine the end of the function. The available heuristics are: - backtrace: The debugger will place a breakpoint on the saved return address found on the stack and continue execution on all threads. - step-mode: The debugger will step on the specified thread until the current function returns. This will be slower.
Parameters:
Name Type Description Defaultthread ThreadContext The thread to finish.
requiredheuristic str The heuristic to use. Defaults to \"backtrace\".
'backtrace' Source code in libdebug/debugger/internal_debugger.py def _background_finish(\n self: InternalDebugger,\n thread: ThreadContext,\n heuristic: str = \"backtrace\",\n) -> None:\n \"\"\"Continues execution until the current function returns or the process stops.\n\n The command requires a heuristic to determine the end of the function. The available heuristics are:\n - `backtrace`: The debugger will place a breakpoint on the saved return address found on the stack and continue execution on all threads.\n - `step-mode`: The debugger will step on the specified thread until the current function returns. This will be slower.\n\n Args:\n thread (ThreadContext): The thread to finish.\n heuristic (str, optional): The heuristic to use. Defaults to \"backtrace\".\n \"\"\"\n self.__threaded_finish(thread, heuristic)\n\n # At this point, we need to continue the execution of the callback from which the step was called\n self.resume_context.resume = True\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._background_invalid_call","title":"_background_invalid_call(*_, **__)","text":"Raises an error when an invalid call is made in background mode.
Source code inlibdebug/debugger/internal_debugger.py def _background_invalid_call(self: InternalDebugger, *_: ..., **__: ...) -> None:\n \"\"\"Raises an error when an invalid call is made in background mode.\"\"\"\n raise RuntimeError(\"This method is not available in a callback.\")\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._background_next","title":"_background_next(thread)","text":"Executes the next instruction of the process. If the instruction is a call, the debugger will continue until the called function returns.
Source code inlibdebug/debugger/internal_debugger.py def _background_next(\n self: InternalDebugger,\n thread: ThreadContext,\n) -> None:\n \"\"\"Executes the next instruction of the process. If the instruction is a call, the debugger will continue until the called function returns.\"\"\"\n self.__threaded_next(thread)\n\n # At this point, we need to continue the execution of the callback from which the step was called\n self.resume_context.resume = True\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._background_step","title":"_background_step(thread)","text":"Executes a single instruction of the process.
Parameters:
Name Type Description Defaultthread ThreadContext The thread to step.
required Source code inlibdebug/debugger/internal_debugger.py def _background_step(self: InternalDebugger, thread: ThreadContext) -> None:\n \"\"\"Executes a single instruction of the process.\n\n Args:\n thread (ThreadContext): The thread to step. Defaults to None.\n \"\"\"\n self.__threaded_step(thread)\n self.__threaded_wait()\n\n # At this point, we need to continue the execution of the callback from which the step was called\n self.resume_context.resume = True\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._background_step_until","title":"_background_step_until(thread, position, max_steps=-1, file='hybrid')","text":"Executes instructions of the process until the specified location is reached.
Parameters:
Name Type Description Defaultthread ThreadContext The thread to step.
requiredposition int | str The location to reach.
requiredmax_steps int The maximum number of steps to execute. Defaults to -1.
-1 file str The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).
'hybrid' Source code in libdebug/debugger/internal_debugger.py def _background_step_until(\n self: InternalDebugger,\n thread: ThreadContext,\n position: int | str,\n max_steps: int = -1,\n file: str = \"hybrid\",\n) -> None:\n \"\"\"Executes instructions of the process until the specified location is reached.\n\n Args:\n thread (ThreadContext): The thread to step. Defaults to None.\n position (int | bytes): The location to reach.\n max_steps (int, optional): The maximum number of steps to execute. Defaults to -1.\n file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).\n \"\"\"\n if isinstance(position, str):\n address = self.resolve_symbol(position, file)\n else:\n address = self.resolve_address(position, file)\n\n self.__threaded_step_until(thread, address, max_steps)\n\n # At this point, we need to continue the execution of the callback from which the step was called\n self.resume_context.resume = True\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._craft_gdb_migration_command","title":"_craft_gdb_migration_command(migrate_breakpoints)","text":"Crafts the command to migrate to GDB.
Parameters:
Name Type Description Defaultmigrate_breakpoints bool Whether to migrate the breakpoints.
requiredReturns:
Name Type Descriptionstr str The command to migrate to GDB.
Source code inlibdebug/debugger/internal_debugger.py def _craft_gdb_migration_command(self: InternalDebugger, migrate_breakpoints: bool) -> str:\n \"\"\"Crafts the command to migrate to GDB.\n\n Args:\n migrate_breakpoints (bool): Whether to migrate the breakpoints.\n\n Returns:\n str: The command to migrate to GDB.\n \"\"\"\n gdb_command = f'/bin/gdb -q --pid {self.process_id} -ex \"source {GDB_GOBACK_LOCATION} \" -ex \"ni\" -ex \"ni\"'\n\n if not migrate_breakpoints:\n return gdb_command\n\n for bp in self.breakpoints.values():\n if bp.enabled:\n if bp.hardware and bp.condition == \"rw\":\n gdb_command += f' -ex \"awatch *(int{bp.length * 8}_t *) {bp.address:#x}\"'\n elif bp.hardware and bp.condition == \"w\":\n gdb_command += f' -ex \"watch *(int{bp.length * 8}_t *) {bp.address:#x}\"'\n elif bp.hardware:\n gdb_command += f' -ex \"hb *{bp.address:#x}\"'\n else:\n gdb_command += f' -ex \"b *{bp.address:#x}\"'\n\n if self.threads[0].instruction_pointer == bp.address and not bp.hardware:\n # We have to enqueue an additional continue\n gdb_command += ' -ex \"ni\"'\n\n return gdb_command\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._craft_gdb_migration_file","title":"_craft_gdb_migration_file(migrate_breakpoints)","text":"Crafts the file to migrate to GDB.
Parameters:
Name Type Description Defaultmigrate_breakpoints bool Whether to migrate the breakpoints.
requiredReturns:
Name Type Descriptionstr str The path to the file.
Source code inlibdebug/debugger/internal_debugger.py def _craft_gdb_migration_file(self: InternalDebugger, migrate_breakpoints: bool) -> str:\n \"\"\"Crafts the file to migrate to GDB.\n\n Args:\n migrate_breakpoints (bool): Whether to migrate the breakpoints.\n\n Returns:\n str: The path to the file.\n \"\"\"\n # Different terminals accept what to run in different ways. To make this work with all terminals, we need to\n # create a temporary script that will run the command. This script will be executed by the terminal.\n command = self._craft_gdb_migration_command(migrate_breakpoints)\n with NamedTemporaryFile(delete=False, mode=\"w\", suffix=\".sh\") as temp_file:\n temp_file.write(\"#!/bin/bash\\n\")\n temp_file.write(command)\n script_path = temp_file.name\n\n # Make the script executable\n Path.chmod(Path(script_path), 0o755)\n return script_path\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._enable_antidebug_escaping","title":"_enable_antidebug_escaping()","text":"Enables the anti-debugging escape mechanism.
Source code inlibdebug/debugger/internal_debugger.py def _enable_antidebug_escaping(self: InternalDebugger) -> None:\n \"\"\"Enables the anti-debugging escape mechanism.\"\"\"\n handler = SyscallHandler(\n resolve_syscall_number(self.arch, \"ptrace\"),\n on_enter_ptrace,\n on_exit_ptrace,\n None,\n None,\n )\n\n link_to_internal_debugger(handler, self)\n\n self.__polling_thread_command_queue.put((self.__threaded_handle_syscall, (handler,)))\n\n self._join_and_check_status()\n\n # Set up hidden state for the handler\n handler._traceme_called = False\n handler._command = None\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._ensure_process_stopped","title":"_ensure_process_stopped()","text":"Validates the state of the process.
Source code inlibdebug/debugger/internal_debugger.py @background_alias(_background_ensure_process_stopped)\ndef _ensure_process_stopped(self: InternalDebugger) -> None:\n \"\"\"Validates the state of the process.\"\"\"\n if self._is_migrated_to_gdb:\n raise RuntimeError(\"Cannot execute this command after migrating to GDB.\")\n\n if not self.running:\n return\n\n if self.auto_interrupt_on_command:\n self.interrupt()\n\n self._join_and_check_status()\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._ensure_process_stopped_regs","title":"_ensure_process_stopped_regs()","text":"Validates the state of the process. This is designed to be used by register-related commands.
Source code inlibdebug/debugger/internal_debugger.py @background_alias(_background_ensure_process_stopped)\ndef _ensure_process_stopped_regs(self: InternalDebugger) -> None:\n \"\"\"Validates the state of the process. This is designed to be used by register-related commands.\"\"\"\n if self._is_migrated_to_gdb:\n raise RuntimeError(\"Cannot execute this command after migrating to GDB.\")\n\n if not self.is_debugging and not self.threads[0].dead:\n # The process is not being debugged, we cannot access registers\n # We can still access registers if the process is dead to guarantee post-mortem analysis\n raise RuntimeError(\"The process is not being debugged, cannot access registers. Check your script.\")\n\n if not self.running:\n return\n\n if self.auto_interrupt_on_command:\n self.interrupt()\n\n self._join_and_check_status()\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._fast_read_memory","title":"_fast_read_memory(address, size)","text":"Reads memory from the process.
Source code inlibdebug/debugger/internal_debugger.py def _fast_read_memory(self: InternalDebugger, address: int, size: int) -> bytes:\n \"\"\"Reads memory from the process.\"\"\"\n if not self.is_debugging:\n raise RuntimeError(\"Process not running, cannot access memory.\")\n\n if self.running:\n # Reading memory while the process is running could lead to concurrency issues\n # and corrupted values\n liblog.debugger(\n \"Process is running. Waiting for it to stop before reading memory.\",\n )\n\n self._ensure_process_stopped()\n\n return self._process_memory_manager.read(address, size)\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._fast_write_memory","title":"_fast_write_memory(address, data)","text":"Writes memory to the process.
Source code inlibdebug/debugger/internal_debugger.py def _fast_write_memory(self: InternalDebugger, address: int, data: bytes) -> None:\n \"\"\"Writes memory to the process.\"\"\"\n if not self.is_debugging:\n raise RuntimeError(\"Process not running, cannot access memory.\")\n\n if self.running:\n # Writing memory while the process is running could lead to concurrency issues\n # and corrupted values\n liblog.debugger(\n \"Process is running. Waiting for it to stop before writing to memory.\",\n )\n\n self._ensure_process_stopped()\n\n self._process_memory_manager.write(address, data)\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._fetch_fp_registers","title":"_fetch_fp_registers(registers)","text":"Fetches the floating point registers of a thread.
Source code inlibdebug/debugger/internal_debugger.py @background_alias(__threaded_fetch_fp_registers)\ndef _fetch_fp_registers(self: InternalDebugger, registers: Registers) -> None:\n \"\"\"Fetches the floating point registers of a thread.\"\"\"\n if not self.is_debugging:\n raise RuntimeError(\"Process not running, cannot read floating-point registers.\")\n\n self._ensure_process_stopped()\n\n self.__polling_thread_command_queue.put(\n (self.__threaded_fetch_fp_registers, (registers,)),\n )\n\n self._join_and_check_status()\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._flush_fp_registers","title":"_flush_fp_registers(registers)","text":"Flushes the floating point registers of a thread.
Source code inlibdebug/debugger/internal_debugger.py @background_alias(__threaded_flush_fp_registers)\ndef _flush_fp_registers(self: InternalDebugger, registers: Registers) -> None:\n \"\"\"Flushes the floating point registers of a thread.\"\"\"\n if not self.is_debugging:\n raise RuntimeError(\"Process not running, cannot write floating-point registers.\")\n\n self._ensure_process_stopped()\n\n self.__polling_thread_command_queue.put(\n (self.__threaded_flush_fp_registers, (registers,)),\n )\n\n self._join_and_check_status()\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._open_gdb_in_new_process","title":"_open_gdb_in_new_process(script_path)","text":"Opens GDB in a new process following the configuration in libcontext.terminal.
Parameters:
Name Type Description Defaultscript_path str The path to the script to run in the terminal.
required Source code inlibdebug/debugger/internal_debugger.py def _open_gdb_in_new_process(self: InternalDebugger, script_path: str) -> None:\n \"\"\"Opens GDB in a new process following the configuration in libcontext.terminal.\n\n Args:\n script_path (str): The path to the script to run in the terminal.\n \"\"\"\n # Check if the terminal has been configured correctly\n try:\n check_call([*libcontext.terminal, \"uname\"], stderr=DEVNULL, stdout=DEVNULL)\n except (CalledProcessError, FileNotFoundError) as err:\n raise RuntimeError(\n \"Failed to open GDB in terminal. Check the terminal configuration in libcontext.terminal.\",\n ) from err\n\n self.__polling_thread_command_queue.put((self.__threaded_gdb, ()))\n self._join_and_check_status()\n\n # Create the command to open the terminal and run the script\n command = [*libcontext.terminal, script_path]\n\n # Open GDB in a new terminal\n terminal_pid = Popen(command).pid\n\n # This is the command line that we are looking for\n cmdline_target = [\"/bin/bash\", script_path]\n\n self._wait_for_gdb(terminal_pid, cmdline_target)\n\n def wait_for_termination() -> None:\n liblog.debugger(\"Waiting for GDB process to terminate...\")\n\n for proc in process_iter():\n try:\n cmdline = proc.cmdline()\n except ZombieProcess:\n # This is a zombie process, which psutil tracks but we cannot interact with\n continue\n\n if cmdline_target == cmdline:\n gdb_process = proc\n break\n else:\n raise RuntimeError(\"GDB process not found.\")\n\n while gdb_process.is_running() and gdb_process.status() != STATUS_ZOMBIE:\n # As the GDB process is in a different group, we do not have the authority to wait on it\n # So we must keep polling it until it is no longer running\n pass\n\n return wait_for_termination\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._open_gdb_in_shell","title":"_open_gdb_in_shell(script_path)","text":"Open GDB in the current shell.
Parameters:
Name Type Description Defaultscript_path str The path to the script to run in the terminal.
required Source code inlibdebug/debugger/internal_debugger.py def _open_gdb_in_shell(self: InternalDebugger, script_path: str) -> None:\n \"\"\"Open GDB in the current shell.\n\n Args:\n script_path (str): The path to the script to run in the terminal.\n \"\"\"\n self.__polling_thread_command_queue.put((self.__threaded_gdb, ()))\n self._join_and_check_status()\n\n gdb_pid = os.fork()\n\n if gdb_pid == 0: # This is the child process.\n os.execv(\"/bin/bash\", [\"/bin/bash\", script_path])\n raise RuntimeError(\"Failed to execute GDB.\")\n\n # This is the parent process.\n # Parent ignores SIGINT, so only GDB (child) receives it\n signal.signal(signal.SIGINT, signal.SIG_IGN)\n\n def wait_for_termination() -> None:\n # Wait for the child process to finish\n os.waitpid(gdb_pid, 0)\n\n # Reset the SIGINT behavior to default handling after child exits\n signal.signal(signal.SIGINT, signal.SIG_DFL)\n\n return wait_for_termination\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._peek_memory","title":"_peek_memory(address)","text":"Reads memory from the process.
Source code inlibdebug/debugger/internal_debugger.py @background_alias(__threaded_peek_memory)\ndef _peek_memory(self: InternalDebugger, address: int) -> bytes:\n \"\"\"Reads memory from the process.\"\"\"\n if not self.is_debugging:\n raise RuntimeError(\"Process not running, cannot access memory.\")\n\n if self.running:\n # Reading memory while the process is running could lead to concurrency issues\n # and corrupted values\n liblog.debugger(\n \"Process is running. Waiting for it to stop before reading memory.\",\n )\n\n self._ensure_process_stopped()\n\n self.__polling_thread_command_queue.put(\n (self.__threaded_peek_memory, (address,)),\n )\n\n # We cannot call _join_and_check_status here, as we need the return value which might not be an exception\n self.__polling_thread_command_queue.join()\n\n value = self.__polling_thread_response_queue.get()\n self.__polling_thread_response_queue.task_done()\n\n if isinstance(value, BaseException):\n raise value\n\n return value\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._poke_memory","title":"_poke_memory(address, data)","text":"Writes memory to the process.
Source code inlibdebug/debugger/internal_debugger.py @background_alias(__threaded_poke_memory)\ndef _poke_memory(self: InternalDebugger, address: int, data: bytes) -> None:\n \"\"\"Writes memory to the process.\"\"\"\n if not self.is_debugging:\n raise RuntimeError(\"Process not running, cannot access memory.\")\n\n if self.running:\n # Writing memory while the process is running could lead to concurrency issues\n # and corrupted values\n liblog.debugger(\n \"Process is running. Waiting for it to stop before writing to memory.\",\n )\n\n self._ensure_process_stopped()\n\n self.__polling_thread_command_queue.put(\n (self.__threaded_poke_memory, (address, data)),\n )\n\n self._join_and_check_status()\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._resume_from_gdb","title":"_resume_from_gdb()","text":"Resumes the process after migrating from GDB.
Source code inlibdebug/debugger/internal_debugger.py def _resume_from_gdb(self: InternalDebugger) -> None:\n \"\"\"Resumes the process after migrating from GDB.\"\"\"\n self.__polling_thread_command_queue.put((self.__threaded_migrate_from_gdb, ()))\n\n self._join_and_check_status()\n\n self._is_migrated_to_gdb = False\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger._wait_for_gdb","title":"_wait_for_gdb(terminal_pid, cmdline_target)","text":"Waits for GDB to open in the terminal.
Parameters:
Name Type Description Defaultterminal_pid int The PID of the terminal process.
requiredcmdline_target list[str] The command line that we are looking for.
required Source code inlibdebug/debugger/internal_debugger.py def _wait_for_gdb(self: InternalDebugger, terminal_pid: int, cmdline_target: list[str]) -> None:\n \"\"\"Waits for GDB to open in the terminal.\n\n Args:\n terminal_pid (int): The PID of the terminal process.\n cmdline_target (list[str]): The command line that we are looking for.\n \"\"\"\n # We need to wait for GDB to open in the terminal. However, different terminals have different behaviors\n # so we need to manually check if the terminal is still alive and if GDB has opened\n waiting_for_gdb = True\n terminal_alive = False\n scan_after_terminal_death = 0\n scan_after_terminal_death_max = 3\n while waiting_for_gdb:\n terminal_alive = False\n for proc in process_iter():\n try:\n cmdline = proc.cmdline()\n if cmdline == cmdline_target:\n waiting_for_gdb = False\n elif proc.pid == terminal_pid:\n terminal_alive = True\n except ZombieProcess:\n # This is a zombie process, which psutil tracks but we cannot interact with\n continue\n if not terminal_alive and waiting_for_gdb and scan_after_terminal_death < scan_after_terminal_death_max:\n # If the terminal has died, we need to wait a bit before we can be sure that GDB will not open.\n # Indeed, some terminals take different steps to open GDB. We must be sure to refresh the list\n # of processes. One extra iteration should be enough, but we will iterate more just to be sure.\n scan_after_terminal_death += 1\n elif not terminal_alive and waiting_for_gdb:\n # If the terminal has died and GDB has not opened, we are sure that GDB will not open\n raise RuntimeError(\"Failed to open GDB in terminal.\")\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.attach","title":"attach(pid)","text":"Attaches to an existing process.
Source code inlibdebug/debugger/internal_debugger.py def attach(self: InternalDebugger, pid: int) -> None:\n \"\"\"Attaches to an existing process.\"\"\"\n if self.is_debugging:\n liblog.debugger(\"Process already running, stopping it before restarting.\")\n self.kill()\n if self.threads:\n self.clear()\n self.debugging_interface.reset()\n\n self.instanced = True\n self.is_debugging = True\n\n if not self.__polling_thread_command_queue.empty():\n raise RuntimeError(\"Polling thread command queue not empty.\")\n\n self.__polling_thread_command_queue.put((self.__threaded_attach, (pid,)))\n\n self._join_and_check_status()\n\n self._process_memory_manager.open(self.process_id)\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.breakpoint","title":"breakpoint(position, hardware=False, condition='x', length=1, callback=None, file='hybrid')","text":"Sets a breakpoint at the specified location.
Parameters:
Name Type Description Defaultposition int | str The location of the breakpoint.
requiredhardware bool Whether the breakpoint should be hardware-assisted or purely software. Defaults to False.
False condition str The trigger condition for the breakpoint. Defaults to \"x\".
'x' length int The length of the breakpoint. Only for watchpoints. Defaults to 1.
1 callback None | bool | Callable[[ThreadContext, Breakpoint], None] A callback to be called when the breakpoint is hit. If True, an empty callback will be set. Defaults to None.
None file str The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).
'hybrid' Source code in libdebug/debugger/internal_debugger.py @background_alias(_background_invalid_call)\n@change_state_function_process\ndef breakpoint(\n self: InternalDebugger,\n position: int | str,\n hardware: bool = False,\n condition: str = \"x\",\n length: int = 1,\n callback: None | bool | Callable[[ThreadContext, Breakpoint], None] = None,\n file: str = \"hybrid\",\n) -> Breakpoint:\n \"\"\"Sets a breakpoint at the specified location.\n\n Args:\n position (int | bytes): The location of the breakpoint.\n hardware (bool, optional): Whether the breakpoint should be hardware-assisted or purely software. Defaults to False.\n condition (str, optional): The trigger condition for the breakpoint. Defaults to None.\n length (int, optional): The length of the breakpoint. Only for watchpoints. Defaults to 1.\n callback (None | bool | Callable[[ThreadContext, Breakpoint], None], optional): A callback to be called when the breakpoint is hit. If True, an empty callback will be set. Defaults to None.\n file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. 
the \"binary\" map file).\n \"\"\"\n if isinstance(position, str):\n address = self.resolve_symbol(position, file)\n else:\n address = self.resolve_address(position, file)\n position = hex(address)\n\n if condition != \"x\" and not hardware:\n raise ValueError(\"Breakpoint condition is supported only for hardware watchpoints.\")\n\n if callback is True:\n\n def callback(_: ThreadContext, __: Breakpoint) -> None:\n pass\n\n bp = Breakpoint(address, position, 0, hardware, callback, condition.lower(), length)\n\n if hardware:\n validate_hardware_breakpoint(self.arch, bp)\n\n link_to_internal_debugger(bp, self)\n\n self.__polling_thread_command_queue.put((self.__threaded_breakpoint, (bp,)))\n\n self._join_and_check_status()\n\n # the breakpoint should have been set by interface\n if address not in self.breakpoints:\n raise RuntimeError(\"Something went wrong while inserting the breakpoint.\")\n\n return bp\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.catch_signal","title":"catch_signal(signal, callback=None, recursive=False)","text":"Catch a signal in the target process.
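The SIGKILL and SIGSTOP restrictions described below are kernel rules, not a libdebug choice; even plain `signal.signal` refuses them. A quick POSIX-only demonstration (the `try_install` helper is ours):

```python
import signal

def try_install(sig):
    """Attempt to install a no-op handler; report whether the kernel allows it."""
    try:
        old = signal.signal(sig, lambda signum, frame: None)
        signal.signal(sig, old)  # restore the previous handler
        return True
    except (OSError, ValueError):
        return False

print(try_install(signal.SIGKILL))  # False: SIGKILL cannot be caught or ignored
print(try_install(signal.SIGSTOP))  # False: SIGSTOP is reserved as well
```

libdebug raises the same restrictions eagerly as `ValueError`, and only warns for SIGTRAP, which it uses internally.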
Parameters:
Name Type Description Defaultsignal int | str The signal to catch. If \"*\", \"ALL\", \"all\" or -1 is passed, all signals will be caught.
requiredcallback None | bool | Callable[[ThreadContext, SignalCatcher], None] A callback to be called when the signal is caught. If True, an empty callback will be set. Defaults to None.
None recursive bool Whether, when the signal is hijacked with another one, the signal catcher associated with the new signal should be considered as well. Defaults to False.
False Returns:
Name Type DescriptionSignalCatcher SignalCatcher The SignalCatcher object.
Source code inlibdebug/debugger/internal_debugger.py @background_alias(_background_invalid_call)\n@change_state_function_process\ndef catch_signal(\n self: InternalDebugger,\n signal: int | str,\n callback: None | bool | Callable[[ThreadContext, SignalCatcher], None] = None,\n recursive: bool = False,\n) -> SignalCatcher:\n \"\"\"Catch a signal in the target process.\n\n Args:\n signal (int | str): The signal to catch. If \"*\", \"ALL\", \"all\" or -1 is passed, all signals will be caught.\n callback (None | bool | Callable[[ThreadContext, SignalCatcher], None], optional): A callback to be called when the signal is caught. If True, an empty callback will be set. Defaults to None.\n recursive (bool, optional): Whether, when the signal is hijacked with another one, the signal catcher associated with the new signal should be considered as well. Defaults to False.\n\n Returns:\n SignalCatcher: The SignalCatcher object.\n \"\"\"\n if isinstance(signal, str):\n signal_number = resolve_signal_number(signal)\n elif isinstance(signal, int):\n signal_number = signal\n else:\n raise TypeError(\"signal must be an int or a str\")\n\n match signal_number:\n case SIGKILL.value:\n raise ValueError(\n f\"Cannot catch SIGKILL ({signal_number}) as it cannot be caught or ignored. This is a kernel restriction.\",\n )\n case SIGSTOP.value:\n raise ValueError(\n f\"Cannot catch SIGSTOP ({signal_number}) as it is used by the debugger or ptrace for their internal operations.\",\n )\n case SIGTRAP.value:\n liblog.warning(\n f\"Catching SIGTRAP ({signal_number}) may interfere with libdebug operations as it is used by the debugger or ptrace for their internal operations. Use with care.\"\n )\n\n if signal_number in self.caught_signals:\n liblog.warning(\n f\"Signal {resolve_signal_name(signal_number)} ({signal_number}) has already been caught. 
Overriding it.\",\n )\n\n if not isinstance(recursive, bool):\n raise TypeError(\"recursive must be a boolean\")\n\n if callback is True:\n\n def callback(_: ThreadContext, __: SignalCatcher) -> None:\n pass\n\n catcher = SignalCatcher(signal_number, callback, recursive)\n\n link_to_internal_debugger(catcher, self)\n\n self.__polling_thread_command_queue.put((self.__threaded_catch_signal, (catcher,)))\n\n self._join_and_check_status()\n\n return catcher\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.clear","title":"clear()","text":"Reinitializes the context, so it is ready for a new run.
Source code inlibdebug/debugger/internal_debugger.py def clear(self: InternalDebugger) -> None:\n \"\"\"Reinitializes the context, so it is ready for a new run.\"\"\"\n # These must be reinitialized on every call to \"run\"\n self.breakpoints.clear()\n self.handled_syscalls.clear()\n self.caught_signals.clear()\n self.syscalls_to_pprint = None\n self.syscalls_to_not_pprint = None\n self.signals_to_block.clear()\n self.pprint_syscalls = False\n self.pipe_manager = None\n self.process_id = 0\n\n for t in self.threads:\n del t.regs.register_file\n del t.regs._fp_register_file\n\n self.threads.clear()\n self.instanced = False\n self.is_debugging = False\n self._is_running = False\n self.resume_context.clear()\n self.children.clear()\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.cont","title":"cont()","text":"Continues the process.
Source code in libdebug/debugger/internal_debugger.py @background_alias(_background_invalid_call)\n@change_state_function_process\ndef cont(self: InternalDebugger) -> None:\n \"\"\"Continues the process.\n\n Args:\n auto_wait (bool, optional): Whether to automatically wait for the process to stop after continuing. Defaults to True.\n \"\"\"\n self.__polling_thread_command_queue.put((self.__threaded_cont, ()))\n\n self._join_and_check_status()\n\n self.__polling_thread_command_queue.put((self.__threaded_wait, ()))\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.create_snapshot","title":"create_snapshot(level='base', name=None)","text":"Create a snapshot of the current process state.
Snapshot levels:
- base: Registers
- writable: Registers, writable memory contents
- full: Registers, all memory contents
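The three levels differ only in how much memory is captured. A small table-driven sketch (the level names are libdebug's; the helper and the capture labels are ours):

```python
SNAPSHOT_LEVELS = {
    "base": ("registers",),
    "writable": ("registers", "writable_memory"),
    "full": ("registers", "all_memory"),
}

def snapshot_contents(level: str = "base"):
    """Return what a snapshot of the given level captures, rejecting unknown levels."""
    try:
        return SNAPSHOT_LEVELS[level]
    except KeyError:
        raise ValueError(f"Invalid snapshot level {level!r}") from None

print(snapshot_contents("writable"))  # ('registers', 'writable_memory')
```

Higher levels cost more time and memory to take; `base` is usually enough for register diffing.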
Parameters:
Name Type Description Defaultlevel str The level of the snapshot.
'base' name str The name of the snapshot. Defaults to None.
None Returns:
Name Type DescriptionProcessSnapshot ProcessSnapshot The created snapshot.
Source code inlibdebug/debugger/internal_debugger.py @change_state_function_process\ndef create_snapshot(self: Debugger, level: str = \"base\", name: str | None = None) -> ProcessSnapshot:\n \"\"\"Create a snapshot of the current process state.\n\n Snapshot levels:\n - base: Registers\n - writable: Registers, writable memory contents\n - full: Registers, all memory contents\n\n Args:\n level (str): The level of the snapshot.\n name (str, optional): The name of the snapshot. Defaults to None.\n\n Returns:\n ProcessSnapshot: The created snapshot.\n \"\"\"\n self._ensure_process_stopped()\n return ProcessSnapshot(self, level, name)\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.detach","title":"detach()","text":"Detaches from the process.
Source code in libdebug/debugger/internal_debugger.py def detach(self: InternalDebugger) -> None:\n \"\"\"Detaches from the process.\"\"\"\n if not self.is_debugging:\n raise RuntimeError(\"Process not running, cannot detach.\")\n\n self._ensure_process_stopped()\n\n self.__polling_thread_command_queue.put((self.__threaded_detach, ()))\n\n self.is_debugging = False\n\n self._join_and_check_status()\n\n self._process_memory_manager.close()\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.disable_pretty_print","title":"disable_pretty_print()","text":"Disable the handler for all the syscalls that are pretty printed.
Source code in libdebug/debugger/internal_debugger.py def disable_pretty_print(self: InternalDebugger) -> None:\n \"\"\"Disable the handler for all the syscalls that are pretty printed.\"\"\"\n self._ensure_process_stopped()\n\n installed_handlers = list(self.handled_syscalls.values())\n for handler in installed_handlers:\n if handler.on_enter_pprint or handler.on_exit_pprint:\n if handler.on_enter_user or handler.on_exit_user:\n handler.on_enter_pprint = None\n handler.on_exit_pprint = None\n else:\n self.__polling_thread_command_queue.put(\n (self.__threaded_unhandle_syscall, (handler,)),\n )\n\n self._join_and_check_status()\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.enable_pretty_print","title":"enable_pretty_print()","text":"Handles a syscall in the target process to pretty print its arguments and return value.
Source code inlibdebug/debugger/internal_debugger.py def enable_pretty_print(\n self: InternalDebugger,\n) -> SyscallHandler:\n \"\"\"Handles a syscall in the target process to pretty prints its arguments and return value.\"\"\"\n self._ensure_process_stopped()\n\n syscall_numbers = get_all_syscall_numbers(self.arch)\n\n for syscall_number in syscall_numbers:\n # Check if the syscall is already handled (by the user or by the pretty print handler)\n if syscall_number in self.handled_syscalls:\n handler = self.handled_syscalls[syscall_number]\n if syscall_number not in (self.syscalls_to_not_pprint or []) and syscall_number in (\n self.syscalls_to_pprint or syscall_numbers\n ):\n handler.on_enter_pprint = pprint_on_enter\n handler.on_exit_pprint = pprint_on_exit\n else:\n # Remove the pretty print handler from previous pretty print calls\n handler.on_enter_pprint = None\n handler.on_exit_pprint = None\n elif syscall_number not in (self.syscalls_to_not_pprint or []) and syscall_number in (\n self.syscalls_to_pprint or syscall_numbers\n ):\n handler = SyscallHandler(\n syscall_number,\n None,\n None,\n pprint_on_enter,\n pprint_on_exit,\n )\n\n link_to_internal_debugger(handler, self)\n\n # We have to disable the handler since it is not user-defined\n handler.disable()\n\n self.__polling_thread_command_queue.put(\n (self.__threaded_handle_syscall, (handler,)),\n )\n\n self._join_and_check_status()\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.finish","title":"finish(thread, heuristic='backtrace')","text":"Continues execution until the current function returns or the process stops.
The command requires a heuristic to determine the end of the function. The available heuristics are:
- backtrace: The debugger will place a breakpoint on the saved return address found on the stack and continue execution on all threads.
- step-mode: The debugger will step on the specified thread until the current function returns. This will be slower.
Parameters:
Name Type Description Defaultthread ThreadContext The thread to finish.
requiredheuristic str The heuristic to use. Defaults to \"backtrace\".
'backtrace' Source code in libdebug/debugger/internal_debugger.py @background_alias(_background_finish)\n@change_state_function_thread\ndef finish(self: InternalDebugger, thread: ThreadContext, heuristic: str = \"backtrace\") -> None:\n \"\"\"Continues execution until the current function returns or the process stops.\n\n The command requires a heuristic to determine the end of the function. The available heuristics are:\n - `backtrace`: The debugger will place a breakpoint on the saved return address found on the stack and continue execution on all threads.\n - `step-mode`: The debugger will step on the specified thread until the current function returns. This will be slower.\n\n Args:\n thread (ThreadContext): The thread to finish.\n heuristic (str, optional): The heuristic to use. Defaults to \"backtrace\".\n \"\"\"\n self.__polling_thread_command_queue.put(\n (self.__threaded_finish, (thread, heuristic)),\n )\n\n self._join_and_check_status()\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.gdb","title":"gdb(migrate_breakpoints=True, open_in_new_process=True, blocking=True)","text":"Migrates the current debugging session to GDB.
Source code inlibdebug/debugger/internal_debugger.py @background_alias(_background_invalid_call)\n@change_state_function_process\ndef gdb(\n self: InternalDebugger,\n migrate_breakpoints: bool = True,\n open_in_new_process: bool = True,\n blocking: bool = True,\n) -> GdbResumeEvent:\n \"\"\"Migrates the current debugging session to GDB.\"\"\"\n # TODO: not needed?\n self.interrupt()\n\n # Create the command file\n command_file = self._craft_gdb_migration_file(migrate_breakpoints)\n\n if open_in_new_process and libcontext.terminal:\n lambda_fun = self._open_gdb_in_new_process(command_file)\n elif open_in_new_process:\n self._auto_detect_terminal()\n if not libcontext.terminal:\n liblog.warning(\n \"Cannot auto-detect terminal. Please configure the terminal in libcontext.terminal. Opening gdb in the current shell.\",\n )\n lambda_fun = self._open_gdb_in_shell(command_file)\n else:\n lambda_fun = self._open_gdb_in_new_process(command_file)\n else:\n lambda_fun = self._open_gdb_in_shell(command_file)\n\n resume_event = GdbResumeEvent(self, lambda_fun)\n\n self._is_migrated_to_gdb = True\n\n if blocking:\n resume_event.join()\n return None\n else:\n return resume_event\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.get_thread_by_id","title":"get_thread_by_id(thread_id)","text":"Get a thread by its ID.
Parameters:
Name Type Description Defaultthread_id int the ID of the thread to get.
requiredReturns:
Name Type DescriptionThreadContext ThreadContext the thread with the specified ID.
Source code in libdebug/debugger/internal_debugger.py def get_thread_by_id(self: InternalDebugger, thread_id: int) -> ThreadContext:\n \"\"\"Get a thread by its ID.\n\n Args:\n thread_id (int): the ID of the thread to get.\n\n Returns:\n ThreadContext: the thread with the specified ID.\n \"\"\"\n for thread in self.threads:\n if thread.thread_id == thread_id and not thread.dead:\n return thread\n\n return None\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.handle_syscall","title":"handle_syscall(syscall, on_enter=None, on_exit=None, recursive=False)","text":"Handle a syscall in the target process.
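The catch-all spellings accepted here ("*", "ALL", "all", or -1) all resolve to the same sentinel. A tiny model of that resolution (a hypothetical helper; libdebug's real `resolve_syscall_number` also maps syscall names to per-architecture numbers):

```python
def resolve_catch_all(syscall):
    """Map the wildcard spellings to the -1 'ALL' sentinel; pass numbers through."""
    if syscall in ("*", "ALL", "all", -1):
        return -1
    if isinstance(syscall, int):
        return syscall
    raise TypeError("syscall must be an int or a str")

print(resolve_catch_all("*"))    # -1
print(resolve_catch_all("ALL"))  # -1
print(resolve_catch_all(1))      # 1
```

The -1 sentinel is why hijacking *onto* the 'ALL' syscall is rejected further down: it is not a real syscall number.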
Parameters:
Name Type Description Defaultsyscall int | str The syscall name or number to handle. If \"*\", \"ALL\", \"all\", or -1 is passed, all syscalls will be handled.
requiredon_enter None | bool | Callable[[ThreadContext, SyscallHandler], None] The callback to execute when the syscall is entered. If True, an empty callback will be set. Defaults to None.
None on_exit None | bool | Callable[[ThreadContext, SyscallHandler], None] The callback to execute when the syscall is exited. If True, an empty callback will be set. Defaults to None.
None recursive bool Whether, when the syscall is hijacked with another one, the syscall handler associated with the new syscall should be considered as well. Defaults to False.
False Returns:
Name Type DescriptionSyscallHandler SyscallHandler The SyscallHandler object.
Source code inlibdebug/debugger/internal_debugger.py @background_alias(_background_invalid_call)\n@change_state_function_process\ndef handle_syscall(\n self: InternalDebugger,\n syscall: int | str,\n on_enter: Callable[[ThreadContext, SyscallHandler], None] | None = None,\n on_exit: Callable[[ThreadContext, SyscallHandler], None] | None = None,\n recursive: bool = False,\n) -> SyscallHandler:\n \"\"\"Handle a syscall in the target process.\n\n Args:\n syscall (int | str): The syscall name or number to handle. If \"*\", \"ALL\", \"all\", or -1 is passed, all syscalls will be handled.\n on_enter (None | bool |Callable[[ThreadContext, SyscallHandler], None], optional): The callback to execute when the syscall is entered. If True, an empty callback will be set. Defaults to None.\n on_exit (None | bool | Callable[[ThreadContext, SyscallHandler], None], optional): The callback to execute when the syscall is exited. If True, an empty callback will be set. Defaults to None.\n recursive (bool, optional): Whether, when the syscall is hijacked with another one, the syscall handler associated with the new syscall should be considered as well. Defaults to False.\n\n Returns:\n SyscallHandler: The SyscallHandler object.\n \"\"\"\n syscall_number = resolve_syscall_number(self.arch, syscall) if isinstance(syscall, str) else syscall\n\n if not isinstance(recursive, bool):\n raise TypeError(\"recursive must be a boolean\")\n\n if on_enter is True:\n\n def on_enter(_: ThreadContext, __: SyscallHandler) -> None:\n pass\n\n if on_exit is True:\n\n def on_exit(_: ThreadContext, __: SyscallHandler) -> None:\n pass\n\n # Check if the syscall is already handled (by the user or by the pretty print handler)\n if syscall_number in self.handled_syscalls:\n handler = self.handled_syscalls[syscall_number]\n if handler.on_enter_user or handler.on_exit_user:\n liblog.warning(\n f\"Syscall {resolve_syscall_name(self.arch, syscall_number)} is already handled by a user-defined handler. 
Overriding it.\",\n )\n handler.on_enter_user = on_enter\n handler.on_exit_user = on_exit\n handler.recursive = recursive\n handler.enabled = True\n else:\n handler = SyscallHandler(\n syscall_number,\n on_enter,\n on_exit,\n None,\n None,\n recursive,\n )\n\n link_to_internal_debugger(handler, self)\n\n self.__polling_thread_command_queue.put(\n (self.__threaded_handle_syscall, (handler,)),\n )\n\n self._join_and_check_status()\n\n return handler\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.hijack_signal","title":"hijack_signal(original_signal, new_signal, recursive=False)","text":"Hijack a signal in the target process.
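As the source listing below shows, `hijack_signal` is just `catch_signal` with a callback that rewrites the delivered signal. The mechanism can be sketched with a stand-in thread object (`FakeThread` and `make_hijacker` are ours, not libdebug's):

```python
from dataclasses import dataclass
from signal import SIGINT, SIGTERM

@dataclass
class FakeThread:
    signal: int  # the signal that will be forwarded to the tracee

def make_hijacker(new_signal: int):
    """Build the catch_signal callback that hijack_signal installs."""
    def callback(thread, catcher):
        thread.signal = new_signal  # rewrite the signal before delivery
    return callback

t = FakeThread(signal=SIGINT.value)
make_hijacker(SIGTERM.value)(t, None)
print(t.signal == SIGTERM.value)  # True: SIGINT was replaced with SIGTERM
```

Because hijacking is implemented on top of `catch_signal`, the `recursive` flag behaves identically in both APIs.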
Parameters:
Name Type Description Defaultoriginal_signal int | str The signal to hijack. If \"*\", \"ALL\", \"all\" or -1 is passed, all signals will be hijacked.
requirednew_signal int | str The signal to hijack the original signal with.
requiredrecursive bool Whether, when the signal is hijacked with another one, the signal catcher associated with the new signal should be considered as well. Defaults to False.
False Returns:
Name Type DescriptionSignalCatcher SignalCatcher The SignalCatcher object.
Source code inlibdebug/debugger/internal_debugger.py @background_alias(_background_invalid_call)\n@change_state_function_process\ndef hijack_signal(\n self: InternalDebugger,\n original_signal: int | str,\n new_signal: int | str,\n recursive: bool = False,\n) -> SignalCatcher:\n \"\"\"Hijack a signal in the target process.\n\n Args:\n original_signal (int | str): The signal to hijack. If \"*\", \"ALL\", \"all\" or -1 is passed, all signals will be hijacked.\n new_signal (int | str): The signal to hijack the original signal with.\n recursive (bool, optional): Whether, when the signal is hijacked with another one, the signal catcher associated with the new signal should be considered as well. Defaults to False.\n\n Returns:\n SignalCatcher: The SignalCatcher object.\n \"\"\"\n if isinstance(original_signal, str):\n original_signal_number = resolve_signal_number(original_signal)\n else:\n original_signal_number = original_signal\n\n new_signal_number = resolve_signal_number(new_signal) if isinstance(new_signal, str) else new_signal\n\n if new_signal_number == -1:\n raise ValueError(\"Cannot hijack a signal with the 'ALL' signal.\")\n\n if original_signal_number == new_signal_number:\n raise ValueError(\n \"The original signal and the new signal must be different during hijacking.\",\n )\n\n def callback(thread: ThreadContext, _: SignalCatcher) -> None:\n \"\"\"The callback to execute when the signal is received.\"\"\"\n thread.signal = new_signal_number\n\n return self.catch_signal(original_signal_number, callback, recursive)\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.hijack_syscall","title":"hijack_syscall(original_syscall, new_syscall, recursive=True, **kwargs)","text":"Hijacks a syscall in the target process.
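The keyword-argument check at the top of this method is a plain set difference against an allow-list. A sketch with made-up slot names (the real allow-list is `SyscallHijacker.allowed_args` inside libdebug):

```python
ALLOWED_ARGS = {"syscall_arg0", "syscall_arg1", "syscall_arg2"}  # hypothetical names

def validate_hijack_kwargs(**kwargs):
    """Reject any keyword that is not a known syscall-argument slot."""
    unknown = set(kwargs) - ALLOWED_ARGS
    if unknown:
        raise ValueError("Invalid keyword arguments in syscall hijack")
    return kwargs

validate_hijack_kwargs(syscall_arg0=1)  # accepted
try:
    validate_hijack_kwargs(bogus=2)
except ValueError as e:
    print(e)  # Invalid keyword arguments in syscall hijack
```

Set difference makes the check order-independent and reports any unknown keyword, not just the first.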
Parameters:
Name Type Description Defaultoriginal_syscall int | str The syscall name or number to hijack. If \"*\", \"ALL\", \"all\" or -1 is passed, all syscalls will be hijacked.
requirednew_syscall int | str The syscall name or number to hijack the original syscall with.
requiredrecursive bool Whether, when the syscall is hijacked with another one, the syscall handler associated with the new syscall should be considered as well. Defaults to True.
True **kwargs int The arguments to pass to the new syscall.
{} Returns:
Name Type DescriptionSyscallHandler SyscallHandler The SyscallHandler object.
Source code inlibdebug/debugger/internal_debugger.py @background_alias(_background_invalid_call)\n@change_state_function_process\ndef hijack_syscall(\n self: InternalDebugger,\n original_syscall: int | str,\n new_syscall: int | str,\n recursive: bool = True,\n **kwargs: int,\n) -> SyscallHandler:\n \"\"\"Hijacks a syscall in the target process.\n\n Args:\n original_syscall (int | str): The syscall name or number to hijack. If \"*\", \"ALL\", \"all\" or -1 is passed, all syscalls will be hijacked.\n new_syscall (int | str): The syscall name or number to hijack the original syscall with.\n recursive (bool, optional): Whether, when the syscall is hijacked with another one, the syscall handler associated with the new syscall should be considered as well. Defaults to False.\n **kwargs: (int, optional): The arguments to pass to the new syscall.\n\n Returns:\n SyscallHandler: The SyscallHandler object.\n \"\"\"\n if set(kwargs) - SyscallHijacker.allowed_args:\n raise ValueError(\"Invalid keyword arguments in syscall hijack\")\n\n if isinstance(original_syscall, str):\n original_syscall_number = resolve_syscall_number(self.arch, original_syscall)\n else:\n original_syscall_number = original_syscall\n\n new_syscall_number = (\n resolve_syscall_number(self.arch, new_syscall) if isinstance(new_syscall, str) else new_syscall\n )\n\n if new_syscall_number == -1:\n raise ValueError(\"Cannot hijack a syscall with the 'ALL' syscall.\")\n\n if original_syscall_number == new_syscall_number:\n raise ValueError(\n \"The original syscall and the new syscall must be different during hijacking.\",\n )\n\n on_enter = SyscallHijacker().create_hijacker(\n new_syscall_number,\n **kwargs,\n )\n\n # Check if the syscall is already handled (by the user or by the pretty print handler)\n if original_syscall_number in self.handled_syscalls:\n handler = self.handled_syscalls[original_syscall_number]\n if handler.on_enter_user or handler.on_exit_user:\n liblog.warning(\n f\"Syscall 
{original_syscall_number} is already handled by a user-defined handler. Overriding it.\",\n )\n handler.on_enter_user = on_enter\n handler.on_exit_user = None\n handler.recursive = recursive\n handler.enabled = True\n else:\n handler = SyscallHandler(\n original_syscall_number,\n on_enter,\n None,\n None,\n None,\n recursive,\n )\n\n link_to_internal_debugger(handler, self)\n\n self.__polling_thread_command_queue.put(\n (self.__threaded_handle_syscall, (handler,)),\n )\n\n self._join_and_check_status()\n\n return handler\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.insert_new_thread","title":"insert_new_thread(thread)","text":"Insert a new thread in the context.
Parameters:
Name Type Description Defaultthread ThreadContext the thread to insert.
required Source code in libdebug/debugger/internal_debugger.py def insert_new_thread(self: InternalDebugger, thread: ThreadContext) -> None:\n \"\"\"Insert a new thread in the context.\n\n Args:\n thread (ThreadContext): the thread to insert.\n \"\"\"\n if thread in self.threads:\n raise RuntimeError(\"Thread already registered.\")\n\n self.threads.append(thread)\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.interrupt","title":"interrupt()","text":"Interrupts the process.
Source code in libdebug/debugger/internal_debugger.py @background_alias(_background_invalid_call)\ndef interrupt(self: InternalDebugger) -> None:\n \"\"\"Interrupts the process.\"\"\"\n if not self.is_debugging:\n raise RuntimeError(\"Process not running, cannot interrupt.\")\n\n # We have to ensure that at least one thread is alive before executing the method\n if self.threads[0].dead:\n raise RuntimeError(\"All threads are dead.\")\n\n if not self.running:\n return\n\n self.resume_context.force_interrupt = True\n os.kill(self.process_id, SIGSTOP)\n\n self.wait()\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.kill","title":"kill()","text":"Kills the process.
Source code inlibdebug/debugger/internal_debugger.py @background_alias(_background_invalid_call)\ndef kill(self: InternalDebugger) -> None:\n \"\"\"Kills the process.\"\"\"\n if not self.is_debugging:\n raise RuntimeError(\"No process currently debugged, cannot kill.\")\n try:\n self._ensure_process_stopped()\n except (OSError, RuntimeError):\n # This exception might occur if the process has already died\n liblog.debugger(\"OSError raised during kill\")\n\n self._process_memory_manager.close()\n\n self.__polling_thread_command_queue.put((self.__threaded_kill, ()))\n\n self.instanced = False\n self.is_debugging = False\n\n self.set_all_threads_as_dead()\n\n if self.pipe_manager:\n self.pipe_manager.close()\n\n self._join_and_check_status()\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.load_snapshot","title":"load_snapshot(file_path)","text":"Load a snapshot of the thread / process state.
Parameters:
Name Type Description Defaultfile_path str The path to the snapshot file.
required Source code inlibdebug/debugger/internal_debugger.py def load_snapshot(self: Debugger, file_path: str) -> Snapshot:\n \"\"\"Load a snapshot of the thread / process state.\n\n Args:\n file_path (str): The path to the snapshot file.\n \"\"\"\n loaded_snap = self.serialization_helper.load(file_path)\n\n # Log the creation of the snapshot\n named_addition = \" named \" + loaded_snap.name if loaded_snap.name is not None else \"\"\n liblog.debugger(\n f\"Loaded {type(loaded_snap)} snapshot {loaded_snap.snapshot_id} of level {loaded_snap.level} from file {file_path}{named_addition}\"\n )\n\n return loaded_snap\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.next","title":"next(thread)","text":"Executes the next instruction of the process. If the instruction is a call, the debugger will continue until the called function returns.
Source code in libdebug/debugger/internal_debugger.py @background_alias(_background_next)\n@change_state_function_thread\ndef next(self: InternalDebugger, thread: ThreadContext) -> None:\n \"\"\"Executes the next instruction of the process. If the instruction is a call, the debugger will continue until the called function returns.\"\"\"\n self._ensure_process_stopped()\n self.__polling_thread_command_queue.put((self.__threaded_next, (thread,)))\n self._join_and_check_status()\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.notify_snaphot_taken","title":"notify_snaphot_taken()","text":"Notify the debugger that a snapshot has been taken.
Source code in libdebug/debugger/internal_debugger.py def notify_snaphot_taken(self: InternalDebugger) -> None:\n \"\"\"Notify the debugger that a snapshot has been taken.\"\"\"\n self._snapshot_count += 1\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.pprint_maps","title":"pprint_maps()","text":"Prints the memory maps of the process.
Source code in libdebug/debugger/internal_debugger.py def pprint_maps(self: InternalDebugger) -> None:\n \"\"\"Prints the memory maps of the process.\"\"\"\n self._ensure_process_stopped()\n pprint_maps_util(self.maps)\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.pprint_memory","title":"pprint_memory(start, end, file='hybrid', override_word_size=None, integer_mode=False)","text":"Pretty print the memory diff.
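Two small normalizations happen before any printing: swapped bounds are reordered, and the word size falls back to the platform register size. A sketch of just that bookkeeping (the 8-byte default stands in for `get_platform_gp_register_size` on a 64-bit target; the helper name is ours):

```python
def normalize_range(start: int, end: int, override_word_size=None,
                    platform_word_size: int = 8):
    # pprint_memory accepts start > end and silently swaps the bounds
    if start > end:
        start, end = end, start
    # the ISA word size is used unless the caller overrides it
    word_size = platform_word_size if override_word_size is None else override_word_size
    return start, end, word_size

print(normalize_range(0x2000, 0x1000))                        # (4096, 8192, 8)
print(normalize_range(0x1000, 0x2000, override_word_size=4))  # (4096, 8192, 4)
```

After normalization, the real method resolves the start address through the same absolute/hybrid/named-file logic used elsewhere in this class.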
Parameters:
Name Type Description Defaultstart int The start address of the memory diff.
requiredend int The end address of the memory diff.
requiredfile str The backing file for relative / absolute addressing. Defaults to \"hybrid\".
'hybrid' override_word_size int The word size to use for the diff in place of the ISA word size. Defaults to None.
None integer_mode bool If True, the diff will be printed as hex integers (system endianness applies). Defaults to False.
False Source code in libdebug/debugger/internal_debugger.py def pprint_memory(\n self: InternalDebugger,\n start: int,\n end: int,\n file: str = \"hybrid\",\n override_word_size: int | None = None,\n integer_mode: bool = False,\n) -> None:\n \"\"\"Pretty print the memory diff.\n\n Args:\n start (int): The start address of the memory diff.\n end (int): The end address of the memory diff.\n file (str, optional): The backing file for relative / absolute addressing. Defaults to \"hybrid\".\n override_word_size (int, optional): The word size to use for the diff in place of the ISA word size. Defaults to None.\n integer_mode (bool, optional): If True, the diff will be printed as hex integers (system endianness applies). Defaults to False.\n \"\"\"\n if start > end:\n tmp = start\n start = end\n end = tmp\n\n word_size = get_platform_gp_register_size(self.arch) if override_word_size is None else override_word_size\n\n # Resolve the address\n if file == \"absolute\":\n address_start = start\n elif file == \"hybrid\":\n try:\n # Try to resolve the address as absolute\n self.memory[start, 1, \"absolute\"]\n address_start = start\n except ValueError:\n # If the address is not in the maps, we use the binary file\n address_start = start + self.maps.filter(\"binary\")[0].start\n file = \"binary\"\n else:\n map_file = self.maps.filter(file)[0]\n address_start = start + map_file.base\n file = map_file.backing_file if file != \"binary\" else \"binary\"\n\n extract = self.memory[start:end, file]\n\n file_info = f\" (file: {file})\" if file not in (\"absolute\", \"hybrid\") else \"\"\n print(f\"Memory from {start:#x} to {end:#x}{file_info}:\")\n\n pprint_memory_util(\n address_start,\n extract,\n word_size,\n self.maps,\n integer_mode=integer_mode,\n )\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.resolve_address","title":"resolve_address(address, backing_file, 
skip_absolute_address_validation=False)","text":"Normalizes and validates the specified address.
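The "hybrid" lookup documented below can be modeled with a toy map list: try the value as an absolute address first, and only then rebase it against the binary's load address (the maps and the 0x400000 base are invented for the sketch):

```python
BINARY_BASE = 0x400000  # made-up load address of the main binary
MAPS = [(0x400000, 0x500000), (0x7F0000000000, 0x7F0000100000)]  # toy memory maps

def resolve_hybrid(address: int) -> int:
    """Absolute first, then relative to the binary base, as in 'hybrid' mode."""
    if any(lo <= address < hi for lo, hi in MAPS):
        return address            # already a valid absolute address
    return BINARY_BASE + address  # fall back to binary-relative

print(hex(resolve_hybrid(0x401000)))  # absolute hit: 0x401000
print(hex(resolve_hybrid(0x1000)))    # rebased: 0x401000
```

The real method warns when it falls back to the binary map, and in "absolute" mode it raises instead of rebasing.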
Parameters:
Name Type Description Defaultaddress int The address to normalize and validate.
requiredbacking_file str The backing file to resolve the address in.
requiredskip_absolute_address_validation bool Whether to skip bounds checking for absolute addresses. Defaults to False.
False Returns:
Name Type Descriptionint int The normalized and validated address.
Raises:
Type DescriptionValueError If the substring backing_file is present in multiple backing files.
Source code in libdebug/debugger/internal_debugger.py def resolve_address(\n self: InternalDebugger,\n address: int,\n backing_file: str,\n skip_absolute_address_validation: bool = False,\n) -> int:\n \"\"\"Normalizes and validates the specified address.\n\n Args:\n address (int): The address to normalize and validate.\n backing_file (str): The backing file to resolve the address in.\n skip_absolute_address_validation (bool, optional): Whether to skip bounds checking for absolute addresses. Defaults to False.\n\n Returns:\n int: The normalized and validated address.\n\n Raises:\n ValueError: If the substring `backing_file` is present in multiple backing files.\n \"\"\"\n if skip_absolute_address_validation and backing_file == \"absolute\":\n return address\n\n maps = self.maps\n\n if backing_file in [\"hybrid\", \"absolute\"]:\n if maps.filter(address):\n # If the address is absolute, we can return it directly\n return address\n elif backing_file == \"absolute\":\n # The address is explicitly an absolute address but we did not find it\n raise ValueError(\n \"The specified absolute address does not exist. Check the address or specify a backing file.\",\n )\n else:\n # If the address was not found and the backing file is not \"absolute\",\n # we have to assume it is in the main map\n backing_file = self._process_full_path\n liblog.warning(\n f\"No backing file specified and no corresponding absolute address found for {hex(address)}. Assuming `{backing_file}`.\",\n )\n\n filtered_maps = maps.filter(backing_file)\n\n return normalize_and_validate_address(address, filtered_maps)\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.resolve_symbol","title":"resolve_symbol(symbol, backing_file)","text":"Resolves the address of the specified symbol.
Parameters:
Name Type Description Defaultsymbol str The symbol to resolve.
requiredbacking_file str The backing file to resolve the symbol in.
requiredReturns:
Name Type Descriptionint int The address of the symbol.
Source code inlibdebug/debugger/internal_debugger.py @change_state_function_process\ndef resolve_symbol(self: InternalDebugger, symbol: str, backing_file: str) -> int:\n \"\"\"Resolves the address of the specified symbol.\n\n Args:\n symbol (str): The symbol to resolve.\n backing_file (str): The backing file to resolve the symbol in.\n\n Returns:\n int: The address of the symbol.\n \"\"\"\n if backing_file == \"absolute\":\n raise ValueError(\"Cannot use `absolute` backing file with symbols.\")\n\n if backing_file == \"hybrid\":\n # If no explicit backing file is specified, we try resolving the symbol in the main map\n filtered_maps = self.maps.filter(\"binary\")\n try:\n with extend_internal_debugger(self):\n return resolve_symbol_in_maps(symbol, filtered_maps)\n except ValueError:\n liblog.warning(\n f\"No backing file specified for the symbol `{symbol}`. Resolving the symbol in ALL the maps (slow!)\",\n )\n\n # Otherwise, we resolve the symbol in all the maps: as this can be slow,\n # we issue a warning with the file containing it\n maps = self.maps\n with extend_internal_debugger(self):\n address = resolve_symbol_in_maps(symbol, maps)\n\n filtered_maps = self.maps.filter(address)\n if len(filtered_maps) != 1:\n # Shouldn't happen, but you never know...\n raise RuntimeError(\n \"The symbol address is present in zero or multiple backing files. 
Please specify the correct backing file.\",\n )\n liblog.warning(\n f\"Symbol `{symbol}` found in `{filtered_maps[0].backing_file}`, \"\n f\"specify it manually as the backing file for better performance.\",\n )\n\n return address\n\n if backing_file in [\"binary\", self._process_name]:\n backing_file = self._process_full_path\n\n filtered_maps = self.maps.filter(backing_file)\n\n with extend_internal_debugger(self):\n return resolve_symbol_in_maps(symbol, filtered_maps)\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.run","title":"run(redirect_pipes=True)","text":"Starts the process and waits for it to stop.
Parameters:
Name Type Description Defaultredirect_pipes bool Whether to hook and redirect the pipes of the process to a PipeManager.
True Source code in libdebug/debugger/internal_debugger.py def run(self: InternalDebugger, redirect_pipes: bool = True) -> PipeManager | None:\n \"\"\"Starts the process and waits for it to stop.\n\n Args:\n redirect_pipes (bool): Whether to hook and redirect the pipes of the process to a PipeManager.\n \"\"\"\n if not self.argv:\n raise RuntimeError(\"No binary file specified.\")\n\n ensure_file_executable(self.argv[0])\n\n if self.is_debugging:\n liblog.debugger(\"Process already running, stopping it before restarting.\")\n self.kill()\n if self.threads:\n self.clear()\n\n self.debugging_interface.reset()\n\n self.instanced = True\n self.is_debugging = True\n\n if not self.__polling_thread_command_queue.empty():\n raise RuntimeError(\"Polling thread command queue not empty.\")\n\n self.__polling_thread_command_queue.put((self.__threaded_run, (redirect_pipes,)))\n\n self._join_and_check_status()\n\n if self.escape_antidebug:\n liblog.debugger(\"Enabling anti-debugging escape mechanism.\")\n self._enable_antidebug_escaping()\n\n if redirect_pipes and not self.pipe_manager:\n raise RuntimeError(\"Something went wrong during pipe initialization.\")\n\n self._process_memory_manager.open(self.process_id)\n\n return self.pipe_manager\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.set_all_threads_as_dead","title":"set_all_threads_as_dead()","text":"Set all threads as dead.
Source code in libdebug/debugger/internal_debugger.py def set_all_threads_as_dead(self: InternalDebugger) -> None:\n \"\"\"Set all threads as dead.\"\"\"\n for thread in self.threads:\n thread.set_as_dead()\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.set_child_debugger","title":"set_child_debugger(child_pid)","text":"Sets the child debugger after a fork.
Parameters:
Name Type Description Defaultchild_pid int The PID of the child process.
required Source code inlibdebug/debugger/internal_debugger.py def set_child_debugger(self: InternalDebugger, child_pid: int) -> None:\n \"\"\"Sets the child debugger after a fork.\n\n Args:\n child_pid (int): The PID of the child process.\n \"\"\"\n # Create a new InternalDebugger instance for the child process with the same configuration\n # of the parent debugger\n child_internal_debugger = InternalDebugger()\n child_internal_debugger.argv = self.argv\n child_internal_debugger.env = self.env\n child_internal_debugger.aslr_enabled = self.aslr_enabled\n child_internal_debugger.autoreach_entrypoint = self.autoreach_entrypoint\n child_internal_debugger.auto_interrupt_on_command = self.auto_interrupt_on_command\n child_internal_debugger.escape_antidebug = self.escape_antidebug\n child_internal_debugger.fast_memory = self.fast_memory\n child_internal_debugger.kill_on_exit = self.kill_on_exit\n child_internal_debugger.follow_children = self.follow_children\n\n # Create the new Debugger instance for the child process\n child_debugger = Debugger()\n child_debugger.post_init_(child_internal_debugger)\n child_internal_debugger.debugger = child_debugger\n child_debugger.arch = self.arch\n\n # Attach to the child process with the new debugger\n child_internal_debugger.attach(child_pid)\n self.children.append(child_debugger)\n liblog.debugger(\n \"Child process with pid %d registered to the parent debugger (pid %d)\",\n child_pid,\n self.process_id,\n )\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.set_running","title":"set_running()","text":"Set the state of the process to running.
Source code in libdebug/debugger/internal_debugger.py def set_running(self: InternalDebugger) -> None:\n \"\"\"Set the state of the process to running.\"\"\"\n self._is_running = True\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.set_stopped","title":"set_stopped()","text":"Set the state of the process to stopped.
Source code in libdebug/debugger/internal_debugger.py def set_stopped(self: InternalDebugger) -> None:\n \"\"\"Set the state of the process to stopped.\"\"\"\n self._is_running = False\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.set_thread_as_dead","title":"set_thread_as_dead(thread_id, exit_code, exit_signal)","text":"Set a thread as dead and update its exit code and exit signal.
Parameters:
Name Type Description Defaultthread_id int the ID of the thread to set as dead.
requiredexit_code int the exit code of the thread.
requiredexit_signal int the exit signal of the thread.
required Source code inlibdebug/debugger/internal_debugger.py def set_thread_as_dead(\n self: InternalDebugger,\n thread_id: int,\n exit_code: int | None,\n exit_signal: int | None,\n) -> None:\n \"\"\"Set a thread as dead and update its exit code and exit signal.\n\n Args:\n thread_id (int): the ID of the thread to set as dead.\n exit_code (int, optional): the exit code of the thread.\n exit_signal (int, optional): the exit signal of the thread.\n \"\"\"\n for thread in self.threads:\n if thread.thread_id == thread_id:\n thread.set_as_dead()\n thread._exit_code = exit_code\n thread._exit_signal = exit_signal\n break\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.start_processing_thread","title":"start_processing_thread()","text":"Starts the thread that will poll the traced process for state change.
Source code in libdebug/debugger/internal_debugger.py def start_processing_thread(self: InternalDebugger) -> None:\n \"\"\"Starts the thread that will poll the traced process for state change.\"\"\"\n # Set as daemon so that the Python interpreter can exit even if the thread is still running\n self.__polling_thread = Thread(\n target=self.__polling_thread_function,\n name=\"libdebug__polling_thread\",\n daemon=True,\n )\n self.__polling_thread.start()\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.start_up","title":"start_up()","text":"Starts up the context.
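The command-queue pattern used by the polling thread above (and by `run`, `step`, and `wait`, which enqueue work and then join on it) can be sketched in isolation. This is a hypothetical stand-in, not libdebug's implementation; `THREAD_TERMINATE`, `polling_thread_function`, and `results` are made-up names.

```python
# Illustrative polling-thread sketch: callers enqueue (callable, args) pairs,
# and a daemon thread executes them in order. THREAD_TERMINATE is a sentinel.
from queue import Queue
from threading import Thread

THREAD_TERMINATE = object()
results = []

def polling_thread_function(commands: Queue) -> None:
    while True:
        command, args = commands.get()
        if command is THREAD_TERMINATE:
            commands.task_done()
            break
        results.append(command(*args))   # run the command on this thread
        commands.task_done()

commands: Queue = Queue()
worker = Thread(target=polling_thread_function, args=(commands,), daemon=True)
worker.start()

commands.put((lambda x: x * 2, (21,)))
commands.join()                  # block until done, like _join_and_check_status
commands.put((THREAD_TERMINATE, ()))
worker.join()
```

The daemon flag mirrors the real code's intent: the interpreter can exit even if the polling thread is still alive.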
Source code inlibdebug/debugger/internal_debugger.py def start_up(self: InternalDebugger) -> None:\n \"\"\"Starts up the context.\"\"\"\n # The context is linked to itself\n link_to_internal_debugger(self, self)\n\n self.start_processing_thread()\n with extend_internal_debugger(self):\n self.debugging_interface = provide_debugging_interface()\n self._fast_memory = DirectMemoryView(self._fast_read_memory, self._fast_write_memory)\n self._slow_memory = ChunkedMemoryView(\n self._peek_memory,\n self._poke_memory,\n unit_size=get_platform_gp_register_size(libcontext.platform),\n )\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.step","title":"step(thread)","text":"Executes a single instruction of the process.
Parameters:
Name Type Description Defaultthread ThreadContext The thread to step.
required Source code in libdebug/debugger/internal_debugger.py @background_alias(_background_step)\n@change_state_function_thread\ndef step(self: InternalDebugger, thread: ThreadContext) -> None:\n \"\"\"Executes a single instruction of the process.\n\n Args:\n thread (ThreadContext): The thread to step.\n \"\"\"\n self._ensure_process_stopped()\n self.__polling_thread_command_queue.put((self.__threaded_step, (thread,)))\n self.__polling_thread_command_queue.put((self.__threaded_wait, ()))\n self._join_and_check_status()\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.step_until","title":"step_until(thread, position, max_steps=-1, file='hybrid')","text":"Executes instructions of the process until the specified location is reached.
Parameters:
Name Type Description Defaultthread ThreadContext The thread to step.
requiredposition int | str The location to reach.
requiredmax_steps int The maximum number of steps to execute. Defaults to -1.
-1 file str The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).
'hybrid' Source code in libdebug/debugger/internal_debugger.py @background_alias(_background_step_until)\n@change_state_function_thread\ndef step_until(\n self: InternalDebugger,\n thread: ThreadContext,\n position: int | str,\n max_steps: int = -1,\n file: str = \"hybrid\",\n) -> None:\n \"\"\"Executes instructions of the process until the specified location is reached.\n\n Args:\n thread (ThreadContext): The thread to step. Defaults to None.\n position (int | bytes): The location to reach.\n max_steps (int, optional): The maximum number of steps to execute. Defaults to -1.\n file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).\n \"\"\"\n if isinstance(position, str):\n address = self.resolve_symbol(position, file)\n else:\n address = self.resolve_address(position, file)\n\n arguments = (\n thread,\n address,\n max_steps,\n )\n\n self.__polling_thread_command_queue.put((self.__threaded_step_until, arguments))\n\n self._join_and_check_status()\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.terminate","title":"terminate()","text":"Interrupts the process, kills it and then terminates the background thread.
The debugger object will not be usable after this method is called. This method should only be called to free up resources when the debugger object is no longer needed.
Source code inlibdebug/debugger/internal_debugger.py def terminate(self: InternalDebugger) -> None:\n \"\"\"Interrupts the process, kills it and then terminates the background thread.\n\n The debugger object will not be usable after this method is called.\n This method should only be called to free up resources when the debugger object is no longer needed.\n \"\"\"\n if self.instanced and self.running:\n try:\n self.interrupt()\n except ProcessLookupError:\n # The process has already been killed by someone or something else\n liblog.debugger(\"Interrupting process failed: already terminated\")\n\n if self.instanced and self.is_debugging:\n try:\n self.kill()\n except ProcessLookupError:\n # The process has already been killed by someone or something else\n liblog.debugger(\"Killing process failed: already terminated\")\n\n self.instanced = False\n self.is_debugging = False\n\n if self.__polling_thread is not None:\n self.__polling_thread_command_queue.put((THREAD_TERMINATE, ()))\n self.__polling_thread.join()\n del self.__polling_thread\n self.__polling_thread = None\n\n # Remove elemement from internal_debugger_holder to avoid memleaks\n remove_internal_debugger_refs(self)\n\n # Clean up the register accessors\n for thread in self.threads:\n thread._register_holder.cleanup()\n"},{"location":"from_pydoc/generated/debugger/internal_debugger/#libdebug.debugger.internal_debugger.InternalDebugger.wait","title":"wait()","text":"Waits for the process to stop.
Source code inlibdebug/debugger/internal_debugger.py @background_alias(_background_invalid_call)\ndef wait(self: InternalDebugger) -> None:\n \"\"\"Waits for the process to stop.\"\"\"\n if not self.is_debugging:\n raise RuntimeError(\"Process not running, cannot wait.\")\n\n self._join_and_check_status()\n\n if self.threads[0].dead or not self.running:\n # Most of the time the function returns here, as there was a wait already\n # queued by the previous command\n return\n\n self.__polling_thread_command_queue.put((self.__threaded_wait, ()))\n\n self._join_and_check_status()\n"},{"location":"from_pydoc/generated/debugger/internal_debugger_holder/","title":"libdebug.debugger.internal_debugger_holder","text":""},{"location":"from_pydoc/generated/debugger/internal_debugger_holder/#libdebug.debugger.internal_debugger_holder.InternalDebuggerHolder","title":"InternalDebuggerHolder dataclass","text":"A holder for internal debuggers.
Source code in libdebug/debugger/internal_debugger_holder.py @dataclass\nclass InternalDebuggerHolder:\n \"\"\"A holder for internal debuggers.\"\"\"\n\n internal_debuggers: WeakKeyDictionary = field(default_factory=WeakKeyDictionary)\n global_internal_debugger = None\n internal_debugger_lock = Lock()\n"},{"location":"from_pydoc/generated/debugger/internal_debugger_holder/#libdebug.debugger.internal_debugger_holder._cleanup_internal_debugger","title":"_cleanup_internal_debugger()","text":"Cleanup the internal debugger.
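The holder keeps its referrer-to-debugger map in a `WeakKeyDictionary` so that entries vanish when a referrer is garbage-collected, without explicit unregistration. A self-contained illustration (the `Referrer`/`FakeDebugger` classes are stand-ins, not libdebug types):

```python
# Why a WeakKeyDictionary: the entry disappears with its key, avoiding leaks.
import gc
from weakref import WeakKeyDictionary

class Referrer:
    """Stands in for any object linked to a debugger."""

class FakeDebugger:
    """Stands in for an InternalDebugger instance."""

holder: WeakKeyDictionary = WeakKeyDictionary()
referrer = Referrer()
holder[referrer] = FakeDebugger()
entries_before = len(holder)

del referrer                 # drop the only strong reference to the key
gc.collect()                 # ensure collection on non-refcounting runtimes
entries_after = len(holder)
```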
Source code inlibdebug/debugger/internal_debugger_holder.py def _cleanup_internal_debugger() -> None:\n \"\"\"Cleanup the internal debugger.\"\"\"\n for debugger in set(internal_debugger_holder.internal_debuggers.values()):\n debugger: InternalDebugger\n\n # Restore the original stdin settings, just in case\n try:\n if debugger.stdin_settings_backup:\n tcsetattr(sys.stdin.fileno(), TCSANOW, debugger.stdin_settings_backup)\n except Exception as e:\n liblog.debugger(f\"Error while restoring the original stdin settings: {e}\")\n\n if debugger.instanced and debugger.kill_on_exit:\n try:\n debugger.interrupt()\n except Exception as e:\n liblog.debugger(f\"Error while interrupting debuggee: {e}\")\n\n try:\n debugger.terminate()\n except Exception as e:\n liblog.debugger(f\"Error while terminating the debugger: {e}\")\n"},{"location":"from_pydoc/generated/debugger/internal_debugger_instance_manager/","title":"libdebug.debugger.internal_debugger_instance_manager","text":""},{"location":"from_pydoc/generated/debugger/internal_debugger_instance_manager/#libdebug.debugger.internal_debugger_instance_manager.extend_internal_debugger","title":"extend_internal_debugger(referrer)","text":"Extend the internal debugger.
Parameters:
Name Type Description Defaultreferrer object the referrer object.
requiredYields:
Name Type DescriptionInternalDebugger ... the internal debugger.
Source code in libdebug/debugger/internal_debugger_instance_manager.py @contextmanager\ndef extend_internal_debugger(referrer: object) -> ...:\n \"\"\"Extend the internal debugger.\n\n Args:\n referrer (object): the referrer object.\n\n Yields:\n InternalDebugger: the internal debugger.\n \"\"\"\n with internal_debugger_holder.internal_debugger_lock:\n if referrer not in internal_debugger_holder.internal_debuggers:\n raise RuntimeError(\"Referrer isn't linked to any internal debugger.\")\n\n internal_debugger_holder.global_internal_debugger = internal_debugger_holder.internal_debuggers[referrer]\n yield\n internal_debugger_holder.global_internal_debugger = None\n"},{"location":"from_pydoc/generated/debugger/internal_debugger_instance_manager/#libdebug.debugger.internal_debugger_instance_manager.get_global_internal_debugger","title":"get_global_internal_debugger()","text":"Can be used to retrieve a temporarily-global internal debugger.
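The pattern behind `extend_internal_debugger` is a context manager that temporarily publishes one object as a process-wide "global" and clears it on exit. A hypothetical re-creation (the `Holder` class and `extend` name are illustrative; a `try`/`finally` is added here so the global is cleared even if the body raises):

```python
# Temporarily publish a referrer's debugger as the global, clear it on exit.
from contextlib import contextmanager
from threading import Lock

class Holder:
    """Illustrative stand-in for the module-level internal_debugger_holder."""
    def __init__(self) -> None:
        self.internal_debuggers: dict = {}
        self.global_internal_debugger = None
        self.lock = Lock()

holder = Holder()

@contextmanager
def extend(referrer):
    with holder.lock:
        if referrer not in holder.internal_debuggers:
            raise RuntimeError("Referrer isn't linked to any internal debugger.")
        holder.global_internal_debugger = holder.internal_debuggers[referrer]
        try:
            yield holder.global_internal_debugger
        finally:
            holder.global_internal_debugger = None
```

Holding the lock for the whole `with` body serializes concurrent extenders, which is what makes a single global slot safe.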
Source code in libdebug/debugger/internal_debugger_instance_manager.py def get_global_internal_debugger() -> InternalDebugger:\n \"\"\"Can be used to retrieve a temporarily-global internal debugger.\"\"\"\n if internal_debugger_holder.global_internal_debugger is None:\n raise RuntimeError(\"No internal debugger available\")\n return internal_debugger_holder.global_internal_debugger\n"},{"location":"from_pydoc/generated/debugger/internal_debugger_instance_manager/#libdebug.debugger.internal_debugger_instance_manager.link_to_internal_debugger","title":"link_to_internal_debugger(reference, internal_debugger)","text":"Link a reference to an InternalDebugger.
Parameters:
Name Type Description Defaultreference object the object that needs the internal debugger.
requiredinternal_debugger InternalDebugger the internal debugger.
required Source code in libdebug/debugger/internal_debugger_instance_manager.py def link_to_internal_debugger(reference: object, internal_debugger: InternalDebugger) -> None:\n \"\"\"Link a reference to an InternalDebugger.\n\n Args:\n reference (object): the object that needs the internal debugger.\n internal_debugger (InternalDebugger): the internal debugger.\n \"\"\"\n internal_debugger_holder.internal_debuggers[reference] = internal_debugger\n"},{"location":"from_pydoc/generated/debugger/internal_debugger_instance_manager/#libdebug.debugger.internal_debugger_instance_manager.provide_internal_debugger","title":"provide_internal_debugger(reference)","text":"Provide an internal debugger.
Parameters:
Name Type Description Defaultreference object the object that needs the internal debugger.
requiredReturns:
Name Type DescriptionInternalDebugger InternalDebugger the internal debugger.
Source code in libdebug/debugger/internal_debugger_instance_manager.py def provide_internal_debugger(reference: object) -> InternalDebugger:\n \"\"\"Provide an internal debugger.\n\n Args:\n reference (object): the object that needs the internal debugger.\n\n Returns:\n InternalDebugger: the internal debugger.\n \"\"\"\n if reference in internal_debugger_holder.internal_debuggers:\n return internal_debugger_holder.internal_debuggers[reference]\n\n if internal_debugger_holder.global_internal_debugger is None:\n raise RuntimeError(\"No internal debugger available\")\n\n internal_debugger_holder.internal_debuggers[reference] = internal_debugger_holder.global_internal_debugger\n return internal_debugger_holder.global_internal_debugger\n"},{"location":"from_pydoc/generated/debugger/internal_debugger_instance_manager/#libdebug.debugger.internal_debugger_instance_manager.remove_internal_debugger_refs","title":"remove_internal_debugger_refs(internal_debugger)","text":"Remove all references to the passed internal debugger and connected objects.
Parameters:
Name Type Description Defaultinternal_debugger InternalDebugger the internal debugger.
required Source code in libdebug/debugger/internal_debugger_instance_manager.py def remove_internal_debugger_refs(internal_debugger: InternalDebugger) -> None:\n \"\"\"Remove all references to the passed internal debugger and connected objects.\n\n Args:\n internal_debugger (InternalDebugger): the internal debugger.\n \"\"\"\n with internal_debugger_holder.internal_debugger_lock:\n for key in list(internal_debugger_holder.internal_debuggers):\n if internal_debugger_holder.internal_debuggers[key] == internal_debugger:\n del internal_debugger_holder.internal_debuggers[key]\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/","title":"libdebug.interfaces.debugging_interface","text":""},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface","title":"DebuggingInterface","text":" Bases: ABC
The interface used by _InternalDebugger to communicate with the available debugging backends, such as ptrace or gdb.
libdebug/interfaces/debugging_interface.py class DebuggingInterface(ABC):\n \"\"\"The interface used by `_InternalDebugger` to communicate with the available debugging backends, such as `ptrace` or `gdb`.\"\"\"\n\n @abstractmethod\n def __init__(self: DebuggingInterface) -> None:\n \"\"\"Initializes the DebuggingInterface classs.\"\"\"\n\n @abstractmethod\n def reset(self: DebuggingInterface) -> None:\n \"\"\"Resets the state of the interface.\"\"\"\n\n @abstractmethod\n def run(self: DebuggingInterface, redirect_pipes: bool) -> None:\n \"\"\"Runs the specified process.\n\n Args:\n redirect_pipes (bool): Whether to hook and redirect the pipes of the process to a PipeManager.\n \"\"\"\n\n @abstractmethod\n def attach(self: DebuggingInterface, pid: int) -> None:\n \"\"\"Attaches to the specified process.\n\n Args:\n pid (int): the pid of the process to attach to.\n \"\"\"\n\n @abstractmethod\n def detach(self: DebuggingInterface) -> None:\n \"\"\"Detaches from the process.\"\"\"\n\n @abstractmethod\n def kill(self: DebuggingInterface) -> None:\n \"\"\"Instantly terminates the process.\"\"\"\n\n @abstractmethod\n def cont(self: DebuggingInterface) -> None:\n \"\"\"Continues the execution of the process.\"\"\"\n\n @abstractmethod\n def wait(self: DebuggingInterface) -> None:\n \"\"\"Waits for the process to stop.\"\"\"\n\n @abstractmethod\n def migrate_to_gdb(self: DebuggingInterface) -> None:\n \"\"\"Migrates the current process to GDB.\"\"\"\n\n @abstractmethod\n def migrate_from_gdb(self: DebuggingInterface) -> None:\n \"\"\"Migrates the current process from GDB.\"\"\"\n\n @abstractmethod\n def step(self: DebuggingInterface, thread: ThreadContext) -> None:\n \"\"\"Executes a single instruction of the specified thread.\n\n Args:\n thread (ThreadContext): The thread to step.\n \"\"\"\n\n @abstractmethod\n def step_until(self: DebuggingInterface, thread: ThreadContext, address: int, max_steps: int) -> None:\n \"\"\"Executes instructions of the specified thread until 
the specified address is reached.\n\n Args:\n thread (ThreadContext): The thread to step.\n address (int): The address to reach.\n max_steps (int): The maximum number of steps to execute.\n \"\"\"\n\n @abstractmethod\n def finish(self: DebuggingInterface, thread: ThreadContext, heuristic: str) -> None:\n \"\"\"Continues execution until the current function returns or the process stops.\n\n The command requires a heuristic to determine the end of the function. The available heuristics are:\n - `backtrace`: The debugger will place a breakpoint on the saved return address found on the stack and continue execution on all threads.\n - `step-mode`: The debugger will step on the specified thread until the current function returns. This will be slower.\n\n Args:\n thread (ThreadContext): The thread to finish.\n heuristic (str, optional): The heuristic to use. Defaults to \"backtrace\".\n \"\"\"\n\n @abstractmethod\n def next(self: DebuggingInterface, thread: ThreadContext) -> None:\n \"\"\"Executes the next instruction of the process. 
If the instruction is a call, the debugger will continue until the called function returns.\"\"\"\n\n @abstractmethod\n def get_maps(self: DebuggingInterface) -> MemoryMapList[MemoryMap]:\n \"\"\"Returns the memory maps of the process.\"\"\"\n\n @abstractmethod\n def set_breakpoint(self: DebuggingInterface, bp: Breakpoint) -> None:\n \"\"\"Sets a breakpoint at the specified address.\n\n Args:\n bp (Breakpoint): The breakpoint to set.\n \"\"\"\n\n @abstractmethod\n def unset_breakpoint(self: DebuggingInterface, bp: Breakpoint) -> None:\n \"\"\"Restores the original instruction flow at the specified address.\n\n Args:\n bp (Breakpoint): The breakpoint to restore.\n \"\"\"\n\n @abstractmethod\n def set_syscall_handler(self: DebuggingInterface, handler: SyscallHandler) -> None:\n \"\"\"Sets a handler for a syscall.\n\n Args:\n handler (HandledSyscall): The syscall to set.\n \"\"\"\n\n @abstractmethod\n def unset_syscall_handler(self: DebuggingInterface, handler: SyscallHandler) -> None:\n \"\"\"Unsets a handler for a syscall.\n\n Args:\n handler (HandledSyscall): The syscall to unset.\n \"\"\"\n\n @abstractmethod\n def set_signal_catcher(self: DebuggingInterface, catcher: SignalCatcher) -> None:\n \"\"\"Sets a catcher for a signal.\n\n Args:\n catcher (CaughtSignal): The signal to set.\n \"\"\"\n\n @abstractmethod\n def unset_signal_catcher(self: DebuggingInterface, catcher: SignalCatcher) -> None:\n \"\"\"Unset a catcher for a signal.\n\n Args:\n catcher (CaughtSignal): The signal to unset.\n \"\"\"\n\n @abstractmethod\n def peek_memory(self: DebuggingInterface, address: int) -> int:\n \"\"\"Reads the memory at the specified address.\n\n Args:\n address (int): The address to read.\n\n Returns:\n int: The read memory value.\n \"\"\"\n\n @abstractmethod\n def poke_memory(self: DebuggingInterface, address: int, data: int) -> None:\n \"\"\"Writes the memory at the specified address.\n\n Args:\n address (int): The address to write.\n data (int): The value to write.\n 
\"\"\"\n\n @abstractmethod\ndef fetch_fp_registers(self: DebuggingInterface, registers: Registers) -> None:\n \"\"\"Fetches the floating-point registers of the specified thread.\n\n Args:\n registers (Registers): The registers instance to update.\n \"\"\"\n\n @abstractmethod\ndef flush_fp_registers(self: DebuggingInterface, registers: Registers) -> None:\n \"\"\"Flushes the floating-point registers of the specified thread.\n\n Args:\n registers (Registers): The registers instance to flush.\n \"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.__init__","title":"__init__() abstractmethod","text":"Initializes the DebuggingInterface class.
Source code in libdebug/interfaces/debugging_interface.py @abstractmethod\ndef __init__(self: DebuggingInterface) -> None:\n \"\"\"Initializes the DebuggingInterface class.\"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.attach","title":"attach(pid) abstractmethod","text":"Attaches to the specified process.
Parameters:
Name Type Description Defaultpid int the pid of the process to attach to.
required Source code in libdebug/interfaces/debugging_interface.py @abstractmethod\ndef attach(self: DebuggingInterface, pid: int) -> None:\n \"\"\"Attaches to the specified process.\n\n Args:\n pid (int): the pid of the process to attach to.\n \"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.cont","title":"cont() abstractmethod","text":"Continues the execution of the process.
Source code in libdebug/interfaces/debugging_interface.py @abstractmethod\ndef cont(self: DebuggingInterface) -> None:\n \"\"\"Continues the execution of the process.\"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.detach","title":"detach() abstractmethod","text":"Detaches from the process.
Source code in libdebug/interfaces/debugging_interface.py @abstractmethod\ndef detach(self: DebuggingInterface) -> None:\n \"\"\"Detaches from the process.\"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.fetch_fp_registers","title":"fetch_fp_registers(registers) abstractmethod","text":"Fetches the floating-point registers of the specified thread.
Parameters:
Name Type Description Defaultregisters Registers The registers instance to update.
required Source code in libdebug/interfaces/debugging_interface.py @abstractmethod\ndef fetch_fp_registers(self: DebuggingInterface, registers: Registers) -> None:\n \"\"\"Fetches the floating-point registers of the specified thread.\n\n Args:\n registers (Registers): The registers instance to update.\n \"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.finish","title":"finish(thread, heuristic) abstractmethod","text":"Continues execution until the current function returns or the process stops.
The command requires a heuristic to determine the end of the function. The available heuristics are: - backtrace: The debugger will place a breakpoint on the saved return address found on the stack and continue execution on all threads. - step-mode: The debugger will step on the specified thread until the current function returns. This will be slower.
Parameters:
Name Type Description Defaultthread ThreadContext The thread to finish.
requiredheuristic str The heuristic to use. Defaults to \"backtrace\".
required Source code inlibdebug/interfaces/debugging_interface.py @abstractmethod\ndef finish(self: DebuggingInterface, thread: ThreadContext, heuristic: str) -> None:\n \"\"\"Continues execution until the current function returns or the process stops.\n\n The command requires a heuristic to determine the end of the function. The available heuristics are:\n - `backtrace`: The debugger will place a breakpoint on the saved return address found on the stack and continue execution on all threads.\n - `step-mode`: The debugger will step on the specified thread until the current function returns. This will be slower.\n\n Args:\n thread (ThreadContext): The thread to finish.\n heuristic (str, optional): The heuristic to use. Defaults to \"backtrace\".\n \"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.flush_fp_registers","title":"flush_fp_registers(registers) abstractmethod","text":"Flushes the floating-point registers of the specified thread.
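The `step-mode` heuristic for `finish` can be modeled in a few lines: keep single-stepping while the call depth is at or above where we started, and stop as soon as it drops, meaning the function has returned. The mock thread below is purely illustrative; `finish_step_mode`, `depth_trace`, and the helpers are made-up names:

```python
# Tiny model of finish()'s "step-mode" heuristic on a mock thread.
def finish_step_mode(step, current_depth) -> None:
    start = current_depth()
    while current_depth() >= start:   # still inside the function or a callee
        step()

depth_trace = [2, 3, 3, 2, 1]         # call depth after each step; 1 = returned
state = {"i": -1}

def mock_step() -> None:
    state["i"] += 1                   # advance one "instruction"

def mock_depth() -> int:
    return 2 if state["i"] < 0 else depth_trace[state["i"]]
```

The `backtrace` heuristic avoids this per-instruction loop entirely by planting one breakpoint at the saved return address, which is why it is the faster default.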
Parameters:
Name Type Description Defaultregisters Registers The registers instance to flush.
required Source code in libdebug/interfaces/debugging_interface.py @abstractmethod\ndef flush_fp_registers(self: DebuggingInterface, registers: Registers) -> None:\n \"\"\"Flushes the floating-point registers of the specified thread.\n\n Args:\n registers (Registers): The registers instance to flush.\n \"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.get_maps","title":"get_maps() abstractmethod","text":"Returns the memory maps of the process.
Source code in libdebug/interfaces/debugging_interface.py @abstractmethod\ndef get_maps(self: DebuggingInterface) -> MemoryMapList[MemoryMap]:\n \"\"\"Returns the memory maps of the process.\"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.kill","title":"kill() abstractmethod","text":"Instantly terminates the process.
Source code inlibdebug/interfaces/debugging_interface.py @abstractmethod\ndef kill(self: DebuggingInterface) -> None:\n \"\"\"Instantly terminates the process.\"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.migrate_from_gdb","title":"migrate_from_gdb() abstractmethod","text":"Migrates the current process from GDB.
Source code inlibdebug/interfaces/debugging_interface.py @abstractmethod\ndef migrate_from_gdb(self: DebuggingInterface) -> None:\n \"\"\"Migrates the current process from GDB.\"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.migrate_to_gdb","title":"migrate_to_gdb() abstractmethod","text":"Migrates the current process to GDB.
Source code inlibdebug/interfaces/debugging_interface.py @abstractmethod\ndef migrate_to_gdb(self: DebuggingInterface) -> None:\n \"\"\"Migrates the current process to GDB.\"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.next","title":"next(thread) abstractmethod","text":"Executes the next instruction of the process. If the instruction is a call, the debugger will continue until the called function returns.
Source code inlibdebug/interfaces/debugging_interface.py @abstractmethod\ndef next(self: DebuggingInterface, thread: ThreadContext) -> None:\n \"\"\"Executes the next instruction of the process. If the instruction is a call, the debugger will continue until the called function returns.\"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.peek_memory","title":"peek_memory(address) abstractmethod","text":"Reads the memory at the specified address.
Parameters:
Name Type Description Default
address int The address to read.
required
Returns:
Name Type Description
int int The read memory value.
Source code inlibdebug/interfaces/debugging_interface.py @abstractmethod\ndef peek_memory(self: DebuggingInterface, address: int) -> int:\n \"\"\"Reads the memory at the specified address.\n\n Args:\n address (int): The address to read.\n\n Returns:\n int: The read memory value.\n \"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.poke_memory","title":"poke_memory(address, data) abstractmethod","text":"Writes the memory at the specified address.
Parameters:
Name Type Description Default
address int The address to write.
required
data int The value to write.
required Source code inlibdebug/interfaces/debugging_interface.py @abstractmethod\ndef poke_memory(self: DebuggingInterface, address: int, data: int) -> None:\n \"\"\"Writes the memory at the specified address.\n\n Args:\n address (int): The address to write.\n data (int): The value to write.\n \"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.reset","title":"reset() abstractmethod","text":"Resets the state of the interface.
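`peek_memory` and `poke_memory` above operate on a single value at one address at a time. A minimal sketch (assuming a little-endian target with an 8-byte word, and a plain dict standing in for process memory; `write_bytes` is a hypothetical helper, not part of the interface) of how arbitrary-length writes can be composed from such word-sized primitives via read-modify-write:

```python
WORD = 8  # general-purpose register size on a 64-bit target (assumption)

memory = {}  # word-aligned address -> int, stands in for the target process

def peek_memory(address):
    return memory.get(address, 0)

def poke_memory(address, value):
    memory[address] = value

def write_bytes(address, data):
    """Write arbitrary-length bytes using only word-sized peek/poke."""
    for off in range(0, len(data), WORD):
        chunk = data[off:off + WORD]
        addr = address + off
        if len(chunk) < WORD:
            # Read-modify-write: preserve the bytes past the end of `data`.
            old = peek_memory(addr).to_bytes(WORD, "little")
            chunk = chunk + old[len(chunk):]
        poke_memory(addr, int.from_bytes(chunk, "little"))

poke_memory(0x1000 + 8, int.from_bytes(b"AAAAAAAA", "little"))
write_bytes(0x1000, b"hello world")  # 11 bytes: one full word + 3 bytes
```

The read-modify-write on the trailing partial word is what keeps a byte-granular write from clobbering adjacent memory.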
Source code inlibdebug/interfaces/debugging_interface.py @abstractmethod\ndef reset(self: DebuggingInterface) -> None:\n \"\"\"Resets the state of the interface.\"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.run","title":"run(redirect_pipes) abstractmethod","text":"Runs the specified process.
Parameters:
Name Type Description Default
redirect_pipes bool Whether to hook and redirect the pipes of the process to a PipeManager.
required Source code inlibdebug/interfaces/debugging_interface.py @abstractmethod\ndef run(self: DebuggingInterface, redirect_pipes: bool) -> None:\n \"\"\"Runs the specified process.\n\n Args:\n redirect_pipes (bool): Whether to hook and redirect the pipes of the process to a PipeManager.\n \"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.set_breakpoint","title":"set_breakpoint(bp) abstractmethod","text":"Sets a breakpoint at the specified address.
Parameters:
Name Type Description Default
bp Breakpoint The breakpoint to set.
required Source code inlibdebug/interfaces/debugging_interface.py @abstractmethod\ndef set_breakpoint(self: DebuggingInterface, bp: Breakpoint) -> None:\n \"\"\"Sets a breakpoint at the specified address.\n\n Args:\n bp (Breakpoint): The breakpoint to set.\n \"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.set_signal_catcher","title":"set_signal_catcher(catcher) abstractmethod","text":"Sets a catcher for a signal.
Parameters:
Name Type Description Default
catcher CaughtSignal The signal to set.
required Source code inlibdebug/interfaces/debugging_interface.py @abstractmethod\ndef set_signal_catcher(self: DebuggingInterface, catcher: SignalCatcher) -> None:\n \"\"\"Sets a catcher for a signal.\n\n Args:\n catcher (CaughtSignal): The signal to set.\n \"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.set_syscall_handler","title":"set_syscall_handler(handler) abstractmethod","text":"Sets a handler for a syscall.
Parameters:
Name Type Description Default
handler HandledSyscall The syscall to set.
required Source code inlibdebug/interfaces/debugging_interface.py @abstractmethod\ndef set_syscall_handler(self: DebuggingInterface, handler: SyscallHandler) -> None:\n \"\"\"Sets a handler for a syscall.\n\n Args:\n handler (HandledSyscall): The syscall to set.\n \"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.step","title":"step(thread) abstractmethod","text":"Executes a single instruction of the specified thread.
Parameters:
Name Type Description Default
thread ThreadContext The thread to step.
required Source code inlibdebug/interfaces/debugging_interface.py @abstractmethod\ndef step(self: DebuggingInterface, thread: ThreadContext) -> None:\n \"\"\"Executes a single instruction of the specified thread.\n\n Args:\n thread (ThreadContext): The thread to step.\n \"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.step_until","title":"step_until(thread, address, max_steps) abstractmethod","text":"Executes instructions of the specified thread until the specified address is reached.
Parameters:
Name Type Description Default
thread ThreadContext The thread to step.
required
address int The address to reach.
required
max_steps int The maximum number of steps to execute.
required Source code inlibdebug/interfaces/debugging_interface.py @abstractmethod\ndef step_until(self: DebuggingInterface, thread: ThreadContext, address: int, max_steps: int) -> None:\n \"\"\"Executes instructions of the specified thread until the specified address is reached.\n\n Args:\n thread (ThreadContext): The thread to step.\n address (int): The address to reach.\n max_steps (int): The maximum number of steps to execute.\n \"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.unset_breakpoint","title":"unset_breakpoint(bp) abstractmethod","text":"Restores the original instruction flow at the specified address.
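`step_until` above is conceptually a loop over the single-instruction `step` primitive, with `max_steps` as a bound. A sketch of that loop (the `step`/`get_pc` callables and the `-1 == unlimited` convention are assumptions of this sketch, not the interface's contract):

```python
def step_until(step, get_pc, address, max_steps=-1):
    """Step until `address` is reached or `max_steps` is exhausted.

    step():    executes one instruction of the thread
    get_pc():  returns the current program counter
    max_steps: -1 means no limit (assumption of this sketch)
    Returns True if the address was reached.
    """
    steps = 0
    while max_steps < 0 or steps < max_steps:
        step()
        steps += 1
        if get_pc() == address:
            return True
    return False

# Toy thread whose pc advances by a fixed 4-byte instruction width per step.
state = {"pc": 0x400000}
reached = step_until(lambda: state.update(pc=state["pc"] + 4),
                     lambda: state["pc"], 0x400010)
print(reached, hex(state["pc"]))
```

Checking the program counter after each step, rather than before, is what makes a `step_until` targeting the current address actually execute at least one instruction.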
Parameters:
Name Type Description Default
bp Breakpoint The breakpoint to restore.
required Source code inlibdebug/interfaces/debugging_interface.py @abstractmethod\ndef unset_breakpoint(self: DebuggingInterface, bp: Breakpoint) -> None:\n \"\"\"Restores the original instruction flow at the specified address.\n\n Args:\n bp (Breakpoint): The breakpoint to restore.\n \"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.unset_signal_catcher","title":"unset_signal_catcher(catcher) abstractmethod","text":"Unset a catcher for a signal.
Parameters:
Name Type Description Default
catcher CaughtSignal The signal to unset.
required Source code inlibdebug/interfaces/debugging_interface.py @abstractmethod\ndef unset_signal_catcher(self: DebuggingInterface, catcher: SignalCatcher) -> None:\n \"\"\"Unset a catcher for a signal.\n\n Args:\n catcher (CaughtSignal): The signal to unset.\n \"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.unset_syscall_handler","title":"unset_syscall_handler(handler) abstractmethod","text":"Unsets a handler for a syscall.
Parameters:
Name Type Description Default
handler HandledSyscall The syscall to unset.
required Source code inlibdebug/interfaces/debugging_interface.py @abstractmethod\ndef unset_syscall_handler(self: DebuggingInterface, handler: SyscallHandler) -> None:\n \"\"\"Unsets a handler for a syscall.\n\n Args:\n handler (HandledSyscall): The syscall to unset.\n \"\"\"\n"},{"location":"from_pydoc/generated/interfaces/debugging_interface/#libdebug.interfaces.debugging_interface.DebuggingInterface.wait","title":"wait() abstractmethod","text":"Waits for the process to stop.
Source code inlibdebug/interfaces/debugging_interface.py @abstractmethod\ndef wait(self: DebuggingInterface) -> None:\n \"\"\"Waits for the process to stop.\"\"\"\n"},{"location":"from_pydoc/generated/interfaces/interface_helper/","title":"libdebug.interfaces.interface_helper","text":""},{"location":"from_pydoc/generated/interfaces/interface_helper/#libdebug.interfaces.interface_helper.provide_debugging_interface","title":"provide_debugging_interface(interface=AvailableInterfaces.PTRACE)","text":"Returns an instance of the debugging interface to be used by the _InternalDebugger class.
libdebug/interfaces/interface_helper.py def provide_debugging_interface(\n interface: AvailableInterfaces = AvailableInterfaces.PTRACE,\n) -> DebuggingInterface:\n \"\"\"Returns an instance of the debugging interface to be used by the `_InternalDebugger` class.\"\"\"\n match interface:\n case AvailableInterfaces.PTRACE:\n return PtraceInterface()\n case _:\n raise NotImplementedError(f\"Interface {interface} not available.\")\n"},{"location":"from_pydoc/generated/interfaces/interfaces/","title":"libdebug.interfaces.interfaces","text":""},{"location":"from_pydoc/generated/interfaces/interfaces/#libdebug.interfaces.interfaces.AvailableInterfaces","title":"AvailableInterfaces","text":" Bases: Enum
An enumeration of the available backend interfaces.
Source code inlibdebug/interfaces/interfaces.py class AvailableInterfaces(Enum):\n \"\"\"An enumeration of the available backend interfaces.\"\"\"\n\n PTRACE = 1\n"},{"location":"from_pydoc/generated/memory/abstract_memory_view/","title":"libdebug.memory.abstract_memory_view","text":""},{"location":"from_pydoc/generated/memory/abstract_memory_view/#libdebug.memory.abstract_memory_view.AbstractMemoryView","title":"AbstractMemoryView","text":" Bases: MutableSequence, ABC
An abstract memory interface for the target process.
An implementation of this class must be used to read and write the memory of the target process.
Source code inlibdebug/memory/abstract_memory_view.py class AbstractMemoryView(MutableSequence, ABC):\n \"\"\"An abstract memory interface for the target process.\n\n An implementation of class must be used to read and write memory of the target process.\n \"\"\"\n\n def __init__(self: AbstractMemoryView) -> None:\n \"\"\"Initializes the MemoryView.\"\"\"\n self._internal_debugger = provide_internal_debugger(self)\n\n @abstractmethod\n def read(self: AbstractMemoryView, address: int, size: int) -> bytes:\n \"\"\"Reads memory from the target process.\n\n Args:\n address (int): The address to read from.\n size (int): The number of bytes to read.\n\n Returns:\n bytes: The read bytes.\n \"\"\"\n\n @abstractmethod\n def write(self: AbstractMemoryView, address: int, data: bytes) -> None:\n \"\"\"Writes memory to the target process.\n\n Args:\n address (int): The address to write to.\n data (bytes): The data to write.\n \"\"\"\n\n def find(\n self: AbstractMemoryView,\n value: bytes | str | int,\n file: str = \"all\",\n start: int | None = None,\n end: int | None = None,\n ) -> list[int]:\n \"\"\"Searches for the given value in the specified memory maps of the process.\n\n The start and end addresses can be used to limit the search to a specific range.\n If not specified, the search will be performed on the whole memory map.\n\n Args:\n value (bytes | str | int): The value to search for.\n file (str): The backing file to search the value in. Defaults to \"all\", which means all memory.\n start (int | None): The start address of the search. Defaults to None.\n end (int | None): The end address of the search. 
Defaults to None.\n\n Returns:\n list[int]: A list of offset where the value was found.\n \"\"\"\n if isinstance(value, str):\n value = value.encode()\n elif isinstance(value, int):\n value = value.to_bytes(1, sys.byteorder)\n\n occurrences = []\n if file == \"all\" and start is None and end is None:\n for vmap in self.maps:\n liblog.debugger(f\"Searching in {vmap.backing_file}...\")\n try:\n memory_content = self.read(vmap.start, vmap.end - vmap.start)\n except (OSError, OverflowError, ValueError):\n # There are some memory regions that cannot be read, such as [vvar], [vdso], etc.\n continue\n occurrences += find_all_overlapping_occurrences(value, memory_content, vmap.start)\n elif file == \"all\" and start is not None and end is None:\n for vmap in self.maps:\n if vmap.end > start:\n liblog.debugger(f\"Searching in {vmap.backing_file}...\")\n read_start = max(vmap.start, start)\n try:\n memory_content = self.read(read_start, vmap.end - read_start)\n except (OSError, OverflowError, ValueError):\n # There are some memory regions that cannot be read, such as [vvar], [vdso], etc.\n continue\n occurrences += find_all_overlapping_occurrences(value, memory_content, read_start)\n elif file == \"all\" and start is None and end is not None:\n for vmap in self.maps:\n if vmap.start < end:\n liblog.debugger(f\"Searching in {vmap.backing_file}...\")\n read_end = min(vmap.end, end)\n try:\n memory_content = self.read(vmap.start, read_end - vmap.start)\n except (OSError, OverflowError, ValueError):\n # There are some memory regions that cannot be read, such as [vvar], [vdso], etc.\n continue\n occurrences += find_all_overlapping_occurrences(value, memory_content, vmap.start)\n elif file == \"all\" and start is not None and end is not None:\n # Search in the specified range, hybrid mode\n start = self.resolve_address(start, \"hybrid\", True)\n end = self.resolve_address(end, \"hybrid\", True)\n liblog.debugger(f\"Searching in the range {start:#x}-{end:#x}...\")\n memory_content 
= self.read(start, end - start)\n occurrences = find_all_overlapping_occurrences(value, memory_content, start)\n else:\n maps = self.maps.filter(file)\n start = self.resolve_address(start, file, True) if start is not None else maps[0].start\n end = self.resolve_address(end, file, True) if end is not None else maps[-1].end - 1\n\n liblog.debugger(f\"Searching in the range {start:#x}-{end:#x}...\")\n memory_content = self.read(start, end - start)\n\n occurrences = find_all_overlapping_occurrences(value, memory_content, start)\n\n return occurrences\n\n def find_pointers(\n self: AbstractMemoryView,\n where: int | str = \"*\",\n target: int | str = \"*\",\n step: int = 1,\n ) -> list[tuple[int, int]]:\n \"\"\"\n Find all pointers in the specified memory map that point to the target memory map.\n\n If the where parameter or the target parameter is a string, it is treated as a backing file. If it is an integer, the memory map containing the address will be used.\n\n If \"*\", \"ALL\", \"all\" or -1 is passed, all memory maps will be considered.\n\n Args:\n where (int | str): Identifier of the memory map where we want to search for references. Defaults to \"*\", which means all memory maps.\n target (int | str): Identifier of the memory map whose pointers we want to find. Defaults to \"*\", which means all memory maps.\n step (int): The interval step size while iterating over the memory buffer. 
Defaults to 1.\n\n Returns:\n list[tuple[int, int]]: A list of tuples containing the address where the pointer was found and the pointer itself.\n \"\"\"\n # Filter memory maps that match the target\n if target in {\"*\", \"ALL\", \"all\", -1}:\n target_maps = self._internal_debugger.maps\n else:\n target_maps = self._internal_debugger.maps.filter(target)\n\n if not target_maps:\n raise ValueError(\"No memory map found for the specified target.\")\n\n target_backing_files = {vmap.backing_file for vmap in target_maps}\n\n # Filter memory maps that match the where parameter\n if where in {\"*\", \"ALL\", \"all\", -1}:\n where_maps = self._internal_debugger.maps\n else:\n where_maps = self._internal_debugger.maps.filter(where)\n\n if not where_maps:\n raise ValueError(\"No memory map found for the specified where parameter.\")\n\n where_backing_files = {vmap.backing_file for vmap in where_maps}\n\n if len(where_backing_files) == 1 and len(target_backing_files) == 1:\n return self.__internal_find_pointers(where_maps, target_maps, step)\n elif len(where_backing_files) == 1:\n found_pointers = []\n for target_backing_file in target_backing_files:\n found_pointers += self.__internal_find_pointers(\n where_maps,\n self._internal_debugger.maps.filter(target_backing_file),\n step,\n )\n return found_pointers\n elif len(target_backing_files) == 1:\n found_pointers = []\n for where_backing_file in where_backing_files:\n found_pointers += self.__internal_find_pointers(\n self._internal_debugger.maps.filter(where_backing_file),\n target_maps,\n step,\n )\n return found_pointers\n else:\n found_pointers = []\n for where_backing_file in where_backing_files:\n for target_backing_file in target_backing_files:\n found_pointers += self.__internal_find_pointers(\n self._internal_debugger.maps.filter(where_backing_file),\n self._internal_debugger.maps.filter(target_backing_file),\n step,\n )\n\n return found_pointers\n\n def __internal_find_pointers(\n self: AbstractMemoryView,\n 
where_maps: list[MemoryMap],\n target_maps: list[MemoryMap],\n stride: int,\n ) -> list[tuple[int, int]]:\n \"\"\"Find all pointers to a specific memory map within another memory map. Internal implementation.\n\n Args:\n where_maps (list[MemoryMap]): The memory maps where to search for pointers.\n target_maps (list[MemoryMap]): The memory maps for which to search for pointers.\n stride (int): The interval step size while iterating over the memory buffer.\n\n Returns:\n list[tuple[int, int]]: A list of tuples containing the address where the pointer was found and the pointer itself.\n \"\"\"\n found_pointers = []\n\n # Obtain the start/end of the target memory segment\n target_start_address = target_maps[0].start\n target_end_address = target_maps[-1].end\n\n # Obtain the start/end of the where memory segment\n where_start_address = where_maps[0].start\n where_end_address = where_maps[-1].end\n\n # Read the memory from the where memory segment\n if not self._internal_debugger.fast_memory:\n liblog.warning(\n \"Fast memory reading is disabled. 
Using find_pointers with fast_memory=False may be very slow.\",\n )\n try:\n where_memory_buffer = self.read(where_start_address, where_end_address - where_start_address)\n except (OSError, OverflowError):\n liblog.error(f\"Cannot read the target memory segment with backing file: {where_maps[0].backing_file}.\")\n return found_pointers\n\n # Get the size of a pointer in the target process\n pointer_size = get_platform_gp_register_size(self._internal_debugger.arch)\n\n # Get the byteorder of the target machine (endianness)\n byteorder = sys.byteorder\n\n # Search for references in the where memory segment\n append = found_pointers.append\n for i in range(0, len(where_memory_buffer), stride):\n reference = where_memory_buffer[i : i + pointer_size]\n reference = int.from_bytes(reference, byteorder=byteorder)\n if target_start_address <= reference < target_end_address:\n append((where_start_address + i, reference))\n\n return found_pointers\n\n def __getitem__(self: AbstractMemoryView, key: int | slice | str | tuple) -> bytes:\n \"\"\"Read from memory, either a single byte or a byte string.\n\n Args:\n key (int | slice | str | tuple): The key to read from memory.\n \"\"\"\n return self._manage_memory_read_type(key)\n\n def __setitem__(self: AbstractMemoryView, key: int | slice | str | tuple, value: bytes) -> None:\n \"\"\"Write to memory, either a single byte or a byte string.\n\n Args:\n key (int | slice | str | tuple): The key to write to memory.\n value (bytes): The value to write.\n \"\"\"\n if not isinstance(value, bytes):\n raise TypeError(\"Invalid type for the value to write to memory. 
Expected bytes.\")\n self._manage_memory_write_type(key, value)\n\n def _manage_memory_read_type(\n self: AbstractMemoryView,\n key: int | slice | str | tuple,\n file: str = \"hybrid\",\n ) -> bytes:\n \"\"\"Manage the read from memory, according to the typing.\n\n Args:\n key (int | slice | str | tuple): The key to read from memory.\n file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).\n \"\"\"\n if isinstance(key, int):\n address = self.resolve_address(key, file, skip_absolute_address_validation=True)\n try:\n return self.read(address, 1)\n except OSError as e:\n raise ValueError(\"Invalid address.\") from e\n elif isinstance(key, slice):\n if isinstance(key.start, str):\n start = self.resolve_symbol(key.start, file)\n else:\n start = self.resolve_address(key.start, file, skip_absolute_address_validation=True)\n\n if isinstance(key.stop, str):\n stop = self.resolve_symbol(key.stop, file)\n else:\n stop = self.resolve_address(key.stop, file, skip_absolute_address_validation=True)\n\n if stop < start:\n raise ValueError(\"Invalid slice range.\")\n\n try:\n return self.read(start, stop - start)\n except OSError as e:\n raise ValueError(\"Invalid address.\") from e\n elif isinstance(key, str):\n address = self.resolve_symbol(key, file)\n\n return self.read(address, 1)\n elif isinstance(key, tuple):\n return self._manage_memory_read_tuple(key)\n else:\n raise TypeError(\"Invalid key type.\")\n\n def _manage_memory_read_tuple(self: AbstractMemoryView, key: tuple) -> bytes:\n \"\"\"Manage the read from memory, when the access is through a tuple.\n\n Args:\n key (tuple): The key to read from memory.\n \"\"\"\n if len(key) == 3:\n # It can only be a tuple of the type (address, size, file)\n address, size, file = key\n if not isinstance(file, str):\n raise TypeError(\"Invalid type for the backing 
file. Expected string.\")\n elif len(key) == 2:\n left, right = key\n if isinstance(right, str):\n # The right element can only be the backing file\n return self._manage_memory_read_type(left, right)\n elif isinstance(right, int):\n # The right element must be the size\n address = left\n size = right\n file = \"hybrid\"\n else:\n raise TypeError(\"Tuple must have 2 or 3 elements.\")\n\n if not isinstance(size, int):\n raise TypeError(\"Invalid type for the size. Expected int.\")\n\n if isinstance(address, str):\n address = self.resolve_symbol(address, file)\n elif isinstance(address, int):\n address = self.resolve_address(address, file, skip_absolute_address_validation=True)\n else:\n raise TypeError(\"Invalid type for the address. Expected int or string.\")\n\n try:\n return self.read(address, size)\n except OSError as e:\n raise ValueError(\"Invalid address.\") from e\n\n def _manage_memory_write_type(\n self: AbstractMemoryView,\n key: int | slice | str | tuple,\n value: bytes,\n file: str = \"hybrid\",\n ) -> None:\n \"\"\"Manage the write to memory, according to the typing.\n\n Args:\n key (int | slice | str | tuple): The key to read from memory.\n value (bytes): The value to write.\n file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. 
the \"binary\" map file).\n \"\"\"\n if isinstance(key, int):\n address = self.resolve_address(key, file, skip_absolute_address_validation=True)\n try:\n self.write(address, value)\n except OSError as e:\n raise ValueError(\"Invalid address.\") from e\n elif isinstance(key, slice):\n if isinstance(key.start, str):\n start = self.resolve_symbol(key.start, file)\n else:\n start = self.resolve_address(key.start, file, skip_absolute_address_validation=True)\n\n if key.stop is not None:\n if isinstance(key.stop, str):\n stop = self.resolve_symbol(key.stop, file)\n else:\n stop = self.resolve_address(\n key.stop,\n file,\n skip_absolute_address_validation=True,\n )\n\n if stop < start:\n raise ValueError(\"Invalid slice range\")\n\n if len(value) != stop - start:\n liblog.warning(f\"Mismatch between slice width and value size, writing {len(value)} bytes.\")\n\n try:\n self.write(start, value)\n except OSError as e:\n raise ValueError(\"Invalid address.\") from e\n\n elif isinstance(key, str):\n address = self.resolve_symbol(key, file)\n\n self.write(address, value)\n elif isinstance(key, tuple):\n self._manage_memory_write_tuple(key, value)\n else:\n raise TypeError(\"Invalid key type.\")\n\n def _manage_memory_write_tuple(self: AbstractMemoryView, key: tuple, value: bytes) -> None:\n \"\"\"Manage the write to memory, when the access is through a tuple.\n\n Args:\n key (tuple): The key to read from memory.\n value (bytes): The value to write.\n \"\"\"\n if len(key) == 3:\n # It can only be a tuple of the type (address, size, file)\n address, size, file = key\n if not isinstance(file, str):\n raise TypeError(\"Invalid type for the backing file. 
Expected string.\")\n elif len(key) == 2:\n left, right = key\n if isinstance(right, str):\n # The right element can only be the backing file\n self._manage_memory_write_type(left, value, right)\n return\n elif isinstance(right, int):\n # The right element must be the size\n address = left\n size = right\n file = \"hybrid\"\n else:\n raise TypeError(\"Tuple must have 2 or 3 elements.\")\n\n if not isinstance(size, int):\n raise TypeError(\"Invalid type for the size. Expected int.\")\n\n if isinstance(address, str):\n address = self.resolve_symbol(address, file)\n elif isinstance(address, int):\n address = self.resolve_address(address, file, skip_absolute_address_validation=True)\n else:\n raise TypeError(\"Invalid type for the address. Expected int or string.\")\n\n if len(value) != size:\n liblog.warning(f\"Mismatch between specified size and actual value size, writing {len(value)} bytes.\")\n\n try:\n self.write(address, value)\n except OSError as e:\n raise ValueError(\"Invalid address.\") from e\n\n def __delitem__(self: AbstractMemoryView, key: int | slice | str | tuple) -> None:\n \"\"\"MemoryView doesn't support deletion.\"\"\"\n raise NotImplementedError(\"MemoryView doesn't support deletion\")\n\n def __len__(self: AbstractMemoryView) -> None:\n \"\"\"MemoryView doesn't support length.\"\"\"\n raise NotImplementedError(\"MemoryView doesn't support length\")\n\n def insert(self: AbstractMemoryView, index: int, value: int) -> None:\n \"\"\"MemoryView doesn't support insertion.\"\"\"\n raise NotImplementedError(\"MemoryView doesn't support insertion\")\n\n @property\n def maps(self: AbstractMemoryView) -> list:\n \"\"\"Returns the list of memory maps of the target process.\"\"\"\n raise NotImplementedError(\"The maps property must be implemented in the subclass.\")\n\n def resolve_address(\n self: AbstractMemoryView,\n address: int,\n backing_file: str,\n skip_absolute_address_validation: bool = False,\n ) -> int:\n \"\"\"Normalizes and validates the 
specified address.\n\n Args:\n address (int): The address to normalize and validate.\n backing_file (str): The backing file to resolve the address in.\n skip_absolute_address_validation (bool, optional): Whether to skip bounds checking for absolute addresses. Defaults to False.\n\n Returns:\n int: The normalized and validated address.\n\n Raises:\n ValueError: If the substring `backing_file` is present in multiple backing files.\n \"\"\"\n return self._internal_debugger.resolve_address(\n address, backing_file, skip_absolute_address_validation,\n )\n\n def resolve_symbol(self: AbstractMemoryView, symbol: str, backing_file: str) -> int:\n \"\"\"Resolves the address of the specified symbol.\n\n Args:\n symbol (str): The symbol to resolve.\n backing_file (str): The backing file to resolve the symbol in.\n\n Returns:\n int: The address of the symbol.\n \"\"\"\n return self._internal_debugger.resolve_symbol(symbol, backing_file)\n"},{"location":"from_pydoc/generated/memory/abstract_memory_view/#libdebug.memory.abstract_memory_view.AbstractMemoryView.maps","title":"maps property","text":"Returns the list of memory maps of the target process.
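The `find` method in the class source above collects all overlapping occurrences of a value within each readable map, via a `find_all_overlapping_occurrences` helper. A sketch of such a search (the real helper may differ in implementation detail):

```python
def find_all_overlapping_occurrences(value, buffer, base=0):
    """Return every (possibly overlapping) offset of `value` in `buffer`,
    rebased onto `base` -- a sketch of the helper used by `find` above.
    """
    occurrences = []
    start = buffer.find(value)
    while start != -1:
        occurrences.append(base + start)
        # Resume one byte past the previous hit, so overlapping
        # occurrences (e.g. "aa" in "aaaa") are all reported.
        start = buffer.find(value, start + 1)
    return occurrences

print(find_all_overlapping_occurrences(b"aa", b"aaaa", base=0x1000))
```

Resuming the scan at `start + 1` instead of `start + len(value)` is what distinguishes an overlapping search from the naive non-overlapping one.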
"},{"location":"from_pydoc/generated/memory/abstract_memory_view/#libdebug.memory.abstract_memory_view.AbstractMemoryView.__delitem__","title":"__delitem__(key)","text":"MemoryView doesn't support deletion.
Source code inlibdebug/memory/abstract_memory_view.py def __delitem__(self: AbstractMemoryView, key: int | slice | str | tuple) -> None:\n \"\"\"MemoryView doesn't support deletion.\"\"\"\n raise NotImplementedError(\"MemoryView doesn't support deletion\")\n"},{"location":"from_pydoc/generated/memory/abstract_memory_view/#libdebug.memory.abstract_memory_view.AbstractMemoryView.__getitem__","title":"__getitem__(key)","text":"Read from memory, either a single byte or a byte string.
Parameters:
Name Type Description Default
key int | slice | str | tuple The key to read from memory.
required Source code inlibdebug/memory/abstract_memory_view.py def __getitem__(self: AbstractMemoryView, key: int | slice | str | tuple) -> bytes:\n \"\"\"Read from memory, either a single byte or a byte string.\n\n Args:\n key (int | slice | str | tuple): The key to read from memory.\n \"\"\"\n return self._manage_memory_read_type(key)\n"},{"location":"from_pydoc/generated/memory/abstract_memory_view/#libdebug.memory.abstract_memory_view.AbstractMemoryView.__init__","title":"__init__()","text":"Initializes the MemoryView.
Source code inlibdebug/memory/abstract_memory_view.py def __init__(self: AbstractMemoryView) -> None:\n \"\"\"Initializes the MemoryView.\"\"\"\n self._internal_debugger = provide_internal_debugger(self)\n"},{"location":"from_pydoc/generated/memory/abstract_memory_view/#libdebug.memory.abstract_memory_view.AbstractMemoryView.__internal_find_pointers","title":"__internal_find_pointers(where_maps, target_maps, stride)","text":"Find all pointers to a specific memory map within another memory map. Internal implementation.
Parameters:
Name Type Description Default
where_maps list[MemoryMap] The memory maps where to search for pointers.
required
target_maps list[MemoryMap] The memory maps for which to search for pointers.
required
stride int The interval step size while iterating over the memory buffer.
required
Returns:
Type Description
list[tuple[int, int]] A list of tuples containing the address where the pointer was found and the pointer itself.
Source code inlibdebug/memory/abstract_memory_view.py def __internal_find_pointers(\n self: AbstractMemoryView,\n where_maps: list[MemoryMap],\n target_maps: list[MemoryMap],\n stride: int,\n) -> list[tuple[int, int]]:\n \"\"\"Find all pointers to a specific memory map within another memory map. Internal implementation.\n\n Args:\n where_maps (list[MemoryMap]): The memory maps where to search for pointers.\n target_maps (list[MemoryMap]): The memory maps for which to search for pointers.\n stride (int): The interval step size while iterating over the memory buffer.\n\n Returns:\n list[tuple[int, int]]: A list of tuples containing the address where the pointer was found and the pointer itself.\n \"\"\"\n found_pointers = []\n\n # Obtain the start/end of the target memory segment\n target_start_address = target_maps[0].start\n target_end_address = target_maps[-1].end\n\n # Obtain the start/end of the where memory segment\n where_start_address = where_maps[0].start\n where_end_address = where_maps[-1].end\n\n # Read the memory from the where memory segment\n if not self._internal_debugger.fast_memory:\n liblog.warning(\n \"Fast memory reading is disabled. 
Using find_pointers with fast_memory=False may be very slow.\",\n )\n try:\n where_memory_buffer = self.read(where_start_address, where_end_address - where_start_address)\n except (OSError, OverflowError):\n liblog.error(f\"Cannot read the target memory segment with backing file: {where_maps[0].backing_file}.\")\n return found_pointers\n\n # Get the size of a pointer in the target process\n pointer_size = get_platform_gp_register_size(self._internal_debugger.arch)\n\n # Get the byteorder of the target machine (endianness)\n byteorder = sys.byteorder\n\n # Search for references in the where memory segment\n append = found_pointers.append\n for i in range(0, len(where_memory_buffer), stride):\n reference = where_memory_buffer[i : i + pointer_size]\n reference = int.from_bytes(reference, byteorder=byteorder)\n if target_start_address <= reference < target_end_address:\n append((where_start_address + i, reference))\n\n return found_pointers\n"},{"location":"from_pydoc/generated/memory/abstract_memory_view/#libdebug.memory.abstract_memory_view.AbstractMemoryView.__len__","title":"__len__()","text":"MemoryView doesn't support length.
Source code inlibdebug/memory/abstract_memory_view.py def __len__(self: AbstractMemoryView) -> None:\n \"\"\"MemoryView doesn't support length.\"\"\"\n raise NotImplementedError(\"MemoryView doesn't support length\")\n"},{"location":"from_pydoc/generated/memory/abstract_memory_view/#libdebug.memory.abstract_memory_view.AbstractMemoryView.__setitem__","title":"__setitem__(key, value)","text":"Write to memory, either a single byte or a byte string.
Parameters:
Name Type Description Defaultkey int | slice | str | tuple The key to write to memory.
requiredvalue bytes The value to write.
required Source code inlibdebug/memory/abstract_memory_view.py def __setitem__(self: AbstractMemoryView, key: int | slice | str | tuple, value: bytes) -> None:\n \"\"\"Write to memory, either a single byte or a byte string.\n\n Args:\n key (int | slice | str | tuple): The key to write to memory.\n value (bytes): The value to write.\n \"\"\"\n if not isinstance(value, bytes):\n raise TypeError(\"Invalid type for the value to write to memory. Expected bytes.\")\n self._manage_memory_write_type(key, value)\n"},{"location":"from_pydoc/generated/memory/abstract_memory_view/#libdebug.memory.abstract_memory_view.AbstractMemoryView._manage_memory_read_tuple","title":"_manage_memory_read_tuple(key)","text":"Manage the read from memory, when the access is through a tuple.
Parameters:
Name Type Description Defaultkey tuple The key to read from memory.
required Source code inlibdebug/memory/abstract_memory_view.py def _manage_memory_read_tuple(self: AbstractMemoryView, key: tuple) -> bytes:\n \"\"\"Manage the read from memory, when the access is through a tuple.\n\n Args:\n key (tuple): The key to read from memory.\n \"\"\"\n if len(key) == 3:\n # It can only be a tuple of the type (address, size, file)\n address, size, file = key\n if not isinstance(file, str):\n raise TypeError(\"Invalid type for the backing file. Expected string.\")\n elif len(key) == 2:\n left, right = key\n if isinstance(right, str):\n # The right element can only be the backing file\n return self._manage_memory_read_type(left, right)\n elif isinstance(right, int):\n # The right element must be the size\n address = left\n size = right\n file = \"hybrid\"\n else:\n raise TypeError(\"Tuple must have 2 or 3 elements.\")\n\n if not isinstance(size, int):\n raise TypeError(\"Invalid type for the size. Expected int.\")\n\n if isinstance(address, str):\n address = self.resolve_symbol(address, file)\n elif isinstance(address, int):\n address = self.resolve_address(address, file, skip_absolute_address_validation=True)\n else:\n raise TypeError(\"Invalid type for the address. Expected int or string.\")\n\n try:\n return self.read(address, size)\n except OSError as e:\n raise ValueError(\"Invalid address.\") from e\n"},{"location":"from_pydoc/generated/memory/abstract_memory_view/#libdebug.memory.abstract_memory_view.AbstractMemoryView._manage_memory_read_type","title":"_manage_memory_read_type(key, file='hybrid')","text":"Manage the read from memory, according to the typing.
Parameters:
Name Type Description Defaultkey int | slice | str | tuple The key to read from memory.
requiredfile str The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).
'hybrid' Source code in libdebug/memory/abstract_memory_view.py def _manage_memory_read_type(\n self: AbstractMemoryView,\n key: int | slice | str | tuple,\n file: str = \"hybrid\",\n) -> bytes:\n \"\"\"Manage the read from memory, according to the typing.\n\n Args:\n key (int | slice | str | tuple): The key to read from memory.\n file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).\n \"\"\"\n if isinstance(key, int):\n address = self.resolve_address(key, file, skip_absolute_address_validation=True)\n try:\n return self.read(address, 1)\n except OSError as e:\n raise ValueError(\"Invalid address.\") from e\n elif isinstance(key, slice):\n if isinstance(key.start, str):\n start = self.resolve_symbol(key.start, file)\n else:\n start = self.resolve_address(key.start, file, skip_absolute_address_validation=True)\n\n if isinstance(key.stop, str):\n stop = self.resolve_symbol(key.stop, file)\n else:\n stop = self.resolve_address(key.stop, file, skip_absolute_address_validation=True)\n\n if stop < start:\n raise ValueError(\"Invalid slice range.\")\n\n try:\n return self.read(start, stop - start)\n except OSError as e:\n raise ValueError(\"Invalid address.\") from e\n elif isinstance(key, str):\n address = self.resolve_symbol(key, file)\n\n return self.read(address, 1)\n elif isinstance(key, tuple):\n return self._manage_memory_read_tuple(key)\n else:\n raise TypeError(\"Invalid key type.\")\n"},{"location":"from_pydoc/generated/memory/abstract_memory_view/#libdebug.memory.abstract_memory_view.AbstractMemoryView._manage_memory_write_tuple","title":"_manage_memory_write_tuple(key, value)","text":"Manage the write to memory, when the access is through a tuple.
Parameters:
Name Type Description Defaultkey tuple The key to read from memory.
requiredvalue bytes The value to write.
required Source code inlibdebug/memory/abstract_memory_view.py def _manage_memory_write_tuple(self: AbstractMemoryView, key: tuple, value: bytes) -> None:\n \"\"\"Manage the write to memory, when the access is through a tuple.\n\n Args:\n key (tuple): The key to read from memory.\n value (bytes): The value to write.\n \"\"\"\n if len(key) == 3:\n # It can only be a tuple of the type (address, size, file)\n address, size, file = key\n if not isinstance(file, str):\n raise TypeError(\"Invalid type for the backing file. Expected string.\")\n elif len(key) == 2:\n left, right = key\n if isinstance(right, str):\n # The right element can only be the backing file\n self._manage_memory_write_type(left, value, right)\n return\n elif isinstance(right, int):\n # The right element must be the size\n address = left\n size = right\n file = \"hybrid\"\n else:\n raise TypeError(\"Tuple must have 2 or 3 elements.\")\n\n if not isinstance(size, int):\n raise TypeError(\"Invalid type for the size. Expected int.\")\n\n if isinstance(address, str):\n address = self.resolve_symbol(address, file)\n elif isinstance(address, int):\n address = self.resolve_address(address, file, skip_absolute_address_validation=True)\n else:\n raise TypeError(\"Invalid type for the address. Expected int or string.\")\n\n if len(value) != size:\n liblog.warning(f\"Mismatch between specified size and actual value size, writing {len(value)} bytes.\")\n\n try:\n self.write(address, value)\n except OSError as e:\n raise ValueError(\"Invalid address.\") from e\n"},{"location":"from_pydoc/generated/memory/abstract_memory_view/#libdebug.memory.abstract_memory_view.AbstractMemoryView._manage_memory_write_type","title":"_manage_memory_write_type(key, value, file='hybrid')","text":"Manage the write to memory, according to the typing.
Parameters:
Name Type Description Defaultkey int | slice | str | tuple The key to read from memory.
requiredvalue bytes The value to write.
requiredfile str The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).
'hybrid' Source code in libdebug/memory/abstract_memory_view.py def _manage_memory_write_type(\n self: AbstractMemoryView,\n key: int | slice | str | tuple,\n value: bytes,\n file: str = \"hybrid\",\n) -> None:\n \"\"\"Manage the write to memory, according to the typing.\n\n Args:\n key (int | slice | str | tuple): The key to read from memory.\n value (bytes): The value to write.\n file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).\n \"\"\"\n if isinstance(key, int):\n address = self.resolve_address(key, file, skip_absolute_address_validation=True)\n try:\n self.write(address, value)\n except OSError as e:\n raise ValueError(\"Invalid address.\") from e\n elif isinstance(key, slice):\n if isinstance(key.start, str):\n start = self.resolve_symbol(key.start, file)\n else:\n start = self.resolve_address(key.start, file, skip_absolute_address_validation=True)\n\n if key.stop is not None:\n if isinstance(key.stop, str):\n stop = self.resolve_symbol(key.stop, file)\n else:\n stop = self.resolve_address(\n key.stop,\n file,\n skip_absolute_address_validation=True,\n )\n\n if stop < start:\n raise ValueError(\"Invalid slice range\")\n\n if len(value) != stop - start:\n liblog.warning(f\"Mismatch between slice width and value size, writing {len(value)} bytes.\")\n\n try:\n self.write(start, value)\n except OSError as e:\n raise ValueError(\"Invalid address.\") from e\n\n elif isinstance(key, str):\n address = self.resolve_symbol(key, file)\n\n self.write(address, value)\n elif isinstance(key, tuple):\n self._manage_memory_write_tuple(key, value)\n else:\n raise TypeError(\"Invalid key type.\")\n"},{"location":"from_pydoc/generated/memory/abstract_memory_view/#libdebug.memory.abstract_memory_view.AbstractMemoryView.find","title":"find(value, file='all', start=None, end=None)","text":"Searches for 
the given value in the specified memory maps of the process.
The start and end addresses can be used to limit the search to a specific range. If not specified, the search will be performed on the whole memory map.
Parameters:
- value (bytes | str | int, required): The value to search for.
- file (str, default "all"): The backing file to search the value in. "all" means all memory.
- start (int | None, default None): The start address of the search.
- end (int | None, default None): The end address of the search.

Returns:
- list[int]: A list of offsets where the value was found.
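The matching itself is delegated to `find_all_overlapping_occurrences`; a minimal sketch of an equivalent helper (the name `find_overlapping` and the sample addresses are illustrative) shows why overlapping matches are kept: the search resumes one byte after each hit rather than after the whole match.

```python
def find_overlapping(value: bytes, memory: bytes, base: int) -> list[int]:
    """Return the absolute address of every occurrence of `value` in
    `memory`, where `memory` was read starting at address `base`.
    Advancing by one byte (not len(value)) keeps overlapping matches."""
    hits = []
    i = memory.find(value)
    while i != -1:
        hits.append(base + i)
        i = memory.find(value, i + 1)
    return hits

addrs = find_overlapping(b"aa", b"aaab", 0x400000)  # [0x400000, 0x400001]
```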
Source code inlibdebug/memory/abstract_memory_view.py def find(\n self: AbstractMemoryView,\n value: bytes | str | int,\n file: str = \"all\",\n start: int | None = None,\n end: int | None = None,\n) -> list[int]:\n \"\"\"Searches for the given value in the specified memory maps of the process.\n\n The start and end addresses can be used to limit the search to a specific range.\n If not specified, the search will be performed on the whole memory map.\n\n Args:\n value (bytes | str | int): The value to search for.\n file (str): The backing file to search the value in. Defaults to \"all\", which means all memory.\n start (int | None): The start address of the search. Defaults to None.\n end (int | None): The end address of the search. Defaults to None.\n\n Returns:\n list[int]: A list of offset where the value was found.\n \"\"\"\n if isinstance(value, str):\n value = value.encode()\n elif isinstance(value, int):\n value = value.to_bytes(1, sys.byteorder)\n\n occurrences = []\n if file == \"all\" and start is None and end is None:\n for vmap in self.maps:\n liblog.debugger(f\"Searching in {vmap.backing_file}...\")\n try:\n memory_content = self.read(vmap.start, vmap.end - vmap.start)\n except (OSError, OverflowError, ValueError):\n # There are some memory regions that cannot be read, such as [vvar], [vdso], etc.\n continue\n occurrences += find_all_overlapping_occurrences(value, memory_content, vmap.start)\n elif file == \"all\" and start is not None and end is None:\n for vmap in self.maps:\n if vmap.end > start:\n liblog.debugger(f\"Searching in {vmap.backing_file}...\")\n read_start = max(vmap.start, start)\n try:\n memory_content = self.read(read_start, vmap.end - read_start)\n except (OSError, OverflowError, ValueError):\n # There are some memory regions that cannot be read, such as [vvar], [vdso], etc.\n continue\n occurrences += find_all_overlapping_occurrences(value, memory_content, read_start)\n elif file == \"all\" and start is None and end is not None:\n 
for vmap in self.maps:\n if vmap.start < end:\n liblog.debugger(f\"Searching in {vmap.backing_file}...\")\n read_end = min(vmap.end, end)\n try:\n memory_content = self.read(vmap.start, read_end - vmap.start)\n except (OSError, OverflowError, ValueError):\n # There are some memory regions that cannot be read, such as [vvar], [vdso], etc.\n continue\n occurrences += find_all_overlapping_occurrences(value, memory_content, vmap.start)\n elif file == \"all\" and start is not None and end is not None:\n # Search in the specified range, hybrid mode\n start = self.resolve_address(start, \"hybrid\", True)\n end = self.resolve_address(end, \"hybrid\", True)\n liblog.debugger(f\"Searching in the range {start:#x}-{end:#x}...\")\n memory_content = self.read(start, end - start)\n occurrences = find_all_overlapping_occurrences(value, memory_content, start)\n else:\n maps = self.maps.filter(file)\n start = self.resolve_address(start, file, True) if start is not None else maps[0].start\n end = self.resolve_address(end, file, True) if end is not None else maps[-1].end - 1\n\n liblog.debugger(f\"Searching in the range {start:#x}-{end:#x}...\")\n memory_content = self.read(start, end - start)\n\n occurrences = find_all_overlapping_occurrences(value, memory_content, start)\n\n return occurrences\n"},{"location":"from_pydoc/generated/memory/abstract_memory_view/#libdebug.memory.abstract_memory_view.AbstractMemoryView.find_pointers","title":"find_pointers(where='*', target='*', step=1)","text":"Find all pointers in the specified memory map that point to the target memory map.
If the where parameter or the target parameter is a string, it is treated as a backing file. If it is an integer, the memory map containing the address will be used.
If \"*\", \"ALL\", \"all\" or -1 is passed, all memory maps will be considered.
Parameters:
- where (int | str, default "*"): Identifier of the memory map where we want to search for references. "*" means all memory maps.
- target (int | str, default "*"): Identifier of the memory map whose pointers we want to find. "*" means all memory maps.
- step (int, default 1): The interval step size while iterating over the memory buffer.

Returns:
- list[tuple[int, int]]: A list of tuples containing the address where the pointer was found and the pointer itself.
Source code inlibdebug/memory/abstract_memory_view.py def find_pointers(\n self: AbstractMemoryView,\n where: int | str = \"*\",\n target: int | str = \"*\",\n step: int = 1,\n) -> list[tuple[int, int]]:\n \"\"\"\n Find all pointers in the specified memory map that point to the target memory map.\n\n If the where parameter or the target parameter is a string, it is treated as a backing file. If it is an integer, the memory map containing the address will be used.\n\n If \"*\", \"ALL\", \"all\" or -1 is passed, all memory maps will be considered.\n\n Args:\n where (int | str): Identifier of the memory map where we want to search for references. Defaults to \"*\", which means all memory maps.\n target (int | str): Identifier of the memory map whose pointers we want to find. Defaults to \"*\", which means all memory maps.\n step (int): The interval step size while iterating over the memory buffer. Defaults to 1.\n\n Returns:\n list[tuple[int, int]]: A list of tuples containing the address where the pointer was found and the pointer itself.\n \"\"\"\n # Filter memory maps that match the target\n if target in {\"*\", \"ALL\", \"all\", -1}:\n target_maps = self._internal_debugger.maps\n else:\n target_maps = self._internal_debugger.maps.filter(target)\n\n if not target_maps:\n raise ValueError(\"No memory map found for the specified target.\")\n\n target_backing_files = {vmap.backing_file for vmap in target_maps}\n\n # Filter memory maps that match the where parameter\n if where in {\"*\", \"ALL\", \"all\", -1}:\n where_maps = self._internal_debugger.maps\n else:\n where_maps = self._internal_debugger.maps.filter(where)\n\n if not where_maps:\n raise ValueError(\"No memory map found for the specified where parameter.\")\n\n where_backing_files = {vmap.backing_file for vmap in where_maps}\n\n if len(where_backing_files) == 1 and len(target_backing_files) == 1:\n return self.__internal_find_pointers(where_maps, target_maps, step)\n elif len(where_backing_files) == 1:\n 
found_pointers = []\n for target_backing_file in target_backing_files:\n found_pointers += self.__internal_find_pointers(\n where_maps,\n self._internal_debugger.maps.filter(target_backing_file),\n step,\n )\n return found_pointers\n elif len(target_backing_files) == 1:\n found_pointers = []\n for where_backing_file in where_backing_files:\n found_pointers += self.__internal_find_pointers(\n self._internal_debugger.maps.filter(where_backing_file),\n target_maps,\n step,\n )\n return found_pointers\n else:\n found_pointers = []\n for where_backing_file in where_backing_files:\n for target_backing_file in target_backing_files:\n found_pointers += self.__internal_find_pointers(\n self._internal_debugger.maps.filter(where_backing_file),\n self._internal_debugger.maps.filter(target_backing_file),\n step,\n )\n\n return found_pointers\n"},{"location":"from_pydoc/generated/memory/abstract_memory_view/#libdebug.memory.abstract_memory_view.AbstractMemoryView.insert","title":"insert(index, value)","text":"MemoryView doesn't support insertion.
Source code inlibdebug/memory/abstract_memory_view.py def insert(self: AbstractMemoryView, index: int, value: int) -> None:\n \"\"\"MemoryView doesn't support insertion.\"\"\"\n raise NotImplementedError(\"MemoryView doesn't support insertion\")\n"},{"location":"from_pydoc/generated/memory/abstract_memory_view/#libdebug.memory.abstract_memory_view.AbstractMemoryView.read","title":"read(address, size) abstractmethod","text":"Reads memory from the target process.
Parameters:
- address (int, required): The address to read from.
- size (int, required): The number of bytes to read.

Returns:
- bytes: The read bytes.
Source code inlibdebug/memory/abstract_memory_view.py @abstractmethod\ndef read(self: AbstractMemoryView, address: int, size: int) -> bytes:\n \"\"\"Reads memory from the target process.\n\n Args:\n address (int): The address to read from.\n size (int): The number of bytes to read.\n\n Returns:\n bytes: The read bytes.\n \"\"\"\n"},{"location":"from_pydoc/generated/memory/abstract_memory_view/#libdebug.memory.abstract_memory_view.AbstractMemoryView.resolve_address","title":"resolve_address(address, backing_file, skip_absolute_address_validation=False)","text":"Normalizes and validates the specified address.
Parameters:
- address (int, required): The address to normalize and validate.
- backing_file (str, required): The backing file to resolve the address in.
- skip_absolute_address_validation (bool, default False): Whether to skip bounds checking for absolute addresses.

Returns:
- int: The normalized and validated address.

Raises:
- ValueError: If the substring backing_file is present in multiple backing files.

Source code in
libdebug/memory/abstract_memory_view.py def resolve_address(\n self: AbstractMemoryView,\n address: int,\n backing_file: str,\n skip_absolute_address_validation: bool = False,\n) -> int:\n \"\"\"Normalizes and validates the specified address.\n\n Args:\n address (int): The address to normalize and validate.\n backing_file (str): The backing file to resolve the address in.\n skip_absolute_address_validation (bool, optional): Whether to skip bounds checking for absolute addresses. Defaults to False.\n\n Returns:\n int: The normalized and validated address.\n\n Raises:\n ValueError: If the substring `backing_file` is present in multiple backing files.\n \"\"\"\n return self._internal_debugger.resolve_address(\n address, backing_file, skip_absolute_address_validation,\n )\n"},{"location":"from_pydoc/generated/memory/abstract_memory_view/#libdebug.memory.abstract_memory_view.AbstractMemoryView.resolve_symbol","title":"resolve_symbol(symbol, backing_file)","text":"Resolves the address of the specified symbol.
Parameters:
- symbol (str, required): The symbol to resolve.
- backing_file (str, required): The backing file to resolve the symbol in.

Returns:
- int: The address of the symbol.
Source code inlibdebug/memory/abstract_memory_view.py def resolve_symbol(self: AbstractMemoryView, symbol: str, backing_file: str) -> int:\n \"\"\"Resolves the address of the specified symbol.\n\n Args:\n symbol (str): The symbol to resolve.\n backing_file (str): The backing file to resolve the symbol in.\n\n Returns:\n int: The address of the symbol.\n \"\"\"\n return self._internal_debugger.resolve_symbol(symbol, backing_file)\n"},{"location":"from_pydoc/generated/memory/abstract_memory_view/#libdebug.memory.abstract_memory_view.AbstractMemoryView.write","title":"write(address, data) abstractmethod","text":"Writes memory to the target process.
Parameters:
Name Type Description Defaultaddress int The address to write to.
requireddata bytes The data to write.
required Source code inlibdebug/memory/abstract_memory_view.py @abstractmethod\ndef write(self: AbstractMemoryView, address: int, data: bytes) -> None:\n \"\"\"Writes memory to the target process.\n\n Args:\n address (int): The address to write to.\n data (bytes): The data to write.\n \"\"\"\n"},{"location":"from_pydoc/generated/memory/chunked_memory_view/","title":"libdebug.memory.chunked_memory_view","text":""},{"location":"from_pydoc/generated/memory/chunked_memory_view/#libdebug.memory.chunked_memory_view.ChunkedMemoryView","title":"ChunkedMemoryView","text":" Bases: AbstractMemoryView
A memory interface for the target process, intended for chunk-based memory access.
Attributes:
- getter (Callable[[int], bytes]): A function that reads a chunk of memory from the target process.
- setter (Callable[[int, bytes], None]): A function that writes a chunk of memory to the target process.
- unit_size (int): The chunk size used by the getter and setter functions. Defaults to 8.
- align_to (int): The address alignment that must be used when reading and writing memory. Defaults to 1.
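The simplest case of chunk-based reading (align_to == 1) can be sketched on its own: whole unit_size chunks are concatenated, and a final trimmed chunk covers any remainder. The `backing` buffer, `getter`, and `chunked_read` names below are illustrative stand-ins for the real accessor (e.g. a ptrace-style word read).

```python
UNIT = 8                    # unit_size: bytes returned per getter() call
backing = bytes(range(64))  # fake target memory, mapped at address 0

def getter(addr: int) -> bytes:
    """Toy chunk accessor: one UNIT-sized read at addr."""
    return backing[addr:addr + UNIT]

def chunked_read(address: int, size: int) -> bytes:
    """Whole chunks first, then a trimmed final chunk for the remainder."""
    remainder = size % UNIT
    data = b"".join(getter(i) for i in range(address, address + size - remainder, UNIT))
    if remainder:
        data += getter(address + size - remainder)[:remainder]
    return data

chunk = chunked_read(3, 13)  # bytes 3..15 of the backing store
```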
Source code inlibdebug/memory/chunked_memory_view.py class ChunkedMemoryView(AbstractMemoryView):\n \"\"\"A memory interface for the target process, intended for chunk-based memory access.\n\n Attributes:\n getter (Callable[[int], bytes]): A function that reads a chunk of memory from the target process.\n setter (Callable[[int, bytes], None]): A function that writes a chunk of memory to the target process.\n unit_size (int, optional): The chunk size used by the getter and setter functions. Defaults to 8.\n align_to (int, optional): The address alignment that must be used when reading and writing memory. Defaults to 1.\n \"\"\"\n\n def __init__(\n self: ChunkedMemoryView,\n getter: Callable[[int], bytes],\n setter: Callable[[int, bytes], None],\n unit_size: int = 8,\n align_to: int = 1,\n ) -> None:\n \"\"\"Initializes the MemoryView.\"\"\"\n super().__init__()\n self.getter = getter\n self.setter = setter\n self.unit_size = unit_size\n self.align_to = align_to\n\n def read(self: ChunkedMemoryView, address: int, size: int) -> bytes:\n \"\"\"Reads memory from the target process.\n\n Args:\n address (int): The address to read from.\n size (int): The number of bytes to read.\n\n Returns:\n bytes: The read bytes.\n \"\"\"\n if self.align_to == 1:\n data = b\"\"\n\n remainder = size % self.unit_size\n\n for i in range(address, address + size - remainder, self.unit_size):\n data += self.getter(i)\n\n if remainder:\n data += self.getter(address + size - remainder)[:remainder]\n\n return data\n else:\n prefix = address % self.align_to\n prefix_size = self.unit_size - prefix\n\n data = self.getter(address - prefix)[prefix:]\n\n remainder = (size - prefix_size) % self.unit_size\n\n for i in range(\n address + prefix_size,\n address + size - remainder,\n self.unit_size,\n ):\n data += self.getter(i)\n\n if remainder:\n data += self.getter(address + size - remainder)[:remainder]\n\n return data\n\n def write(self: ChunkedMemoryView, address: int, data: bytes) -> None:\n 
\"\"\"Writes memory to the target process.\n\n Args:\n address (int): The address to write to.\n data (bytes): The data to write.\n \"\"\"\n size = len(data)\n\n if self.align_to == 1:\n remainder = size % self.unit_size\n base = address\n else:\n prefix = address % self.align_to\n prefix_size = self.unit_size - prefix\n\n prev_data = self.getter(address - prefix)\n\n self.setter(address - prefix, prev_data[:prefix_size] + data[:prefix])\n\n remainder = (size - prefix_size) % self.unit_size\n base = address + prefix_size\n\n for i in range(base, address + size - remainder, self.unit_size):\n self.setter(i, data[i - address : i - address + self.unit_size])\n\n if remainder:\n prev_data = self.getter(address + size - remainder)\n self.setter(\n address + size - remainder,\n data[size - remainder :] + prev_data[remainder:],\n )\n\n @property\n def maps(self: ChunkedMemoryView) -> MemoryMapList:\n \"\"\"Returns a list of memory maps in the target process.\n\n Returns:\n MemoryMapList: The memory maps.\n \"\"\"\n return self._internal_debugger.maps\n"},{"location":"from_pydoc/generated/memory/chunked_memory_view/#libdebug.memory.chunked_memory_view.ChunkedMemoryView.maps","title":"maps property","text":"Returns a list of memory maps in the target process.
Returns:
- MemoryMapList: The memory maps.
"},{"location":"from_pydoc/generated/memory/chunked_memory_view/#libdebug.memory.chunked_memory_view.ChunkedMemoryView.__init__","title":"__init__(getter, setter, unit_size=8, align_to=1)","text":"Initializes the MemoryView.
Source code inlibdebug/memory/chunked_memory_view.py def __init__(\n self: ChunkedMemoryView,\n getter: Callable[[int], bytes],\n setter: Callable[[int, bytes], None],\n unit_size: int = 8,\n align_to: int = 1,\n) -> None:\n \"\"\"Initializes the MemoryView.\"\"\"\n super().__init__()\n self.getter = getter\n self.setter = setter\n self.unit_size = unit_size\n self.align_to = align_to\n"},{"location":"from_pydoc/generated/memory/chunked_memory_view/#libdebug.memory.chunked_memory_view.ChunkedMemoryView.read","title":"read(address, size)","text":"Reads memory from the target process.
Parameters:
- address (int, required): The address to read from.
- size (int, required): The number of bytes to read.

Returns:
- bytes: The read bytes.
Source code inlibdebug/memory/chunked_memory_view.py def read(self: ChunkedMemoryView, address: int, size: int) -> bytes:\n \"\"\"Reads memory from the target process.\n\n Args:\n address (int): The address to read from.\n size (int): The number of bytes to read.\n\n Returns:\n bytes: The read bytes.\n \"\"\"\n if self.align_to == 1:\n data = b\"\"\n\n remainder = size % self.unit_size\n\n for i in range(address, address + size - remainder, self.unit_size):\n data += self.getter(i)\n\n if remainder:\n data += self.getter(address + size - remainder)[:remainder]\n\n return data\n else:\n prefix = address % self.align_to\n prefix_size = self.unit_size - prefix\n\n data = self.getter(address - prefix)[prefix:]\n\n remainder = (size - prefix_size) % self.unit_size\n\n for i in range(\n address + prefix_size,\n address + size - remainder,\n self.unit_size,\n ):\n data += self.getter(i)\n\n if remainder:\n data += self.getter(address + size - remainder)[:remainder]\n\n return data\n"},{"location":"from_pydoc/generated/memory/chunked_memory_view/#libdebug.memory.chunked_memory_view.ChunkedMemoryView.write","title":"write(address, data)","text":"Writes memory to the target process.
Parameters:
Name Type Description Defaultaddress int The address to write to.
requireddata bytes The data to write.
required Source code inlibdebug/memory/chunked_memory_view.py def write(self: ChunkedMemoryView, address: int, data: bytes) -> None:\n \"\"\"Writes memory to the target process.\n\n Args:\n address (int): The address to write to.\n data (bytes): The data to write.\n \"\"\"\n size = len(data)\n\n if self.align_to == 1:\n remainder = size % self.unit_size\n base = address\n else:\n prefix = address % self.align_to\n prefix_size = self.unit_size - prefix\n\n prev_data = self.getter(address - prefix)\n\n self.setter(address - prefix, prev_data[:prefix_size] + data[:prefix])\n\n remainder = (size - prefix_size) % self.unit_size\n base = address + prefix_size\n\n for i in range(base, address + size - remainder, self.unit_size):\n self.setter(i, data[i - address : i - address + self.unit_size])\n\n if remainder:\n prev_data = self.getter(address + size - remainder)\n self.setter(\n address + size - remainder,\n data[size - remainder :] + prev_data[remainder:],\n )\n"},{"location":"from_pydoc/generated/memory/direct_memory_view/","title":"libdebug.memory.direct_memory_view","text":""},{"location":"from_pydoc/generated/memory/direct_memory_view/#libdebug.memory.direct_memory_view.DirectMemoryView","title":"DirectMemoryView","text":" Bases: AbstractMemoryView
A memory interface for the target process, intended for direct memory access.
Attributes:
- getter (Callable[[int, int], bytes]): A function that reads a variable amount of data from the target's memory.
- setter (Callable[[int, bytes], None]): A function that writes memory to the target process.
- align_to (int): The address alignment that must be used when reading and writing memory. Defaults to 1.
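The alignment handling can be sketched with an in-memory stand-in for the target process: reads round the address down to the alignment boundary, over-read, and trim the prefix; writes re-read the prefix bytes so the aligned write preserves them. The `memory` buffer, `getter`/`setter`, and the `aligned_*` helper names are illustrative.

```python
ALIGN = 4                      # align_to: accesses start on 4-byte boundaries
memory = bytearray(range(32))  # fake target memory, mapped at address 0

def getter(addr: int, size: int) -> bytes:
    return bytes(memory[addr:addr + size])

def setter(addr: int, data: bytes) -> None:
    memory[addr:addr + len(data)] = data

def aligned_read(address: int, size: int) -> bytes:
    """Round down to the alignment, over-read, then trim the prefix."""
    prefix = address % ALIGN
    return getter(address - prefix, size + prefix)[prefix:prefix + size]

def aligned_write(address: int, data: bytes) -> None:
    """Re-read the prefix bytes so the aligned write preserves them."""
    prefix = address % ALIGN
    old = getter(address - prefix, len(data) + prefix)
    setter(address - prefix, old[:prefix] + data)

val = aligned_read(6, 3)       # b"\x06\x07\x08"
aligned_write(6, b"\xff\xff")  # memory[6] and memory[7] become 0xff
```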
Source code inlibdebug/memory/direct_memory_view.py class DirectMemoryView(AbstractMemoryView):\n \"\"\"A memory interface for the target process, intended for direct memory access.\n\n Attributes:\n getter (Callable[[int, int], bytes]): A function that reads a variable amount of data from the target's memory.\n setter (Callable[[int, bytes], None]): A function that writes memory to the target process.\n align_to (int, optional): The address alignment that must be used when reading and writing memory. Defaults to 1.\n \"\"\"\n\n def __init__(\n self: DirectMemoryView,\n getter: Callable[[int, int], bytes],\n setter: Callable[[int, bytes], None],\n align_to: int = 1,\n ) -> None:\n \"\"\"Initializes the MemoryView.\"\"\"\n super().__init__()\n self.getter = getter\n self.setter = setter\n self.align_to = align_to\n\n def read(self: DirectMemoryView, address: int, size: int) -> bytes:\n \"\"\"Reads memory from the target process.\n\n Args:\n address (int): The address to read from.\n size (int): The number of bytes to read.\n\n Returns:\n bytes: The read bytes.\n \"\"\"\n if self.align_to == 1:\n return self.getter(address, size)\n else:\n prefix = address % self.align_to\n base_address = address - prefix\n new_size = size + prefix\n data = self.getter(base_address, new_size)\n return data[prefix : prefix + size]\n\n def write(self: DirectMemoryView, address: int, data: bytes) -> None:\n \"\"\"Writes memory to the target process.\n\n Args:\n address (int): The address to write to.\n data (bytes): The data to write.\n \"\"\"\n size = len(data)\n\n if self.align_to == 1:\n self.setter(address, data)\n else:\n prefix = address % self.align_to\n base_address = address - prefix\n new_size = size + prefix\n prefix_data = self.getter(base_address, new_size)\n new_data = prefix_data[:prefix] + data + prefix_data[prefix + size :]\n self.setter(base_address, new_data)\n\n @property\n def maps(self: DirectMemoryView) -> MemoryMapList:\n \"\"\"Returns a list of memory maps in 
the target process.\n\n Returns:\n MemoryMapList: The memory maps.\n \"\"\"\n return self._internal_debugger.maps\n"},{"location":"from_pydoc/generated/memory/direct_memory_view/#libdebug.memory.direct_memory_view.DirectMemoryView.maps","title":"maps property","text":"Returns a list of memory maps in the target process.
Returns:
MemoryMapList (MemoryMapList): The memory maps.
"},{"location":"from_pydoc/generated/memory/direct_memory_view/#libdebug.memory.direct_memory_view.DirectMemoryView.__init__","title":"__init__(getter, setter, align_to=1)","text":"Initializes the MemoryView.
Source code inlibdebug/memory/direct_memory_view.py def __init__(\n self: DirectMemoryView,\n getter: Callable[[int, int], bytes],\n setter: Callable[[int, bytes], None],\n align_to: int = 1,\n) -> None:\n \"\"\"Initializes the MemoryView.\"\"\"\n super().__init__()\n self.getter = getter\n self.setter = setter\n self.align_to = align_to\n"},{"location":"from_pydoc/generated/memory/direct_memory_view/#libdebug.memory.direct_memory_view.DirectMemoryView.read","title":"read(address, size)","text":"Reads memory from the target process.
Parameters:
address (int): The address to read from.
size (int): The number of bytes to read.
Returns:
bytes (bytes): The read bytes.
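The alignment handling in read() can be sketched in isolation. In this sketch the getter is a hypothetical stand-in backed by a plain bytes buffer rather than a real target process:

```python
# Sketch of DirectMemoryView.read's alignment arithmetic, assuming a fake
# bytes-backed "target memory" in place of a real process getter.
backing = bytes(range(256))  # fake target memory: byte i has value i

def getter(address: int, size: int) -> bytes:
    return backing[address : address + size]

def aligned_read(address: int, size: int, align_to: int) -> bytes:
    if align_to == 1:
        return getter(address, size)
    prefix = address % align_to      # how far past the alignment boundary
    base_address = address - prefix  # round down to the boundary
    data = getter(base_address, size + prefix)
    return data[prefix : prefix + size]  # drop the padding bytes

result = aligned_read(5, 4, 8)  # 8-byte-aligned read starting at address 5
```

The aligned path always issues the underlying read from an aligned base address, then slices away the prefix bytes before returning.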
Source code inlibdebug/memory/direct_memory_view.py def read(self: DirectMemoryView, address: int, size: int) -> bytes:\n \"\"\"Reads memory from the target process.\n\n Args:\n address (int): The address to read from.\n size (int): The number of bytes to read.\n\n Returns:\n bytes: The read bytes.\n \"\"\"\n if self.align_to == 1:\n return self.getter(address, size)\n else:\n prefix = address % self.align_to\n base_address = address - prefix\n new_size = size + prefix\n data = self.getter(base_address, new_size)\n return data[prefix : prefix + size]\n"},{"location":"from_pydoc/generated/memory/direct_memory_view/#libdebug.memory.direct_memory_view.DirectMemoryView.write","title":"write(address, data)","text":"Writes memory to the target process.
Parameters:
address (int): The address to write to.
data (bytes): The data to write.
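The aligned write is a read-modify-write: fetch the aligned block, splice in the new bytes, and write the whole block back. The getter and setter here are hypothetical stand-ins backed by a bytearray, not the library's real accessors:

```python
# Sketch of DirectMemoryView.write's read-modify-write scheme for align_to > 1,
# assuming bytearray-backed stand-ins for the real getter/setter pair.
backing = bytearray(16)

def getter(address: int, size: int) -> bytes:
    return bytes(backing[address : address + size])

def setter(address: int, data: bytes) -> None:
    backing[address : address + len(data)] = data

def aligned_write(address: int, data: bytes, align_to: int) -> None:
    prefix = address % align_to
    base_address = address - prefix
    old = getter(base_address, len(data) + prefix)  # existing prefix bytes
    setter(base_address, old[:prefix] + data + old[prefix + len(data):])

aligned_write(3, b"\xaa\xbb", 4)  # misaligned 2-byte write at address 3
```

Only the bytes inside the written span change; the prefix bytes read back from the target are rewritten unchanged so the aligned store does not clobber them.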
required Source code inlibdebug/memory/direct_memory_view.py def write(self: DirectMemoryView, address: int, data: bytes) -> None:\n \"\"\"Writes memory to the target process.\n\n Args:\n address (int): The address to write to.\n data (bytes): The data to write.\n \"\"\"\n size = len(data)\n\n if self.align_to == 1:\n self.setter(address, data)\n else:\n prefix = address % self.align_to\n base_address = address - prefix\n new_size = size + prefix\n prefix_data = self.getter(base_address, new_size)\n new_data = prefix_data[:prefix] + data + prefix_data[prefix + size :]\n self.setter(base_address, new_data)\n"},{"location":"from_pydoc/generated/memory/process_memory_manager/","title":"libdebug.memory.process_memory_manager","text":""},{"location":"from_pydoc/generated/memory/process_memory_manager/#libdebug.memory.process_memory_manager.ProcessMemoryManager","title":"ProcessMemoryManager","text":"A class that provides accessors to the memory of a process, through /proc/pid/mem.
Source code inlibdebug/memory/process_memory_manager.py class ProcessMemoryManager:\n \"\"\"A class that provides accessors to the memory of a process, through /proc/pid/mem.\"\"\"\n\n max_size = sys.maxsize\n\n def open(self: ProcessMemoryManager, process_id: int) -> None:\n \"\"\"Initializes the ProcessMemoryManager.\"\"\"\n self.process_id = process_id\n self._mem_file = None\n\n def _open(self: ProcessMemoryManager) -> None:\n self._mem_file = open(f\"/proc/{self.process_id}/mem\", \"r+b\", buffering=0)\n\n def _split_seek(self: ProcessMemoryManager, file_obj: FileIO, address: int) -> None:\n \"\"\"Seeks to an address in a file, splitting the seek if necessary to avoid overflow.\"\"\"\n if address > self.max_size:\n # We need to split the seek\n file_obj.seek(self.max_size, os.SEEK_SET)\n try:\n file_obj.seek(address - self.max_size, os.SEEK_CUR)\n except OverflowError as e:\n # The address must have been larger than 2 * max_size\n # This implies that it is invalid for the current architecture\n raise OSError(f\"Address {address:#x} is not valid for this architecture\") from e\n else:\n # We can seek directly\n file_obj.seek(address, os.SEEK_SET)\n\n def read(self: ProcessMemoryManager, address: int, size: int) -> bytes:\n \"\"\"Reads memory from the target process.\n\n Args:\n address (int): The address to read from.\n size (int): The number of bytes to read.\n\n Returns:\n bytes: The read bytes.\n \"\"\"\n if not self._mem_file:\n self._open()\n\n self._split_seek(self._mem_file, address)\n return self._mem_file.read(size)\n\n def write(self: ProcessMemoryManager, address: int, data: bytes) -> None:\n \"\"\"Writes memory to the target process.\n\n Args:\n address (int): The address to write to.\n data (bytes): The data to write.\n \"\"\"\n if not self._mem_file:\n self._open()\n\n self._split_seek(self._mem_file, address)\n self._mem_file.write(data)\n\n def close(self: ProcessMemoryManager) -> None:\n \"\"\"Closes the memory file.\"\"\"\n if 
self._mem_file:\n self._mem_file.close()\n self._mem_file = None\n"},{"location":"from_pydoc/generated/memory/process_memory_manager/#libdebug.memory.process_memory_manager.ProcessMemoryManager._split_seek","title":"_split_seek(file_obj, address)","text":"Seeks to an address in a file, splitting the seek if necessary to avoid overflow.
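The split seek exists because addresses above sys.maxsize overflow a single lseek, so one absolute seek to the maximum is followed by a relative seek for the remainder. SeekRecorder below is a hypothetical stand-in for the /proc/&lt;pid&gt;/mem FileIO object, and MAX assumes a 64-bit platform:

```python
import os

# Sketch of ProcessMemoryManager._split_seek, with a mock file object that
# emulates the C-level OverflowError raised for oversized seek offsets.
MAX = 2**63 - 1  # sys.maxsize on a 64-bit platform (assumption)

class SeekRecorder:
    def __init__(self) -> None:
        self.pos = 0

    def seek(self, offset: int, whence: int = os.SEEK_SET) -> None:
        if abs(offset) > MAX:  # emulate the native overflow on huge offsets
            raise OverflowError("offset too large for a single seek")
        self.pos = offset if whence == os.SEEK_SET else self.pos + offset

def split_seek(f: SeekRecorder, address: int) -> None:
    if address > MAX:
        f.seek(MAX, os.SEEK_SET)            # absolute seek to the maximum
        f.seek(address - MAX, os.SEEK_CUR)  # relative seek for the rest
    else:
        f.seek(address, os.SEEK_SET)        # small address: one seek suffices

f = SeekRecorder()
split_seek(f, MAX + 10)  # would overflow a single seek
```

Addresses beyond twice the maximum would still overflow the second seek, which is why the real implementation converts that case into an OSError for the caller.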
Source code inlibdebug/memory/process_memory_manager.py def _split_seek(self: ProcessMemoryManager, file_obj: FileIO, address: int) -> None:\n \"\"\"Seeks to an address in a file, splitting the seek if necessary to avoid overflow.\"\"\"\n if address > self.max_size:\n # We need to split the seek\n file_obj.seek(self.max_size, os.SEEK_SET)\n try:\n file_obj.seek(address - self.max_size, os.SEEK_CUR)\n except OverflowError as e:\n # The address must have been larger than 2 * max_size\n # This implies that it is invalid for the current architecture\n raise OSError(f\"Address {address:#x} is not valid for this architecture\") from e\n else:\n # We can seek directly\n file_obj.seek(address, os.SEEK_SET)\n"},{"location":"from_pydoc/generated/memory/process_memory_manager/#libdebug.memory.process_memory_manager.ProcessMemoryManager.close","title":"close()","text":"Closes the memory file.
Source code inlibdebug/memory/process_memory_manager.py def close(self: ProcessMemoryManager) -> None:\n \"\"\"Closes the memory file.\"\"\"\n if self._mem_file:\n self._mem_file.close()\n self._mem_file = None\n"},{"location":"from_pydoc/generated/memory/process_memory_manager/#libdebug.memory.process_memory_manager.ProcessMemoryManager.open","title":"open(process_id)","text":"Initializes the ProcessMemoryManager.
Source code inlibdebug/memory/process_memory_manager.py def open(self: ProcessMemoryManager, process_id: int) -> None:\n \"\"\"Initializes the ProcessMemoryManager.\"\"\"\n self.process_id = process_id\n self._mem_file = None\n"},{"location":"from_pydoc/generated/memory/process_memory_manager/#libdebug.memory.process_memory_manager.ProcessMemoryManager.read","title":"read(address, size)","text":"Reads memory from the target process.
Parameters:
address (int): The address to read from.
size (int): The number of bytes to read.
Returns:
bytes (bytes): The read bytes.
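The read path boils down to a seek followed by a read on the memory file. This sketch uses an ordinary temp file as a stand-in for /proc/&lt;pid&gt;/mem, since the real file needs a traced Linux process; the open mode and seek/read sequence mirror the manager's:

```python
import os
import tempfile

# Sketch of the ProcessMemoryManager read path against a temp file standing
# in for /proc/<pid>/mem (assumption: same unbuffered r+b open mode).
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello world")
    path = tmp.name

mem_file = open(path, "r+b", buffering=0)  # unbuffered, read/write, binary

def read(address: int, size: int) -> bytes:
    mem_file.seek(address, os.SEEK_SET)  # small offset: one seek suffices
    return mem_file.read(size)

data = read(6, 5)
mem_file.close()
os.unlink(path)
```

On a real process the "address" is a virtual address in the target's address space, and seeks to unmapped regions fail at read time rather than at seek time.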
Source code inlibdebug/memory/process_memory_manager.py def read(self: ProcessMemoryManager, address: int, size: int) -> bytes:\n \"\"\"Reads memory from the target process.\n\n Args:\n address (int): The address to read from.\n size (int): The number of bytes to read.\n\n Returns:\n bytes: The read bytes.\n \"\"\"\n if not self._mem_file:\n self._open()\n\n self._split_seek(self._mem_file, address)\n return self._mem_file.read(size)\n"},{"location":"from_pydoc/generated/memory/process_memory_manager/#libdebug.memory.process_memory_manager.ProcessMemoryManager.write","title":"write(address, data)","text":"Writes memory to the target process.
Parameters:
address (int): The address to write to.
data (bytes): The data to write.
required Source code inlibdebug/memory/process_memory_manager.py def write(self: ProcessMemoryManager, address: int, data: bytes) -> None:\n \"\"\"Writes memory to the target process.\n\n Args:\n address (int): The address to write to.\n data (bytes): The data to write.\n \"\"\"\n if not self._mem_file:\n self._open()\n\n self._split_seek(self._mem_file, address)\n self._mem_file.write(data)\n"},{"location":"from_pydoc/generated/ptrace/ptrace_constants/","title":"libdebug.ptrace.ptrace_constants","text":""},{"location":"from_pydoc/generated/ptrace/ptrace_constants/#libdebug.ptrace.ptrace_constants.Commands","title":"Commands","text":" Bases: IntEnum
An enumeration of the available ptrace commands.
Source code inlibdebug/ptrace/ptrace_constants.py class Commands(IntEnum):\n \"\"\"An enumeration of the available ptrace commands.\"\"\"\n\n PTRACE_TRACEME = 0\n PTRACE_PEEKTEXT = 1\n PTRACE_PEEKDATA = 2\n PTRACE_PEEKUSER = 3\n PTRACE_POKETEXT = 4\n PTRACE_POKEDATA = 5\n PTRACE_POKEUSER = 6\n PTRACE_CONT = 7\n PTRACE_KILL = 8\n PTRACE_SINGLESTEP = 9\n PTRACE_GETREGS = 12\n PTRACE_SETREGS = 13\n PTRACE_GETFPREGS = 14\n PTRACE_SETFPREGS = 15\n PTRACE_ATTACH = 16\n PTRACE_DETACH = 17\n PTRACE_GETFPXREGS = 18\n PTRACE_SETFPXREGS = 19\n PTRACE_SYSCALL = 24\n PTRACE_SETOPTIONS = 0x4200\n PTRACE_GETEVENTMSG = 0x4201\n PTRACE_GETSIGINFO = 0x4202\n PTRACE_SETSIGINFO = 0x4203\n PTRACE_GETREGSET = 0x4204\n PTRACE_SETREGSET = 0x4205\n PTRACE_SEIZE = 0x4206\n PTRACE_INTERRUPT = 0x4207\n PTRACE_LISTEN = 0x4208\n PTRACE_PEEKSIGINFO = 0x4209\n PTRACE_GETSIGMASK = 0x420A\n PTRACE_SETSIGMASK = 0x420B\n PTRACE_SECCOMP_GET_FILTER = 0x420C\n PTRACE_SECCOMP_GET_METADATA = 0x420D\n PTRACE_GET_SYSCALL_INFO = 0x420E\n"},{"location":"from_pydoc/generated/ptrace/ptrace_constants/#libdebug.ptrace.ptrace_constants.StopEvents","title":"StopEvents","text":" Bases: IntEnum
An enumeration of the stop events that ptrace can return.
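Each StopEvents value packs a wait status: the ptrace event code sits in bits 8-15 above the SIGTRAP stop signal. The constants below are the standard Linux values (SIGTRAP = 5, PTRACE_EVENT_CLONE = 3):

```python
# How a StopEvents value is built and decoded, using the standard Linux
# constants for SIGTRAP and PTRACE_EVENT_CLONE.
SIGTRAP = 5
PTRACE_EVENT_CLONE = 3

CLONE_EVENT = SIGTRAP | (PTRACE_EVENT_CLONE << 8)

stop_signal = CLONE_EVENT & 0xFF  # low byte: the delivered stop signal
event_code = CLONE_EVENT >> 8     # high byte: which ptrace event fired
```

Masking off the low byte recovers the signal and shifting right recovers the event, which is how a tracer distinguishes an ordinary SIGTRAP stop from a clone, exec, or exit event.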
Source code inlibdebug/ptrace/ptrace_constants.py class StopEvents(IntEnum):\n \"\"\"An enumeration of the stop events that ptrace can return.\"\"\"\n\n CLONE_EVENT = SIGTRAP | (PTRACE_EVENT_CLONE << 8)\n EXEC_EVENT = SIGTRAP | (PTRACE_EVENT_EXEC << 8)\n EXIT_EVENT = SIGTRAP | (PTRACE_EVENT_EXIT << 8)\n FORK_EVENT = SIGTRAP | (PTRACE_EVENT_FORK << 8)\n VFORK_EVENT = SIGTRAP | (PTRACE_EVENT_VFORK << 8)\n VFORK_DONE_EVENT = SIGTRAP | (PTRACE_EVENT_VFORK_DONE << 8)\n SECCOMP_EVENT = SIGTRAP | (PTRACE_EVENT_SECCOMP << 8)\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/","title":"libdebug.ptrace.ptrace_interface","text":""},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface","title":"PtraceInterface","text":" Bases: DebuggingInterface
The interface used by _InternalDebugger to communicate with the ptrace debugging backend.
libdebug/ptrace/ptrace_interface.py class PtraceInterface(DebuggingInterface):\n \"\"\"The interface used by `_InternalDebugger` to communicate with the `ptrace` debugging backend.\"\"\"\n\n process_id: int | None\n \"\"\"The process ID of the debugged process.\"\"\"\n\n detached: bool\n \"\"\"Whether the process was detached or not.\"\"\"\n\n _internal_debugger: InternalDebugger\n \"\"\"The internal debugger instance.\"\"\"\n\n def __init__(self: PtraceInterface) -> None:\n \"\"\"Initializes the PtraceInterface.\"\"\"\n self.lib_trace = libdebug_ptrace_binding.LibdebugPtraceInterface()\n\n self._internal_debugger = provide_internal_debugger(self)\n self.process_id = 0\n self.detached = False\n self._disabled_aslr = False\n\n def reset(self: PtraceInterface) -> None:\n \"\"\"Resets the state of the interface.\"\"\"\n self.lib_trace.cleanup()\n\n def _set_options(self: PtraceInterface) -> None:\n \"\"\"Sets the tracer options.\"\"\"\n self.lib_trace.set_ptrace_options()\n\n def run(self: PtraceInterface, redirect_pipes: bool) -> None:\n \"\"\"Runs the specified process.\"\"\"\n if not self._disabled_aslr and not self._internal_debugger.aslr_enabled:\n disable_self_aslr()\n self._disabled_aslr = True\n\n argv = self._internal_debugger.argv\n env = self._internal_debugger.env\n\n liblog.debugger(\"Running %s\", argv)\n\n # Setup ptrace wait status handler after debugging_context has been properly initialized\n with extend_internal_debugger(self):\n self.status_handler = PtraceStatusHandler()\n\n file_actions = []\n\n if redirect_pipes:\n # Creating pipes for stdin, stdout, stderr\n self.stdin_read, self.stdin_write = os.pipe()\n self.stdout_read, self.stdout_write = pty.openpty()\n self.stderr_read, self.stderr_write = pty.openpty()\n\n # Setting stdout, stderr to raw mode to avoid terminal control codes interfering with the\n # output\n tty.setraw(self.stdout_read)\n tty.setraw(self.stderr_read)\n\n flags = fcntl(self.stdout_read, F_GETFL)\n fcntl(self.stdout_read, 
F_SETFL, flags | os.O_NONBLOCK)\n\n flags = fcntl(self.stderr_read, F_GETFL)\n fcntl(self.stderr_read, F_SETFL, flags | os.O_NONBLOCK)\n\n file_actions.extend(\n [\n (POSIX_SPAWN_CLOSE, self.stdin_write),\n (POSIX_SPAWN_CLOSE, self.stdout_read),\n (POSIX_SPAWN_CLOSE, self.stderr_read),\n (POSIX_SPAWN_DUP2, self.stdin_read, 0),\n (POSIX_SPAWN_DUP2, self.stdout_write, 1),\n (POSIX_SPAWN_DUP2, self.stderr_write, 2),\n (POSIX_SPAWN_CLOSE, self.stdin_read),\n (POSIX_SPAWN_CLOSE, self.stdout_write),\n (POSIX_SPAWN_CLOSE, self.stderr_write),\n ],\n )\n\n # argv[1] is the length of the custom environment variables\n # argv[2:2 + env_len] is the custom environment variables\n # argv[2 + env_len] should be NULL\n # argv[2 + env_len + 1:] is the new argv\n if env is None:\n env_len = -1\n env = {}\n else:\n env_len = len(env)\n\n argv = [\n JUMPSTART_LOCATION,\n str(env_len),\n *[f\"{key}={value}\" for key, value in env.items()],\n \"NULL\",\n *argv,\n ]\n\n child_pid = posix_spawn(\n JUMPSTART_LOCATION,\n argv,\n os.environ,\n file_actions=file_actions,\n setpgroup=0,\n )\n\n self.process_id = child_pid\n self.detached = False\n self._internal_debugger.process_id = child_pid\n self.register_new_thread(child_pid)\n continue_to_entry_point = self._internal_debugger.autoreach_entrypoint\n self._setup_parent(continue_to_entry_point)\n\n if redirect_pipes:\n self._internal_debugger.pipe_manager = self._setup_pipe()\n else:\n self._internal_debugger.pipe_manager = None\n\n # https://stackoverflow.com/questions/58918188/why-is-stdin-not-propagated-to-child-process-of-different-process-group\n # We need to set the foreground process group to the child process group, otherwise the child process\n # will not receive the input from the terminal\n try:\n os.tcsetpgrp(0, child_pid)\n except OSError as e:\n liblog.debugger(\"Failed to set the foreground process group: %r\", e)\n\n def attach(self: PtraceInterface, pid: int) -> None:\n \"\"\"Attaches to the specified process.\n\n Args:\n 
pid (int): the pid of the process to attach to.\n \"\"\"\n # Setup ptrace wait status handler after debugging_context has been properly initialized\n with extend_internal_debugger(self):\n self.status_handler = PtraceStatusHandler()\n\n # Attach to all the tasks of the process\n self._attach_to_all_tasks(pid)\n\n self.process_id = pid\n self.detached = False\n self._internal_debugger.process_id = pid\n # If we are attaching to a process, we don't want to continue to the entry point\n # which we have probably already passed\n self._setup_parent(False)\n\n def _attach_to_all_tasks(self: PtraceInterface, pid: int) -> None:\n \"\"\"Attach to all the tasks of the process.\"\"\"\n tids = get_process_tasks(pid)\n for tid in tids:\n errno_val = self.lib_trace.attach(tid)\n if errno_val == errno.EPERM:\n raise PermissionError(\n errno_val,\n errno.errorcode[errno_val],\n \"You don't have permission to attach to the process. Did you check the ptrace_scope?\",\n )\n if errno_val:\n raise OSError(errno_val, errno.errorcode[errno_val])\n self.register_new_thread(tid)\n\n def detach(self: PtraceInterface) -> None:\n \"\"\"Detaches from the process.\"\"\"\n # We must disable all breakpoints before detaching\n for bp in list(self._internal_debugger.breakpoints.values()):\n if bp.enabled:\n try:\n self.unset_breakpoint(bp, delete=True)\n except RuntimeError as e:\n liblog.debugger(\"Error unsetting breakpoint %r\", e)\n\n self.lib_trace.detach_and_cont()\n\n self.detached = True\n\n # Reset the event type\n self._internal_debugger.resume_context.event_type.clear()\n\n # Reset the breakpoint hit\n self._internal_debugger.resume_context.event_hit_ref.clear()\n\n def kill(self: PtraceInterface) -> None:\n \"\"\"Instantly terminates the process.\"\"\"\n if not self.detached:\n self.lib_trace.detach_for_kill()\n else:\n # If we detached from the process, there's no reason to attempt to detach again\n # We can just kill the process\n os.kill(self.process_id, 9)\n 
os.waitpid(self.process_id, 0)\n\n def cont(self: PtraceInterface) -> None:\n \"\"\"Continues the execution of the process.\"\"\"\n # Forward signals to the threads\n if self._internal_debugger.resume_context.threads_with_signals_to_forward:\n self.forward_signal()\n\n # Enable all breakpoints if they were disabled for a single step\n changed = []\n\n for bp in self._internal_debugger.breakpoints.values():\n bp._disabled_for_step = False\n if bp._changed:\n changed.append(bp)\n bp._changed = False\n\n for bp in changed:\n if bp.enabled:\n self.set_breakpoint(bp, insert=False)\n else:\n self.unset_breakpoint(bp, delete=False)\n\n handle_syscalls = any(\n handler.enabled or handler.on_enter_pprint or handler.on_exit_pprint\n for handler in self._internal_debugger.handled_syscalls.values()\n )\n\n # Reset the event type\n self._internal_debugger.resume_context.event_type.clear()\n\n # Reset the breakpoint hit\n self._internal_debugger.resume_context.event_hit_ref.clear()\n\n self.lib_trace.cont_all_and_set_bps(handle_syscalls)\n\n def step(self: PtraceInterface, thread: ThreadContext) -> None:\n \"\"\"Executes a single instruction of the process.\n\n Args:\n thread (ThreadContext): The thread to step.\n \"\"\"\n # Disable all breakpoints for the single step\n for bp in self._internal_debugger.breakpoints.values():\n bp._disabled_for_step = True\n\n # Reset the event type\n self._internal_debugger.resume_context.event_type.clear()\n\n # Reset the breakpoint hit\n self._internal_debugger.resume_context.event_hit_ref.clear()\n\n self.lib_trace.step(thread.thread_id)\n\n self._internal_debugger.resume_context.is_a_step = True\n\n def step_until(self: PtraceInterface, thread: ThreadContext, address: int, max_steps: int) -> None:\n \"\"\"Executes instructions of the specified thread until the specified address is reached.\n\n Args:\n thread (ThreadContext): The thread to step.\n address (int): The address to reach.\n max_steps (int): The maximum number of steps to 
execute.\n \"\"\"\n # Disable all breakpoints for the single step\n for bp in self._internal_debugger.breakpoints.values():\n bp._disabled_for_step = True\n\n # Reset the event type\n self._internal_debugger.resume_context.event_type.clear()\n\n # Reset the breakpoint hit\n self._internal_debugger.resume_context.event_hit_ref.clear()\n\n self.lib_trace.step_until(thread.thread_id, address, max_steps)\n\n # As the wait is done internally, we must invalidate the cache\n invalidate_process_cache()\n\n def finish(self: PtraceInterface, thread: ThreadContext, heuristic: str) -> None:\n \"\"\"Continues execution until the current function returns.\n\n Args:\n thread (ThreadContext): The thread to step.\n heuristic (str): The heuristic to use.\n \"\"\"\n # Reset the event type\n self._internal_debugger.resume_context.event_type.clear()\n\n # Reset the breakpoint hit\n self._internal_debugger.resume_context.event_hit_ref.clear()\n\n if heuristic == \"step-mode\":\n self.lib_trace.stepping_finish(thread.thread_id, self._internal_debugger.arch == \"i386\")\n # As the wait is done internally, we must invalidate the cache\n invalidate_process_cache()\n elif heuristic == \"backtrace\":\n # Breakpoint to return address\n last_saved_instruction_pointer = thread.saved_ip\n\n # If a breakpoint already exists at the return address, we don't need to set a new one\n found = False\n ip_breakpoint = None\n\n for bp in self._internal_debugger.breakpoints.values():\n if bp.address == last_saved_instruction_pointer:\n found = True\n ip_breakpoint = bp\n break\n\n # If we find an existing breakpoint that is disabled, we enable it\n # but we need to disable it back after the command\n should_disable = False\n\n if not found:\n # Check if we have enough hardware breakpoints available\n # Otherwise we use a software breakpoint\n install_hw_bp = self.lib_trace.get_remaining_hw_breakpoint_count(thread.thread_id) > 0\n\n ip_breakpoint = Breakpoint(last_saved_instruction_pointer, 
hardware=install_hw_bp)\n self.set_breakpoint(ip_breakpoint)\n elif not ip_breakpoint.enabled:\n self._enable_breakpoint(ip_breakpoint)\n should_disable = True\n\n self.cont()\n self.wait()\n\n # Remove the breakpoint if it was set by us\n if not found:\n self.unset_breakpoint(ip_breakpoint)\n # Disable the breakpoint if it was just enabled by us\n elif should_disable:\n self._disable_breakpoint(ip_breakpoint)\n else:\n raise ValueError(f\"Unimplemented heuristic {heuristic}\")\n\n def next(self: PtraceInterface, thread: ThreadContext) -> None:\n \"\"\"Executes the next instruction of the process. If the instruction is a call, the debugger will continue until the called function returns.\"\"\"\n # Reset the event type\n self._internal_debugger.resume_context.event_type.clear()\n\n # Reset the breakpoint hit\n self._internal_debugger.resume_context.event_hit_ref.clear()\n\n opcode_window = thread.memory.read(thread.instruction_pointer, 8)\n\n # Check if the current instruction is a call and its skip amount\n is_call, skip = call_utilities_provider(self._internal_debugger.arch).get_call_and_skip_amount(opcode_window)\n\n if is_call:\n skip_address = thread.instruction_pointer + skip\n\n # If a breakpoint already exists at the return address, we don't need to set a new one\n found = False\n ip_breakpoint = self._internal_debugger.breakpoints.get(skip_address)\n\n if ip_breakpoint is not None:\n found = True\n\n # If we find an existing breakpoint that is disabled, we enable it\n # but we need to disable it back after the command\n should_disable = False\n\n if not found:\n # Check if we have enough hardware breakpoints available\n # Otherwise we use a software breakpoint\n install_hw_bp = self.lib_trace.get_remaining_hw_breakpoint_count(thread.thread_id) > 0\n ip_breakpoint = Breakpoint(skip_address, hardware=install_hw_bp)\n self.set_breakpoint(ip_breakpoint)\n elif not ip_breakpoint.enabled:\n self._enable_breakpoint(ip_breakpoint)\n should_disable = True\n\n 
self.cont()\n self.wait()\n\n # Remove the breakpoint if it was set by us\n if not found:\n self.unset_breakpoint(ip_breakpoint)\n # Disable the breakpoint if it was just enabled by us\n elif should_disable:\n self._disable_breakpoint(ip_breakpoint)\n else:\n # Step forward\n self.step(thread)\n self.wait()\n\n def _setup_pipe(self: PtraceInterface) -> None:\n \"\"\"Sets up the pipe manager for the child process.\n\n Close the read end for stdin and the write ends for stdout and stderr\n in the parent process since we are going to write to stdin and read from\n stdout and stderr\n \"\"\"\n try:\n os.close(self.stdin_read)\n os.close(self.stdout_write)\n os.close(self.stderr_write)\n except Exception as e:\n raise Exception(\"Closing fds failed: %r\", e) from e\n with extend_internal_debugger(self):\n return PipeManager(self.stdin_write, self.stdout_read, self.stderr_read)\n\n def _setup_parent(self: PtraceInterface, continue_to_entry_point: bool) -> None:\n \"\"\"Sets up the parent process after the child process has been created or attached to.\"\"\"\n liblog.debugger(\"Polling child process status\")\n self._internal_debugger.resume_context.is_startup = True\n self.wait()\n self._internal_debugger.resume_context.is_startup = False\n liblog.debugger(\"Child process ready, setting options\")\n self._set_options()\n liblog.debugger(\"Options set\")\n\n if continue_to_entry_point:\n # Now that the process is running, we must continue until we have reached the entry point\n entry_point = get_entry_point(self._internal_debugger.argv[0])\n\n # For PIE binaries, the entry point is a relative address\n entry_point = normalize_and_validate_address(entry_point, self.get_maps())\n\n bp = Breakpoint(entry_point, hardware=True)\n self.set_breakpoint(bp)\n self.cont()\n self.wait()\n\n self.unset_breakpoint(bp)\n\n invalidate_process_cache()\n\n def wait(self: PtraceInterface) -> None:\n \"\"\"Waits for the process to stop. 
Returns True if the wait has to be repeated.\"\"\"\n all_zombies = all(thread.zombie for thread in self._internal_debugger.threads)\n\n statuses = self.lib_trace.wait_all_and_update_regs(all_zombies)\n\n invalidate_process_cache()\n\n # Check the result of the waitpid and handle the changes.\n self.status_handler.manage_change(statuses)\n\n def forward_signal(self: PtraceInterface) -> None:\n \"\"\"Set the signals to forward to the threads.\"\"\"\n # change the global_state\n threads_with_signals_to_forward = self._internal_debugger.resume_context.threads_with_signals_to_forward\n\n signals_to_forward = []\n\n for thread in self._internal_debugger.threads:\n if (\n thread.thread_id in threads_with_signals_to_forward\n and thread._signal_number != 0\n and thread._signal_number not in self._internal_debugger.signals_to_block\n ):\n liblog.debugger(\n f\"Forwarding signal {thread.signal_number} to thread {thread.thread_id}\",\n )\n # Add the signal to the list of signals to forward\n signals_to_forward.append((thread.thread_id, thread.signal_number))\n # Reset the signal number\n thread._signal_number = 0\n\n self.lib_trace.forward_signals(signals_to_forward)\n\n # Clear the list of threads with signals to forward\n self._internal_debugger.resume_context.threads_with_signals_to_forward.clear()\n\n def migrate_to_gdb(self: PtraceInterface) -> None:\n \"\"\"Migrates the current process to GDB.\"\"\"\n # Delete any hardware breakpoint\n for bp in self._internal_debugger.breakpoints.values():\n if bp.hardware:\n for thread in self._internal_debugger.threads:\n self.lib_trace.unregister_hw_breakpoint(\n thread.thread_id,\n bp.address,\n )\n\n self.lib_trace.detach_for_migration()\n\n def migrate_from_gdb(self: PtraceInterface) -> None:\n \"\"\"Migrates the current process from GDB.\"\"\"\n invalidate_process_cache()\n self.status_handler.check_for_changes_in_threads(self.process_id)\n\n self.lib_trace.reattach_from_migration()\n\n # We have to reinstall any hardware 
breakpoint\n for bp in self._internal_debugger.breakpoints.values():\n if bp.hardware:\n for thread in self._internal_debugger.threads:\n self.lib_trace.register_hw_breakpoint(\n thread.thread_id,\n bp.address,\n int.from_bytes(bp.condition.encode(), sys.byteorder),\n bp.length,\n )\n\n def register_new_thread(self: PtraceInterface, new_thread_id: int) -> None:\n \"\"\"Registers a new thread.\n\n Args:\n new_thread_id (int): The new thread ID.\n \"\"\"\n # The FFI implementation returns a pointer to the register file\n register_file, fp_register_file = self.lib_trace.register_thread(new_thread_id)\n\n register_holder = register_holder_provider(self._internal_debugger.arch, register_file, fp_register_file)\n thread_context_class = thread_context_class_provider(self._internal_debugger.arch)\n\n with extend_internal_debugger(self._internal_debugger):\n thread = thread_context_class(new_thread_id, register_holder)\n\n self._internal_debugger.insert_new_thread(thread)\n\n # For any hardware breakpoints, we need to reapply them to the new thread\n for bp in self._internal_debugger.breakpoints.values():\n if bp.hardware:\n self.lib_trace.register_hw_breakpoint(\n thread.thread_id,\n bp.address,\n int.from_bytes(bp.condition.encode(), sys.byteorder),\n bp.length,\n )\n\n def unregister_thread(\n self: PtraceInterface,\n thread_id: int,\n exit_code: int | None,\n exit_signal: int | None,\n ) -> None:\n \"\"\"Unregisters a thread.\n\n Args:\n thread_id (int): The thread ID.\n exit_code (int): The exit code of the thread.\n exit_signal (int): The exit signal of the thread.\n \"\"\"\n self.lib_trace.unregister_thread(thread_id)\n\n self._internal_debugger.set_thread_as_dead(thread_id, exit_code=exit_code, exit_signal=exit_signal)\n\n def mark_thread_as_zombie(self: PtraceInterface, thread_id: int) -> None:\n \"\"\"Marks a thread as a zombie.\n\n Args:\n thread_id (int): The thread ID.\n \"\"\"\n self.lib_trace.mark_thread_as_zombie(thread_id)\n\n def _set_sw_breakpoint(self: 
PtraceInterface, bp: Breakpoint) -> None:\n \"\"\"Sets a software breakpoint at the specified address.\n\n Args:\n bp (Breakpoint): The breakpoint to set.\n \"\"\"\n self.lib_trace.register_breakpoint(bp.address)\n\n def _unset_sw_breakpoint(self: PtraceInterface, bp: Breakpoint) -> None:\n \"\"\"Unsets a software breakpoint at the specified address.\n\n Args:\n bp (Breakpoint): The breakpoint to unset.\n \"\"\"\n self.lib_trace.unregister_breakpoint(bp.address)\n\n def _enable_breakpoint(self: PtraceInterface, bp: Breakpoint) -> None:\n \"\"\"Enables a breakpoint at the specified address.\n\n Args:\n bp (Breakpoint): The breakpoint to enable.\n \"\"\"\n self.lib_trace.enable_breakpoint(bp.address)\n\n def _disable_breakpoint(self: PtraceInterface, bp: Breakpoint) -> None:\n \"\"\"Disables a breakpoint at the specified address.\n\n Args:\n bp (Breakpoint): The breakpoint to disable.\n \"\"\"\n self.lib_trace.disable_breakpoint(bp.address)\n\n def set_breakpoint(self: PtraceInterface, bp: Breakpoint, insert: bool = True) -> None:\n \"\"\"Sets a breakpoint at the specified address.\n\n Args:\n bp (Breakpoint): The breakpoint to set.\n insert (bool): Whether the breakpoint has to be inserted or just enabled.\n \"\"\"\n if bp.hardware:\n for thread in self._internal_debugger.threads:\n if bp.condition == \"x\":\n remaining = self.lib_trace.get_remaining_hw_breakpoint_count(thread.thread_id)\n else:\n remaining = self.lib_trace.get_remaining_hw_watchpoint_count(thread.thread_id)\n\n if not remaining:\n raise ValueError(\"No more hardware breakpoints of this type available\")\n\n self.lib_trace.register_hw_breakpoint(\n thread.thread_id,\n bp.address,\n int.from_bytes(bp.condition.encode(), sys.byteorder),\n bp.length,\n )\n elif insert:\n self._set_sw_breakpoint(bp)\n else:\n self._enable_breakpoint(bp)\n\n if insert:\n self._internal_debugger.breakpoints[bp.address] = bp\n\n def unset_breakpoint(self: PtraceInterface, bp: Breakpoint, delete: bool = True) -> None:\n 
\"\"\"Restores the breakpoint at the specified address.\n\n Args:\n bp (Breakpoint): The breakpoint to unset.\n delete (bool): Whether the breakpoint has to be deleted or just disabled.\n \"\"\"\n if bp.hardware:\n for thread in self._internal_debugger.threads:\n self.lib_trace.unregister_hw_breakpoint(thread.thread_id, bp.address)\n elif delete:\n self._unset_sw_breakpoint(bp)\n else:\n self._disable_breakpoint(bp)\n\n if delete:\n del self._internal_debugger.breakpoints[bp.address]\n\n def set_syscall_handler(self: PtraceInterface, handler: SyscallHandler) -> None:\n \"\"\"Sets a handler for a syscall.\n\n Args:\n handler (HandledSyscall): The syscall to set.\n \"\"\"\n self._internal_debugger.handled_syscalls[handler.syscall_number] = handler\n\n def unset_syscall_handler(self: PtraceInterface, handler: SyscallHandler) -> None:\n \"\"\"Unsets a handler for a syscall.\n\n Args:\n handler (HandledSyscall): The syscall to unset.\n \"\"\"\n del self._internal_debugger.handled_syscalls[handler.syscall_number]\n\n def set_signal_catcher(self: PtraceInterface, catcher: SignalCatcher) -> None:\n \"\"\"Sets a catcher for a signal.\n\n Args:\n catcher (CaughtSignal): The signal to set.\n \"\"\"\n self._internal_debugger.caught_signals[catcher.signal_number] = catcher\n\n def unset_signal_catcher(self: PtraceInterface, catcher: SignalCatcher) -> None:\n \"\"\"Unset a catcher for a signal.\n\n Args:\n catcher (CaughtSignal): The signal to unset.\n \"\"\"\n del self._internal_debugger.caught_signals[catcher.signal_number]\n\n def peek_memory(self: PtraceInterface, address: int) -> int:\n \"\"\"Reads the memory at the specified address.\"\"\"\n try:\n result = self.lib_trace.peek_data(address)\n except RuntimeError as e:\n raise OSError(\"Invalid memory location\") from e\n except TypeError as e:\n # This is not equal to sys.maxsize, as the address is unsigned\n plat_ulong_max = 256 ** get_platform_gp_register_size(self._internal_debugger.arch) - 1\n\n if abs(address) > 
plat_ulong_max:\n # If we are here, the type conversion failed because\n # address > (256**sizeof(unsigned long)) on this platform\n # We raise this as OSError for consistency, as the\n # address is certainly invalid\n raise OSError(f\"Address {address:#x} is not valid for this architecture\") from e\n\n raise RuntimeError(\"Unexpected error\") from e\n\n liblog.debugger(\n \"PEEKDATA at address %d returned with result %x\",\n address,\n result,\n )\n return result\n\n def poke_memory(self: PtraceInterface, address: int, value: int) -> None:\n \"\"\"Writes the memory at the specified address.\"\"\"\n try:\n result = self.lib_trace.poke_data(address, value)\n except RuntimeError as e:\n raise OSError(\"Invalid memory location\") from e\n except TypeError as e:\n # This is not equal to sys.maxsize, as the address is unsigned\n plat_ulong_max = 256 ** get_platform_gp_register_size(self._internal_debugger.arch) - 1\n\n if abs(address) > plat_ulong_max:\n # See the comment in peek_memory\n raise OSError(f\"Address {address:#x} is not valid for this architecture\") from e\n\n if abs(value) > plat_ulong_max:\n # See the comment in peek_memory\n raise RuntimeError(f\"Requested write {value:#x} does not fit in a single operation\") from e\n\n raise RuntimeError(\"Unexpected error\") from e\n\n liblog.debugger(\n \"POKEDATA at address %d returned with result %d\",\n address,\n result,\n )\n\n def fetch_fp_registers(self: PtraceInterface, registers: Registers) -> None:\n \"\"\"Fetches the floating-point registers of the specified thread.\n\n Args:\n registers (Registers): The registers instance to update.\n \"\"\"\n liblog.debugger(\"Fetching floating-point registers for thread %d\", registers._thread_id)\n self.lib_trace.get_fp_regs(registers._thread_id)\n\n def flush_fp_registers(self: PtraceInterface, _: Registers) -> None:\n \"\"\"Flushes the floating-point registers of the specified thread.\n\n Args:\n registers (Registers): The registers instance to update.\n \"\"\"\n 
raise NotImplementedError(\"Flushing floating-point registers is automatically handled by the native code.\")\n\n def _get_event_msg(self: PtraceInterface, thread_id: int) -> int:\n \"\"\"Returns the event message.\"\"\"\n return self.lib_trace.get_event_msg(thread_id)\n\n def get_maps(self: PtraceInterface) -> MemoryMapList[MemoryMap]:\n \"\"\"Returns the memory maps of the process.\"\"\"\n with extend_internal_debugger(self):\n return get_process_maps(self.process_id)\n\n def get_hit_watchpoint(self: PtraceInterface, thread_id: int) -> Breakpoint:\n \"\"\"Returns the watchpoint that has been hit.\"\"\"\n address = self.lib_trace.get_hit_hw_breakpoint(thread_id)\n\n if not address:\n return None\n\n bp = self._internal_debugger.breakpoints[address]\n\n if bp.condition != \"x\":\n return bp\n\n return None\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface._internal_debugger","title":"_internal_debugger = provide_internal_debugger(self) instance-attribute","text":"The internal debugger instance.
"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.detached","title":"detached = False instance-attribute","text":"Whether the process was detached or not.
"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.process_id","title":"process_id = 0 instance-attribute","text":"The process ID of the debugged process.
"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.__init__","title":"__init__()","text":"Initializes the PtraceInterface.
Source code inlibdebug/ptrace/ptrace_interface.py def __init__(self: PtraceInterface) -> None:\n \"\"\"Initializes the PtraceInterface.\"\"\"\n self.lib_trace = libdebug_ptrace_binding.LibdebugPtraceInterface()\n\n self._internal_debugger = provide_internal_debugger(self)\n self.process_id = 0\n self.detached = False\n self._disabled_aslr = False\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface._attach_to_all_tasks","title":"_attach_to_all_tasks(pid)","text":"Attach to all the tasks of the process.
Source code inlibdebug/ptrace/ptrace_interface.py def _attach_to_all_tasks(self: PtraceInterface, pid: int) -> None:\n \"\"\"Attach to all the tasks of the process.\"\"\"\n tids = get_process_tasks(pid)\n for tid in tids:\n errno_val = self.lib_trace.attach(tid)\n if errno_val == errno.EPERM:\n raise PermissionError(\n errno_val,\n errno.errorcode[errno_val],\n \"You don't have permission to attach to the process. Did you check the ptrace_scope?\",\n )\n if errno_val:\n raise OSError(errno_val, errno.errorcode[errno_val])\n self.register_new_thread(tid)\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface._disable_breakpoint","title":"_disable_breakpoint(bp)","text":"Disables a breakpoint at the specified address.
Parameters:
Name Type Description Defaultbp Breakpoint The breakpoint to disable.
required Source code inlibdebug/ptrace/ptrace_interface.py def _disable_breakpoint(self: PtraceInterface, bp: Breakpoint) -> None:\n \"\"\"Disables a breakpoint at the specified address.\n\n Args:\n bp (Breakpoint): The breakpoint to disable.\n \"\"\"\n self.lib_trace.disable_breakpoint(bp.address)\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface._enable_breakpoint","title":"_enable_breakpoint(bp)","text":"Enables a breakpoint at the specified address.
Parameters:
Name Type Description Defaultbp Breakpoint The breakpoint to enable.
required Source code inlibdebug/ptrace/ptrace_interface.py def _enable_breakpoint(self: PtraceInterface, bp: Breakpoint) -> None:\n \"\"\"Enables a breakpoint at the specified address.\n\n Args:\n bp (Breakpoint): The breakpoint to enable.\n \"\"\"\n self.lib_trace.enable_breakpoint(bp.address)\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface._get_event_msg","title":"_get_event_msg(thread_id)","text":"Returns the event message.
Source code inlibdebug/ptrace/ptrace_interface.py def _get_event_msg(self: PtraceInterface, thread_id: int) -> int:\n \"\"\"Returns the event message.\"\"\"\n return self.lib_trace.get_event_msg(thread_id)\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface._set_options","title":"_set_options()","text":"Sets the tracer options.
Source code inlibdebug/ptrace/ptrace_interface.py def _set_options(self: PtraceInterface) -> None:\n \"\"\"Sets the tracer options.\"\"\"\n self.lib_trace.set_ptrace_options()\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface._set_sw_breakpoint","title":"_set_sw_breakpoint(bp)","text":"Sets a software breakpoint at the specified address.
Parameters:
Name Type Description Defaultbp Breakpoint The breakpoint to set.
required Source code inlibdebug/ptrace/ptrace_interface.py def _set_sw_breakpoint(self: PtraceInterface, bp: Breakpoint) -> None:\n \"\"\"Sets a software breakpoint at the specified address.\n\n Args:\n bp (Breakpoint): The breakpoint to set.\n \"\"\"\n self.lib_trace.register_breakpoint(bp.address)\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface._setup_parent","title":"_setup_parent(continue_to_entry_point)","text":"Sets up the parent process after the child process has been created or attached to.
Source code inlibdebug/ptrace/ptrace_interface.py def _setup_parent(self: PtraceInterface, continue_to_entry_point: bool) -> None:\n \"\"\"Sets up the parent process after the child process has been created or attached to.\"\"\"\n liblog.debugger(\"Polling child process status\")\n self._internal_debugger.resume_context.is_startup = True\n self.wait()\n self._internal_debugger.resume_context.is_startup = False\n liblog.debugger(\"Child process ready, setting options\")\n self._set_options()\n liblog.debugger(\"Options set\")\n\n if continue_to_entry_point:\n # Now that the process is running, we must continue until we have reached the entry point\n entry_point = get_entry_point(self._internal_debugger.argv[0])\n\n # For PIE binaries, the entry point is a relative address\n entry_point = normalize_and_validate_address(entry_point, self.get_maps())\n\n bp = Breakpoint(entry_point, hardware=True)\n self.set_breakpoint(bp)\n self.cont()\n self.wait()\n\n self.unset_breakpoint(bp)\n\n invalidate_process_cache()\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface._setup_pipe","title":"_setup_pipe()","text":"Sets up the pipe manager for the child process.
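The PIE rebasing that `_setup_parent` performs via `normalize_and_validate_address` can be sketched in isolation. This is a minimal illustration, not libdebug's actual helper; `base_address` and `pie` are hypothetical stand-ins for values the library derives from the ELF header and the process memory maps:

```python
def normalize_entry_point(entry_point: int, base_address: int, pie: bool) -> int:
    # PIE binaries store a file-relative entry point in the ELF header,
    # so it must be rebased onto the load address found in the memory
    # maps; non-PIE binaries already store an absolute address.
    return base_address + entry_point if pie else entry_point
```

For example, a PIE entry point of `0x1040` loaded at `0x555555554000` resolves to `0x555555555040`, while a non-PIE entry point passes through unchanged.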
Close the read end for stdin and the write ends for stdout and stderr in the parent process, since we are going to write to stdin and read from stdout and stderr.
Source code inlibdebug/ptrace/ptrace_interface.py def _setup_pipe(self: PtraceInterface) -> None:\n \"\"\"Sets up the pipe manager for the child process.\n\n Close the read end for stdin and the write ends for stdout and stderr\n in the parent process since we are going to write to stdin and read from\n stdout and stderr\n \"\"\"\n try:\n os.close(self.stdin_read)\n os.close(self.stdout_write)\n os.close(self.stderr_write)\n except Exception as e:\n raise Exception(\"Closing fds failed: %r\", e) from e\n with extend_internal_debugger(self):\n return PipeManager(self.stdin_write, self.stdout_read, self.stderr_read)\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface._unset_sw_breakpoint","title":"_unset_sw_breakpoint(bp)","text":"Unsets a software breakpoint at the specified address.
Parameters:
Name Type Description Defaultbp Breakpoint The breakpoint to unset.
required Source code inlibdebug/ptrace/ptrace_interface.py def _unset_sw_breakpoint(self: PtraceInterface, bp: Breakpoint) -> None:\n \"\"\"Unsets a software breakpoint at the specified address.\n\n Args:\n bp (Breakpoint): The breakpoint to unset.\n \"\"\"\n self.lib_trace.unregister_breakpoint(bp.address)\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.attach","title":"attach(pid)","text":"Attaches to the specified process.
Parameters:
Name Type Description Defaultpid int the pid of the process to attach to.
required Source code inlibdebug/ptrace/ptrace_interface.py def attach(self: PtraceInterface, pid: int) -> None:\n \"\"\"Attaches to the specified process.\n\n Args:\n pid (int): the pid of the process to attach to.\n \"\"\"\n # Setup ptrace wait status handler after debugging_context has been properly initialized\n with extend_internal_debugger(self):\n self.status_handler = PtraceStatusHandler()\n\n # Attach to all the tasks of the process\n self._attach_to_all_tasks(pid)\n\n self.process_id = pid\n self.detached = False\n self._internal_debugger.process_id = pid\n # If we are attaching to a process, we don't want to continue to the entry point\n # which we have probably already passed\n self._setup_parent(False)\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.cont","title":"cont()","text":"Continues the execution of the process.
Source code inlibdebug/ptrace/ptrace_interface.py def cont(self: PtraceInterface) -> None:\n \"\"\"Continues the execution of the process.\"\"\"\n # Forward signals to the threads\n if self._internal_debugger.resume_context.threads_with_signals_to_forward:\n self.forward_signal()\n\n # Enable all breakpoints if they were disabled for a single step\n changed = []\n\n for bp in self._internal_debugger.breakpoints.values():\n bp._disabled_for_step = False\n if bp._changed:\n changed.append(bp)\n bp._changed = False\n\n for bp in changed:\n if bp.enabled:\n self.set_breakpoint(bp, insert=False)\n else:\n self.unset_breakpoint(bp, delete=False)\n\n handle_syscalls = any(\n handler.enabled or handler.on_enter_pprint or handler.on_exit_pprint\n for handler in self._internal_debugger.handled_syscalls.values()\n )\n\n # Reset the event type\n self._internal_debugger.resume_context.event_type.clear()\n\n # Reset the breakpoint hit\n self._internal_debugger.resume_context.event_hit_ref.clear()\n\n self.lib_trace.cont_all_and_set_bps(handle_syscalls)\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.detach","title":"detach()","text":"Detaches from the process.
Source code inlibdebug/ptrace/ptrace_interface.py def detach(self: PtraceInterface) -> None:\n \"\"\"Detaches from the process.\"\"\"\n # We must disable all breakpoints before detaching\n for bp in list(self._internal_debugger.breakpoints.values()):\n if bp.enabled:\n try:\n self.unset_breakpoint(bp, delete=True)\n except RuntimeError as e:\n liblog.debugger(\"Error unsetting breakpoint %r\", e)\n\n self.lib_trace.detach_and_cont()\n\n self.detached = True\n\n # Reset the event type\n self._internal_debugger.resume_context.event_type.clear()\n\n # Reset the breakpoint hit\n self._internal_debugger.resume_context.event_hit_ref.clear()\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.fetch_fp_registers","title":"fetch_fp_registers(registers)","text":"Fetches the floating-point registers of the specified thread.
Parameters:
Name Type Description Defaultregisters Registers The registers instance to update.
required Source code inlibdebug/ptrace/ptrace_interface.py def fetch_fp_registers(self: PtraceInterface, registers: Registers) -> None:\n \"\"\"Fetches the floating-point registers of the specified thread.\n\n Args:\n registers (Registers): The registers instance to update.\n \"\"\"\n liblog.debugger(\"Fetching floating-point registers for thread %d\", registers._thread_id)\n self.lib_trace.get_fp_regs(registers._thread_id)\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.finish","title":"finish(thread, heuristic)","text":"Continues execution until the current function returns.
Parameters:
Name Type Description Defaultthread ThreadContext The thread to step.
requiredheuristic str The heuristic to use.
required Source code inlibdebug/ptrace/ptrace_interface.py def finish(self: PtraceInterface, thread: ThreadContext, heuristic: str) -> None:\n \"\"\"Continues execution until the current function returns.\n\n Args:\n thread (ThreadContext): The thread to step.\n heuristic (str): The heuristic to use.\n \"\"\"\n # Reset the event type\n self._internal_debugger.resume_context.event_type.clear()\n\n # Reset the breakpoint hit\n self._internal_debugger.resume_context.event_hit_ref.clear()\n\n if heuristic == \"step-mode\":\n self.lib_trace.stepping_finish(thread.thread_id, self._internal_debugger.arch == \"i386\")\n # As the wait is done internally, we must invalidate the cache\n invalidate_process_cache()\n elif heuristic == \"backtrace\":\n # Breakpoint to return address\n last_saved_instruction_pointer = thread.saved_ip\n\n # If a breakpoint already exists at the return address, we don't need to set a new one\n found = False\n ip_breakpoint = None\n\n for bp in self._internal_debugger.breakpoints.values():\n if bp.address == last_saved_instruction_pointer:\n found = True\n ip_breakpoint = bp\n break\n\n # If we find an existing breakpoint that is disabled, we enable it\n # but we need to disable it back after the command\n should_disable = False\n\n if not found:\n # Check if we have enough hardware breakpoints available\n # Otherwise we use a software breakpoint\n install_hw_bp = self.lib_trace.get_remaining_hw_breakpoint_count(thread.thread_id) > 0\n\n ip_breakpoint = Breakpoint(last_saved_instruction_pointer, hardware=install_hw_bp)\n self.set_breakpoint(ip_breakpoint)\n elif not ip_breakpoint.enabled:\n self._enable_breakpoint(ip_breakpoint)\n should_disable = True\n\n self.cont()\n self.wait()\n\n # Remove the breakpoint if it was set by us\n if not found:\n self.unset_breakpoint(ip_breakpoint)\n # Disable the breakpoint if it was just enabled by us\n elif should_disable:\n self._disable_breakpoint(ip_breakpoint)\n else:\n raise ValueError(f\"Unimplemented 
heuristic {heuristic}\")\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.flush_fp_registers","title":"flush_fp_registers(_)","text":"Flushes the floating-point registers of the specified thread.
Parameters:
Name Type Description Defaultregisters Registers The registers instance to update.
required Source code inlibdebug/ptrace/ptrace_interface.py def flush_fp_registers(self: PtraceInterface, _: Registers) -> None:\n \"\"\"Flushes the floating-point registers of the specified thread.\n\n Args:\n registers (Registers): The registers instance to update.\n \"\"\"\n raise NotImplementedError(\"Flushing floating-point registers is automatically handled by the native code.\")\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.forward_signal","title":"forward_signal()","text":"Set the signals to forward to the threads.
Source code inlibdebug/ptrace/ptrace_interface.py def forward_signal(self: PtraceInterface) -> None:\n \"\"\"Set the signals to forward to the threads.\"\"\"\n # change the global_state\n threads_with_signals_to_forward = self._internal_debugger.resume_context.threads_with_signals_to_forward\n\n signals_to_forward = []\n\n for thread in self._internal_debugger.threads:\n if (\n thread.thread_id in threads_with_signals_to_forward\n and thread._signal_number != 0\n and thread._signal_number not in self._internal_debugger.signals_to_block\n ):\n liblog.debugger(\n f\"Forwarding signal {thread.signal_number} to thread {thread.thread_id}\",\n )\n # Add the signal to the list of signals to forward\n signals_to_forward.append((thread.thread_id, thread.signal_number))\n # Reset the signal number\n thread._signal_number = 0\n\n self.lib_trace.forward_signals(signals_to_forward)\n\n # Clear the list of threads with signals to forward\n self._internal_debugger.resume_context.threads_with_signals_to_forward.clear()\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.get_hit_watchpoint","title":"get_hit_watchpoint(thread_id)","text":"Returns the watchpoint that has been hit.
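The per-thread filter inside `forward_signal` can be expressed as a pure function. A sketch under the assumption that each thread is modeled as a `(tid, pending_signal_number)` pair:

```python
def select_signals_to_forward(
    threads: list[tuple[int, int]],
    flagged_tids: set[int],
    blocked_signals: set[int],
) -> list[tuple[int, int]]:
    # A signal is forwarded only if its thread was flagged in the resume
    # context, the pending signal number is nonzero, and the signal is
    # not in the block list.
    return [
        (tid, signum)
        for tid, signum in threads
        if tid in flagged_tids and signum != 0 and signum not in blocked_signals
    ]
```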
Source code inlibdebug/ptrace/ptrace_interface.py def get_hit_watchpoint(self: PtraceInterface, thread_id: int) -> Breakpoint:\n \"\"\"Returns the watchpoint that has been hit.\"\"\"\n address = self.lib_trace.get_hit_hw_breakpoint(thread_id)\n\n if not address:\n return None\n\n bp = self._internal_debugger.breakpoints[address]\n\n if bp.condition != \"x\":\n return bp\n\n return None\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.get_maps","title":"get_maps()","text":"Returns the memory maps of the process.
Source code inlibdebug/ptrace/ptrace_interface.py def get_maps(self: PtraceInterface) -> MemoryMapList[MemoryMap]:\n \"\"\"Returns the memory maps of the process.\"\"\"\n with extend_internal_debugger(self):\n return get_process_maps(self.process_id)\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.kill","title":"kill()","text":"Instantly terminates the process.
Source code inlibdebug/ptrace/ptrace_interface.py def kill(self: PtraceInterface) -> None:\n \"\"\"Instantly terminates the process.\"\"\"\n if not self.detached:\n self.lib_trace.detach_for_kill()\n else:\n # If we detached from the process, there's no reason to attempt to detach again\n # We can just kill the process\n os.kill(self.process_id, 9)\n os.waitpid(self.process_id, 0)\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.mark_thread_as_zombie","title":"mark_thread_as_zombie(thread_id)","text":"Marks a thread as a zombie.
Parameters:
Name Type Description Defaultthread_id int The thread ID.
required Source code inlibdebug/ptrace/ptrace_interface.py def mark_thread_as_zombie(self: PtraceInterface, thread_id: int) -> None:\n \"\"\"Marks a thread as a zombie.\n\n Args:\n thread_id (int): The thread ID.\n \"\"\"\n self.lib_trace.mark_thread_as_zombie(thread_id)\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.migrate_from_gdb","title":"migrate_from_gdb()","text":"Migrates the current process from GDB.
Source code inlibdebug/ptrace/ptrace_interface.py def migrate_from_gdb(self: PtraceInterface) -> None:\n \"\"\"Migrates the current process from GDB.\"\"\"\n invalidate_process_cache()\n self.status_handler.check_for_changes_in_threads(self.process_id)\n\n self.lib_trace.reattach_from_migration()\n\n # We have to reinstall any hardware breakpoint\n for bp in self._internal_debugger.breakpoints.values():\n if bp.hardware:\n for thread in self._internal_debugger.threads:\n self.lib_trace.register_hw_breakpoint(\n thread.thread_id,\n bp.address,\n int.from_bytes(bp.condition.encode(), sys.byteorder),\n bp.length,\n )\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.migrate_to_gdb","title":"migrate_to_gdb()","text":"Migrates the current process to GDB.
Source code inlibdebug/ptrace/ptrace_interface.py def migrate_to_gdb(self: PtraceInterface) -> None:\n \"\"\"Migrates the current process to GDB.\"\"\"\n # Delete any hardware breakpoint\n for bp in self._internal_debugger.breakpoints.values():\n if bp.hardware:\n for thread in self._internal_debugger.threads:\n self.lib_trace.unregister_hw_breakpoint(\n thread.thread_id,\n bp.address,\n )\n\n self.lib_trace.detach_for_migration()\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.next","title":"next(thread)","text":"Executes the next instruction of the process. If the instruction is a call, the debugger will continue until the called function returns.
Source code inlibdebug/ptrace/ptrace_interface.py def next(self: PtraceInterface, thread: ThreadContext) -> None:\n \"\"\"Executes the next instruction of the process. If the instruction is a call, the debugger will continue until the called function returns.\"\"\"\n # Reset the event type\n self._internal_debugger.resume_context.event_type.clear()\n\n # Reset the breakpoint hit\n self._internal_debugger.resume_context.event_hit_ref.clear()\n\n opcode_window = thread.memory.read(thread.instruction_pointer, 8)\n\n # Check if the current instruction is a call and its skip amount\n is_call, skip = call_utilities_provider(self._internal_debugger.arch).get_call_and_skip_amount(opcode_window)\n\n if is_call:\n skip_address = thread.instruction_pointer + skip\n\n # If a breakpoint already exists at the return address, we don't need to set a new one\n found = False\n ip_breakpoint = self._internal_debugger.breakpoints.get(skip_address)\n\n if ip_breakpoint is not None:\n found = True\n\n # If we find an existing breakpoint that is disabled, we enable it\n # but we need to disable it back after the command\n should_disable = False\n\n if not found:\n # Check if we have enough hardware breakpoints available\n # Otherwise we use a software breakpoint\n install_hw_bp = self.lib_trace.get_remaining_hw_breakpoint_count(thread.thread_id) > 0\n ip_breakpoint = Breakpoint(skip_address, hardware=install_hw_bp)\n self.set_breakpoint(ip_breakpoint)\n elif not ip_breakpoint.enabled:\n self._enable_breakpoint(ip_breakpoint)\n should_disable = True\n\n self.cont()\n self.wait()\n\n # Remove the breakpoint if it was set by us\n if not found:\n self.unset_breakpoint(ip_breakpoint)\n # Disable the breakpoint if it was just enabled by us\n elif should_disable:\n self._disable_breakpoint(ip_breakpoint)\n else:\n # Step forward\n self.step(thread)\n 
self.wait()\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.peek_memory","title":"peek_memory(address)","text":"Reads the memory at the specified address.
Source code inlibdebug/ptrace/ptrace_interface.py def peek_memory(self: PtraceInterface, address: int) -> int:\n \"\"\"Reads the memory at the specified address.\"\"\"\n try:\n result = self.lib_trace.peek_data(address)\n except RuntimeError as e:\n raise OSError(\"Invalid memory location\") from e\n except TypeError as e:\n # This is not equal to sys.maxsize, as the address is unsigned\n plat_ulong_max = 256 ** get_platform_gp_register_size(self._internal_debugger.arch) - 1\n\n if abs(address) > plat_ulong_max:\n # If we are here, the type conversion failed because\n # address > (256**sizeof(unsigned long)) on this platform\n # We raise this as OSError for consistency, as the\n # address is certainly invalid\n raise OSError(f\"Address {address:#x} is not valid for this architecture\") from e\n\n raise RuntimeError(\"Unexpected error\") from e\n\n liblog.debugger(\n \"PEEKDATA at address %d returned with result %x\",\n address,\n result,\n )\n return result\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.poke_memory","title":"poke_memory(address, value)","text":"Writes the memory at the specified address.
Source code inlibdebug/ptrace/ptrace_interface.py def poke_memory(self: PtraceInterface, address: int, value: int) -> None:\n \"\"\"Writes the memory at the specified address.\"\"\"\n try:\n result = self.lib_trace.poke_data(address, value)\n except RuntimeError as e:\n raise OSError(\"Invalid memory location\") from e\n except TypeError as e:\n # This is not equal to sys.maxsize, as the address is unsigned\n plat_ulong_max = 256 ** get_platform_gp_register_size(self._internal_debugger.arch) - 1\n\n if abs(address) > plat_ulong_max:\n # See the comment in peek_memory\n raise OSError(f\"Address {address:#x} is not valid for this architecture\") from e\n\n if abs(value) > plat_ulong_max:\n # See the comment in peek_memory\n raise RuntimeError(f\"Requested write {value:#x} does not fit in a single operation\") from e\n\n raise RuntimeError(\"Unexpected error\") from e\n\n liblog.debugger(\n \"POKEDATA at address %d returned with result %d\",\n address,\n result,\n )\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.register_new_thread","title":"register_new_thread(new_thread_id)","text":"Registers a new thread.
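The word-size check shared by `peek_memory` and `poke_memory` boils down to a comparison against `256**sizeof(unsigned long) - 1`. A standalone sketch, where the register-size argument stands in for `get_platform_gp_register_size`:

```python
def platform_ulong_max(gp_register_size: int) -> int:
    # Largest value an unsigned long can hold on a platform whose
    # general-purpose registers are gp_register_size bytes wide. This is
    # deliberately not sys.maxsize, which is the largest *signed* word.
    return 256 ** gp_register_size - 1

def fits_in_ptrace_word(value: int, gp_register_size: int) -> bool:
    # Values whose magnitude exceeds the platform word cannot be passed
    # to PTRACE_PEEKDATA/PTRACE_POKEDATA in a single operation.
    return abs(value) <= platform_ulong_max(gp_register_size)
```

On a 64-bit target the cutoff is `2**64 - 1`; on a 32-bit target such as i386 it is `2**32 - 1`.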
Parameters:
Name Type Description Defaultnew_thread_id int The new thread ID.
required Source code inlibdebug/ptrace/ptrace_interface.py def register_new_thread(self: PtraceInterface, new_thread_id: int) -> None:\n \"\"\"Registers a new thread.\n\n Args:\n new_thread_id (int): The new thread ID.\n \"\"\"\n # The FFI implementation returns a pointer to the register file\n register_file, fp_register_file = self.lib_trace.register_thread(new_thread_id)\n\n register_holder = register_holder_provider(self._internal_debugger.arch, register_file, fp_register_file)\n thread_context_class = thread_context_class_provider(self._internal_debugger.arch)\n\n with extend_internal_debugger(self._internal_debugger):\n thread = thread_context_class(new_thread_id, register_holder)\n\n self._internal_debugger.insert_new_thread(thread)\n\n # For any hardware breakpoints, we need to reapply them to the new thread\n for bp in self._internal_debugger.breakpoints.values():\n if bp.hardware:\n self.lib_trace.register_hw_breakpoint(\n thread.thread_id,\n bp.address,\n int.from_bytes(bp.condition.encode(), sys.byteorder),\n bp.length,\n )\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.reset","title":"reset()","text":"Resets the state of the interface.
Source code inlibdebug/ptrace/ptrace_interface.py def reset(self: PtraceInterface) -> None:\n \"\"\"Resets the state of the interface.\"\"\"\n self.lib_trace.cleanup()\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.run","title":"run(redirect_pipes)","text":"Runs the specified process.
Source code inlibdebug/ptrace/ptrace_interface.py def run(self: PtraceInterface, redirect_pipes: bool) -> None:\n \"\"\"Runs the specified process.\"\"\"\n if not self._disabled_aslr and not self._internal_debugger.aslr_enabled:\n disable_self_aslr()\n self._disabled_aslr = True\n\n argv = self._internal_debugger.argv\n env = self._internal_debugger.env\n\n liblog.debugger(\"Running %s\", argv)\n\n # Setup ptrace wait status handler after debugging_context has been properly initialized\n with extend_internal_debugger(self):\n self.status_handler = PtraceStatusHandler()\n\n file_actions = []\n\n if redirect_pipes:\n # Creating pipes for stdin, stdout, stderr\n self.stdin_read, self.stdin_write = os.pipe()\n self.stdout_read, self.stdout_write = pty.openpty()\n self.stderr_read, self.stderr_write = pty.openpty()\n\n # Setting stdout, stderr to raw mode to avoid terminal control codes interfering with the\n # output\n tty.setraw(self.stdout_read)\n tty.setraw(self.stderr_read)\n\n flags = fcntl(self.stdout_read, F_GETFL)\n fcntl(self.stdout_read, F_SETFL, flags | os.O_NONBLOCK)\n\n flags = fcntl(self.stderr_read, F_GETFL)\n fcntl(self.stderr_read, F_SETFL, flags | os.O_NONBLOCK)\n\n file_actions.extend(\n [\n (POSIX_SPAWN_CLOSE, self.stdin_write),\n (POSIX_SPAWN_CLOSE, self.stdout_read),\n (POSIX_SPAWN_CLOSE, self.stderr_read),\n (POSIX_SPAWN_DUP2, self.stdin_read, 0),\n (POSIX_SPAWN_DUP2, self.stdout_write, 1),\n (POSIX_SPAWN_DUP2, self.stderr_write, 2),\n (POSIX_SPAWN_CLOSE, self.stdin_read),\n (POSIX_SPAWN_CLOSE, self.stdout_write),\n (POSIX_SPAWN_CLOSE, self.stderr_write),\n ],\n )\n\n # argv[1] is the length of the custom environment variables\n # argv[2:2 + env_len] is the custom environment variables\n # argv[2 + env_len] should be NULL\n # argv[2 + env_len + 1:] is the new argv\n if env is None:\n env_len = -1\n env = {}\n else:\n env_len = len(env)\n\n argv = [\n JUMPSTART_LOCATION,\n str(env_len),\n *[f\"{key}={value}\" for key, value in env.items()],\n 
\"NULL\",\n *argv,\n ]\n\n child_pid = posix_spawn(\n JUMPSTART_LOCATION,\n argv,\n os.environ,\n file_actions=file_actions,\n setpgroup=0,\n )\n\n self.process_id = child_pid\n self.detached = False\n self._internal_debugger.process_id = child_pid\n self.register_new_thread(child_pid)\n continue_to_entry_point = self._internal_debugger.autoreach_entrypoint\n self._setup_parent(continue_to_entry_point)\n\n if redirect_pipes:\n self._internal_debugger.pipe_manager = self._setup_pipe()\n else:\n self._internal_debugger.pipe_manager = None\n\n # https://stackoverflow.com/questions/58918188/why-is-stdin-not-propagated-to-child-process-of-different-process-group\n # We need to set the foreground process group to the child process group, otherwise the child process\n # will not receive the input from the terminal\n try:\n os.tcsetpgrp(0, child_pid)\n except OSError as e:\n liblog.debugger(\"Failed to set the foreground process group: %r\", e)\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.set_breakpoint","title":"set_breakpoint(bp, insert=True)","text":"Sets a breakpoint at the specified address.
Parameters:
Name Type Description Defaultbp Breakpoint The breakpoint to set.
requiredinsert bool Whether the breakpoint has to be inserted or just enabled.
True Source code in libdebug/ptrace/ptrace_interface.py def set_breakpoint(self: PtraceInterface, bp: Breakpoint, insert: bool = True) -> None:\n \"\"\"Sets a breakpoint at the specified address.\n\n Args:\n bp (Breakpoint): The breakpoint to set.\n insert (bool): Whether the breakpoint has to be inserted or just enabled.\n \"\"\"\n if bp.hardware:\n for thread in self._internal_debugger.threads:\n if bp.condition == \"x\":\n remaining = self.lib_trace.get_remaining_hw_breakpoint_count(thread.thread_id)\n else:\n remaining = self.lib_trace.get_remaining_hw_watchpoint_count(thread.thread_id)\n\n if not remaining:\n raise ValueError(\"No more hardware breakpoints of this type available\")\n\n self.lib_trace.register_hw_breakpoint(\n thread.thread_id,\n bp.address,\n int.from_bytes(bp.condition.encode(), sys.byteorder),\n bp.length,\n )\n elif insert:\n self._set_sw_breakpoint(bp)\n else:\n self._enable_breakpoint(bp)\n\n if insert:\n self._internal_debugger.breakpoints[bp.address] = bp\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.set_signal_catcher","title":"set_signal_catcher(catcher)","text":"Sets a catcher for a signal.
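The condition string passed to `register_hw_breakpoint` is packed into an integer with `int.from_bytes(..., sys.byteorder)`; the sketch below isolates that encoding step:

```python
import sys

def encode_condition(condition: str) -> int:
    # Pack a breakpoint condition string such as "x", "w" or "rw" into
    # the integer the native layer expects, using the host byte order.
    return int.from_bytes(condition.encode(), sys.byteorder)
```

A single-character condition is byte-order independent (`"x"` always encodes to `0x78`); multi-character conditions depend on the host endianness.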
Parameters:
Name Type Description Defaultcatcher SignalCatcher The signal to set.
required Source code in libdebug/ptrace/ptrace_interface.py def set_signal_catcher(self: PtraceInterface, catcher: SignalCatcher) -> None:\n    \"\"\"Sets a catcher for a signal.\n\n    Args:\n        catcher (SignalCatcher): The signal to set.\n    \"\"\"\n    self._internal_debugger.caught_signals[catcher.signal_number] = catcher\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.set_syscall_handler","title":"set_syscall_handler(handler)","text":"Sets a handler for a syscall.
Parameters:
Name Type Description Defaulthandler SyscallHandler The syscall to set.
required Source code in libdebug/ptrace/ptrace_interface.py def set_syscall_handler(self: PtraceInterface, handler: SyscallHandler) -> None:\n    \"\"\"Sets a handler for a syscall.\n\n    Args:\n        handler (SyscallHandler): The syscall to set.\n    \"\"\"\n    self._internal_debugger.handled_syscalls[handler.syscall_number] = handler\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.step","title":"step(thread)","text":"Executes a single instruction of the process.
Parameters:
Name Type Description Defaultthread ThreadContext The thread to step.
required Source code inlibdebug/ptrace/ptrace_interface.py def step(self: PtraceInterface, thread: ThreadContext) -> None:\n \"\"\"Executes a single instruction of the process.\n\n Args:\n thread (ThreadContext): The thread to step.\n \"\"\"\n # Disable all breakpoints for the single step\n for bp in self._internal_debugger.breakpoints.values():\n bp._disabled_for_step = True\n\n # Reset the event type\n self._internal_debugger.resume_context.event_type.clear()\n\n # Reset the breakpoint hit\n self._internal_debugger.resume_context.event_hit_ref.clear()\n\n self.lib_trace.step(thread.thread_id)\n\n self._internal_debugger.resume_context.is_a_step = True\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.step_until","title":"step_until(thread, address, max_steps)","text":"Executes instructions of the specified thread until the specified address is reached.
Parameters:
Name Type Description Defaultthread ThreadContext The thread to step.
requiredaddress int The address to reach.
requiredmax_steps int The maximum number of steps to execute.
required Source code inlibdebug/ptrace/ptrace_interface.py def step_until(self: PtraceInterface, thread: ThreadContext, address: int, max_steps: int) -> None:\n \"\"\"Executes instructions of the specified thread until the specified address is reached.\n\n Args:\n thread (ThreadContext): The thread to step.\n address (int): The address to reach.\n max_steps (int): The maximum number of steps to execute.\n \"\"\"\n # Disable all breakpoints for the single step\n for bp in self._internal_debugger.breakpoints.values():\n bp._disabled_for_step = True\n\n # Reset the event type\n self._internal_debugger.resume_context.event_type.clear()\n\n # Reset the breakpoint hit\n self._internal_debugger.resume_context.event_hit_ref.clear()\n\n self.lib_trace.step_until(thread.thread_id, address, max_steps)\n\n # As the wait is done internally, we must invalidate the cache\n invalidate_process_cache()\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.unregister_thread","title":"unregister_thread(thread_id, exit_code, exit_signal)","text":"Unregisters a thread.
Parameters:
Name Type Description Defaultthread_id int The thread ID.
requiredexit_code int The exit code of the thread.
requiredexit_signal int The exit signal of the thread.
required Source code inlibdebug/ptrace/ptrace_interface.py def unregister_thread(\n self: PtraceInterface,\n thread_id: int,\n exit_code: int | None,\n exit_signal: int | None,\n) -> None:\n \"\"\"Unregisters a thread.\n\n Args:\n thread_id (int): The thread ID.\n exit_code (int): The exit code of the thread.\n exit_signal (int): The exit signal of the thread.\n \"\"\"\n self.lib_trace.unregister_thread(thread_id)\n\n self._internal_debugger.set_thread_as_dead(thread_id, exit_code=exit_code, exit_signal=exit_signal)\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.unset_breakpoint","title":"unset_breakpoint(bp, delete=True)","text":"Restores the breakpoint at the specified address.
Parameters:
Name Type Description Defaultbp Breakpoint The breakpoint to unset.
requireddelete bool Whether the breakpoint has to be deleted or just disabled.
True Source code in libdebug/ptrace/ptrace_interface.py def unset_breakpoint(self: PtraceInterface, bp: Breakpoint, delete: bool = True) -> None:\n \"\"\"Restores the breakpoint at the specified address.\n\n Args:\n bp (Breakpoint): The breakpoint to unset.\n delete (bool): Whether the breakpoint has to be deleted or just disabled.\n \"\"\"\n if bp.hardware:\n for thread in self._internal_debugger.threads:\n self.lib_trace.unregister_hw_breakpoint(thread.thread_id, bp.address)\n elif delete:\n self._unset_sw_breakpoint(bp)\n else:\n self._disable_breakpoint(bp)\n\n if delete:\n del self._internal_debugger.breakpoints[bp.address]\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.unset_signal_catcher","title":"unset_signal_catcher(catcher)","text":"Unset a catcher for a signal.
Parameters:
Name Type Description Defaultcatcher SignalCatcher The signal to unset.
required Source code in libdebug/ptrace/ptrace_interface.py def unset_signal_catcher(self: PtraceInterface, catcher: SignalCatcher) -> None:\n    \"\"\"Unset a catcher for a signal.\n\n    Args:\n        catcher (SignalCatcher): The signal to unset.\n    \"\"\"\n    del self._internal_debugger.caught_signals[catcher.signal_number]\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.unset_syscall_handler","title":"unset_syscall_handler(handler)","text":"Unsets a handler for a syscall.
Parameters:
Name Type Description Defaulthandler SyscallHandler The syscall to unset.
required Source code in libdebug/ptrace/ptrace_interface.py def unset_syscall_handler(self: PtraceInterface, handler: SyscallHandler) -> None:\n    \"\"\"Unsets a handler for a syscall.\n\n    Args:\n        handler (SyscallHandler): The syscall to unset.\n    \"\"\"\n    del self._internal_debugger.handled_syscalls[handler.syscall_number]\n"},{"location":"from_pydoc/generated/ptrace/ptrace_interface/#libdebug.ptrace.ptrace_interface.PtraceInterface.wait","title":"wait()","text":"Waits for the process to stop.
Source code in libdebug/ptrace/ptrace_interface.py def wait(self: PtraceInterface) -> None:\n    \"\"\"Waits for the process to stop.\"\"\"\n    all_zombies = all(thread.zombie for thread in self._internal_debugger.threads)\n\n    statuses = self.lib_trace.wait_all_and_update_regs(all_zombies)\n\n    invalidate_process_cache()\n\n    # Check the result of the waitpid and handle the changes.\n    self.status_handler.manage_change(statuses)\n"},{"location":"from_pydoc/generated/ptrace/ptrace_register_holder/","title":"libdebug.ptrace.ptrace_register_holder","text":""},{"location":"from_pydoc/generated/ptrace/ptrace_register_holder/#libdebug.ptrace.ptrace_register_holder.PtraceRegisterHolder","title":"PtraceRegisterHolder dataclass","text":" Bases: RegisterHolder
An abstract class that holds the state of the registers of a process, providing setters and getters for them.
Intended for use with the Ptrace debugging backend.
Source code inlibdebug/ptrace/ptrace_register_holder.py @dataclass\nclass PtraceRegisterHolder(RegisterHolder):\n \"\"\"An abstract class that holds the state of the registers of a process, providing setters and getters for them.\n\n Intended for use with the Ptrace debugging backend.\n \"\"\"\n\n register_file: object\n \"\"\"The register file of the target process, as returned by ptrace.\"\"\"\n\n fp_register_file: object\n \"\"\"The floating-point register file of the target process, as returned by ptrace.\"\"\"\n\n def poll(self: PtraceRegisterHolder, target: ThreadContext) -> None:\n \"\"\"Poll the register values from the specified target.\"\"\"\n raise NotImplementedError(\"Do not call this method.\")\n\n def flush(self: PtraceRegisterHolder, source: ThreadContext) -> None:\n \"\"\"Flush the register values from the specified source.\"\"\"\n raise NotImplementedError(\"Do not call this method.\")\n"},{"location":"from_pydoc/generated/ptrace/ptrace_register_holder/#libdebug.ptrace.ptrace_register_holder.PtraceRegisterHolder.fp_register_file","title":"fp_register_file instance-attribute","text":"The floating-point register file of the target process, as returned by ptrace.
"},{"location":"from_pydoc/generated/ptrace/ptrace_register_holder/#libdebug.ptrace.ptrace_register_holder.PtraceRegisterHolder.register_file","title":"register_file instance-attribute","text":"The register file of the target process, as returned by ptrace.
"},{"location":"from_pydoc/generated/ptrace/ptrace_register_holder/#libdebug.ptrace.ptrace_register_holder.PtraceRegisterHolder.flush","title":"flush(source)","text":"Flush the register values from the specified source.
Source code inlibdebug/ptrace/ptrace_register_holder.py def flush(self: PtraceRegisterHolder, source: ThreadContext) -> None:\n \"\"\"Flush the register values from the specified source.\"\"\"\n raise NotImplementedError(\"Do not call this method.\")\n"},{"location":"from_pydoc/generated/ptrace/ptrace_register_holder/#libdebug.ptrace.ptrace_register_holder.PtraceRegisterHolder.poll","title":"poll(target)","text":"Poll the register values from the specified target.
Source code in libdebug/ptrace/ptrace_register_holder.py def poll(self: PtraceRegisterHolder, target: ThreadContext) -> None:\n    \"\"\"Poll the register values from the specified target.\"\"\"\n    raise NotImplementedError(\"Do not call this method.\")\n"},{"location":"from_pydoc/generated/ptrace/ptrace_status_handler/","title":"libdebug.ptrace.ptrace_status_handler","text":""},{"location":"from_pydoc/generated/ptrace/ptrace_status_handler/#libdebug.ptrace.ptrace_status_handler.PtraceStatusHandler","title":"PtraceStatusHandler","text":"This class handles the states returned by the waitpid calls on the debugger process.
Source code inlibdebug/ptrace/ptrace_status_handler.py class PtraceStatusHandler:\n \"\"\"This class handles the states return by the waitpid calls on the debugger process.\"\"\"\n\n def __init__(self: PtraceStatusHandler) -> None:\n \"\"\"Initializes the PtraceStatusHandler class.\"\"\"\n self.internal_debugger = provide_internal_debugger(self)\n self.ptrace_interface: DebuggingInterface = self.internal_debugger.debugging_interface\n self.forward_signal: bool = True\n self._assume_race_sigstop: bool = (\n True # Assume the stop is due to a race condition with SIGSTOP sent by the debugger\n )\n\n def _handle_clone(self: PtraceStatusHandler, thread_id: int, results: list) -> None:\n # https://go.googlesource.com/debug/+/a09ead70f05c87ad67bd9a131ff8352cf39a6082/doc/ptrace-nptl.txt\n # \"At this time, the new thread will exist, but will initially\n # be stopped with a SIGSTOP. The new thread will automatically be\n # traced and will inherit the PTRACE_O_TRACECLONE option from its\n # parent. 
The attached process should wait on the new thread to receive\n # the SIGSTOP notification.\"\n\n # Check if we received the SIGSTOP notification for the new thread\n # If not, we need to wait for it\n # 4991 == (WIFSTOPPED && WSTOPSIG(status) == SIGSTOP)\n if (thread_id, 4991) not in results:\n os.waitpid(thread_id, 0)\n self.ptrace_interface.register_new_thread(thread_id)\n\n def _handle_exit(\n self: PtraceStatusHandler,\n thread_id: int,\n exit_code: int | None,\n exit_signal: int | None,\n ) -> None:\n if self.internal_debugger.get_thread_by_id(thread_id):\n self.ptrace_interface.unregister_thread(thread_id, exit_code=exit_code, exit_signal=exit_signal)\n\n def _handle_breakpoints(self: PtraceStatusHandler, thread_id: int) -> None:\n thread = self.internal_debugger.get_thread_by_id(thread_id)\n\n if not hasattr(thread, \"instruction_pointer\"):\n # This is a signal trap hit on process startup\n # Do not resume the process until the user decides to do so\n self.internal_debugger.resume_context.event_type[thread_id] = EventType.STARTUP\n self.internal_debugger.resume_context.resume = False\n self.forward_signal = False\n return\n\n ip = thread.instruction_pointer\n\n bp: None | Breakpoint\n\n bp = self.internal_debugger.breakpoints.get(ip)\n if bp and bp.enabled and not bp._disabled_for_step:\n # Hardware breakpoint hit\n liblog.debugger(\"Hardware breakpoint hit at 0x%x\", ip)\n else:\n # If the trap was caused by a software breakpoint, we need to restore the original instruction\n # and set the instruction pointer to the previous instruction.\n ip -= software_breakpoint_byte_size(self.internal_debugger.arch)\n\n bp = self.internal_debugger.breakpoints.get(ip)\n if bp and bp.enabled and not bp._disabled_for_step:\n # Software breakpoint hit\n liblog.debugger(\"Software breakpoint hit at 0x%x\", ip)\n\n # Set the instruction pointer to the previous instruction\n thread.instruction_pointer = ip\n\n # Link the breakpoint to the thread, so that we can step over 
it\n bp._linked_thread_ids.append(thread_id)\n else:\n # If the breakpoint has been hit but is not enabled, we need to reset the bp variable\n bp = None\n\n # Manage watchpoints\n if not bp:\n bp = self.ptrace_interface.get_hit_watchpoint(thread_id)\n if bp:\n liblog.debugger(\"Watchpoint hit at 0x%x\", bp.address)\n if bp:\n self.internal_debugger.resume_context.event_hit_ref[thread_id] = bp\n self.internal_debugger.resume_context.event_type[thread_id] = EventType.BREAKPOINT\n self.forward_signal = False\n bp.hit_count += 1\n\n if bp.callback:\n bp.callback(thread, bp)\n else:\n # If the breakpoint has no callback, we need to stop the process despite the other signals\n self.internal_debugger.resume_context.resume = False\n\n def _manage_syscall_on_enter(\n self: PtraceStatusHandler,\n handler: SyscallHandler,\n thread: ThreadContext,\n syscall_number: int,\n hijacked_set: set[int],\n ) -> None:\n \"\"\"Manage the on_enter callback of a syscall.\"\"\"\n # Call the user-defined callback if it exists\n if handler.on_enter_user and handler.enabled:\n old_args = [\n thread.syscall_arg0,\n thread.syscall_arg1,\n thread.syscall_arg2,\n thread.syscall_arg3,\n thread.syscall_arg4,\n thread.syscall_arg5,\n ]\n handler.on_enter_user(thread, handler)\n\n # Check if the syscall number has changed\n syscall_number_after_callback = thread.syscall_number\n\n if syscall_number_after_callback != syscall_number:\n # The syscall number has changed\n # Pretty print the syscall number before the callback\n if handler.on_enter_pprint:\n handler.on_enter_pprint(\n thread,\n syscall_number,\n hijacked=True,\n old_args=old_args,\n )\n if syscall_number_after_callback in self.internal_debugger.handled_syscalls:\n callback_hijack = self.internal_debugger.handled_syscalls[syscall_number_after_callback]\n\n # Check if the new syscall has to be handled recursively\n if handler.recursive:\n if syscall_number_after_callback not in hijacked_set:\n hijacked_set.add(syscall_number_after_callback)\n 
else:\n # The syscall has already been hijacked in the current chain\n raise RuntimeError(\n \"Syscall hijacking loop detected. Check your code to avoid infinite loops.\",\n )\n\n # Call recursively the function to manage the new syscall\n self._manage_syscall_on_enter(\n callback_hijack,\n thread,\n syscall_number_after_callback,\n hijacked_set,\n )\n elif callback_hijack.on_enter_pprint:\n # Pretty print the syscall number\n callback_hijack.on_enter_pprint(thread, syscall_number_after_callback, hijacker=True)\n callback_hijack._has_entered = True\n callback_hijack._skip_exit = True\n else:\n # Skip the exit callback of the syscall that has been hijacked\n callback_hijack._has_entered = True\n callback_hijack._skip_exit = True\n elif handler.on_enter_pprint:\n # Pretty print the syscall number\n handler.on_enter_pprint(thread, syscall_number, callback=True, old_args=old_args)\n handler._has_entered = True\n else:\n handler._has_entered = True\n elif handler.on_enter_pprint:\n # Pretty print the syscall number\n handler.on_enter_pprint(thread, syscall_number, callback=(handler.on_exit_user is not None))\n handler._has_entered = True\n elif handler.on_exit_pprint or handler.on_exit_user:\n # The syscall has been entered but the user did not define an on_enter callback\n handler._has_entered = True\n if not handler.on_enter_user and not handler.on_exit_user and handler.enabled:\n # If the syscall has no callback, we need to stop the process despite the other signals\n self.internal_debugger.resume_context.event_type[thread.thread_id] = EventType.SYSCALL\n handler._has_entered = True\n self.internal_debugger.resume_context.resume = False\n\n def _handle_syscall(self: PtraceStatusHandler, thread_id: int) -> bool:\n \"\"\"Handle a syscall trap.\"\"\"\n thread = self.internal_debugger.get_thread_by_id(thread_id)\n if not hasattr(thread, \"syscall_number\"):\n # This is another spurious trap, we don't know what to do with it\n return\n\n syscall_number = 
thread.syscall_number\n\n if syscall_number in self.internal_debugger.handled_syscalls:\n handler = self.internal_debugger.handled_syscalls[syscall_number]\n elif -1 in self.internal_debugger.handled_syscalls:\n # Handle all syscalls is enabled\n handler = self.internal_debugger.handled_syscalls[-1]\n else:\n # This is a syscall we don't care about\n # Resume the execution\n return\n\n self.internal_debugger.resume_context.event_hit_ref[thread_id] = handler\n\n if not handler._has_entered:\n # The syscall is being entered\n liblog.debugger(\n \"Syscall %d entered on thread %d\",\n syscall_number,\n thread_id,\n )\n\n self._manage_syscall_on_enter(\n handler,\n thread,\n syscall_number,\n {syscall_number},\n )\n\n else:\n # The syscall is being exited\n liblog.debugger(\"Syscall %d exited on thread %d\", syscall_number, thread_id)\n\n if handler.enabled and not handler._skip_exit:\n # Increment the hit count only if the syscall has been handled\n handler.hit_count += 1\n\n # Call the user-defined callback if it exists\n if handler.on_exit_user and handler.enabled and not handler._skip_exit:\n # Pretty print the return value before the callback\n if handler.on_exit_pprint:\n return_value_before_callback = thread.syscall_return\n handler.on_exit_user(thread, handler)\n if handler.on_exit_pprint:\n return_value_after_callback = thread.syscall_return\n if return_value_after_callback != return_value_before_callback:\n handler.on_exit_pprint(\n (return_value_before_callback, return_value_after_callback),\n )\n else:\n handler.on_exit_pprint(return_value_after_callback)\n elif handler.on_exit_pprint:\n # Pretty print the return value\n handler.on_exit_pprint(thread.syscall_return)\n\n handler._has_entered = False\n handler._skip_exit = False\n if not handler.on_enter_user and not handler.on_exit_user and handler.enabled:\n # If the syscall has no callback, we need to stop the process despite the other signals\n self.internal_debugger.resume_context.event_type[thread_id] = 
EventType.SYSCALL\n self.internal_debugger.resume_context.resume = False\n\n def _manage_caught_signal(\n self: PtraceStatusHandler,\n catcher: SignalCatcher,\n thread: ThreadContext,\n signal_number: int,\n hijacked_set: set[int],\n ) -> None:\n if catcher.enabled:\n catcher.hit_count += 1\n liblog.debugger(\n \"Caught signal %s (%d) hit on thread %d\",\n resolve_signal_name(signal_number),\n signal_number,\n thread.thread_id,\n )\n if catcher.callback:\n # Execute the user-defined callback\n catcher.callback(thread, catcher)\n\n new_signal_number = thread._signal_number\n\n if new_signal_number != signal_number:\n # The signal number has changed\n liblog.debugger(\n \"Signal %s (%d) has been hijacked to %s (%d)\",\n resolve_signal_name(signal_number),\n signal_number,\n resolve_signal_name(new_signal_number),\n new_signal_number,\n )\n\n if catcher.recursive and new_signal_number in self.internal_debugger.caught_signals:\n hijack_cath_signal = self.internal_debugger.caught_signals[new_signal_number]\n if new_signal_number not in hijacked_set:\n hijacked_set.add(new_signal_number)\n else:\n # The signal has already been replaced in the current chain\n raise RuntimeError(\n \"Signal hijacking loop detected. 
Check your script to avoid infinite loops.\",\n )\n # Call recursively the function to manage the new signal\n self._manage_caught_signal(\n hijack_cath_signal,\n thread,\n new_signal_number,\n hijacked_set,\n )\n else:\n # If the caught signal has no callback, we need to stop the process despite the other signals\n self.internal_debugger.resume_context.event = EventType.SIGNAL\n self.internal_debugger.resume_context.resume = False\n\n def _handle_signal(self: PtraceStatusHandler, thread: ThreadContext) -> bool:\n \"\"\"Handle the signal trap.\"\"\"\n signal_number = thread._signal_number\n\n if signal_number in self.internal_debugger.caught_signals:\n catcher = self.internal_debugger.caught_signals[signal_number]\n\n self._manage_caught_signal(catcher, thread, signal_number, {signal_number})\n elif -1 in self.internal_debugger.caught_signals and signal_number not in (\n signal.SIGSTOP,\n signal.SIGKILL,\n ):\n # Handle all signals is enabled\n catcher = self.internal_debugger.caught_signals[-1]\n\n self.internal_debugger.resume_context.event_hit_ref[thread.thread_id] = catcher\n\n self._manage_caught_signal(catcher, thread, signal_number, {signal_number})\n\n def _internal_signal_handler(\n self: PtraceStatusHandler,\n pid: int,\n signum: int,\n results: list,\n status: int,\n thread: ThreadContext,\n ) -> None:\n \"\"\"Internal handler for signals used by the debugger.\"\"\"\n if signum == SYSCALL_SIGTRAP:\n # We hit a syscall\n liblog.debugger(\"Child thread %d stopped on syscall\", pid)\n self._handle_syscall(pid)\n self.forward_signal = False\n elif signum == signal.SIGSTOP and self.internal_debugger.resume_context.force_interrupt:\n # The user has requested an interrupt, we need to stop the process despite the ohter signals\n liblog.debugger(\n \"Child thread %d stopped with signal %s\",\n pid,\n resolve_signal_name(signum),\n )\n self.internal_debugger.resume_context.event_type[pid] = EventType.USER_INTERRUPT\n self.internal_debugger.resume_context.resume = 
False\n self.internal_debugger.resume_context.force_interrupt = False\n self.forward_signal = False\n elif signum == signal.SIGTRAP:\n # The trap decides if we hit a breakpoint. If so, it decides whether we should stop or\n # continue the execution and wait for the next trap\n self._handle_breakpoints(pid)\n\n if self.internal_debugger.resume_context.is_a_step:\n # The process is stepping, we need to stop the execution\n self.internal_debugger.resume_context.event_type[pid] = EventType.STEP\n self.internal_debugger.resume_context.resume = False\n self.internal_debugger.resume_context.is_a_step = False\n self.forward_signal = False\n\n event = status >> 8\n match event:\n case StopEvents.CLONE_EVENT:\n # The process has been cloned\n message = self.ptrace_interface._get_event_msg(pid)\n liblog.debugger(\n f\"Process {pid} cloned, new thread_id: {message}\",\n )\n self._handle_clone(message, results)\n self.forward_signal = False\n self.internal_debugger.resume_context.event_type[pid] = EventType.CLONE\n case StopEvents.SECCOMP_EVENT:\n # The process has installed a seccomp\n liblog.debugger(f\"Process {pid} installed a seccomp\")\n self.forward_signal = False\n self.internal_debugger.resume_context.event_type[pid] = EventType.SECCOMP\n case StopEvents.EXIT_EVENT:\n # The tracee is still alive; it needs\n # to be PTRACE_CONTed or PTRACE_DETACHed to finish exiting.\n # so we don't call self._handle_exit(pid) here\n # it will be called at the next wait (hopefully)\n message = self.ptrace_interface._get_event_msg(pid)\n # Mark the thread as a zombie\n thread._zombie = True\n liblog.debugger(\n f\"Thread {pid} exited with status: {message}\",\n )\n self.forward_signal = False\n self.internal_debugger.resume_context.event_type[pid] = EventType.EXIT\n case StopEvents.FORK_EVENT:\n # The process has been forked\n message = self.ptrace_interface._get_event_msg(pid)\n liblog.debugger(\n f\"Process {pid} forked with new pid: {message}\",\n )\n # We need to detach from the 
child process and attach to it again with a new debugger\n self.ptrace_interface.lib_trace.detach_from_child(message, self.internal_debugger.follow_children)\n if self.internal_debugger.follow_children:\n self.internal_debugger.set_child_debugger(message)\n self.forward_signal = False\n self.internal_debugger.resume_context.event_type[pid] = EventType.FORK\n\n def _handle_change(self: PtraceStatusHandler, pid: int, status: int, results: list) -> None:\n \"\"\"Handle a change in the status of a traced process.\"\"\"\n # Initialize the forward_signal flag\n self.forward_signal = True\n\n if os.WIFSTOPPED(status):\n if self.internal_debugger.resume_context.is_startup:\n # The process has just started\n return\n signum = os.WSTOPSIG(status)\n\n if signum != signal.SIGSTOP:\n self._assume_race_sigstop = False\n\n thread = self.internal_debugger.get_thread_by_id(pid)\n\n # Check if the debugger needs to handle the signal\n self._internal_signal_handler(pid, signum, results, status, thread)\n\n if signum != SYSCALL_SIGTRAP and thread is not None:\n thread._signal_number = signum & 0x7F\n\n # Handle the signal\n if self.internal_debugger.resume_context.event_type.get(pid, None) is None:\n self._handle_signal(thread)\n\n if self.forward_signal and signum != signal.SIGSTOP:\n # We have to forward the signal to the thread\n self.internal_debugger.resume_context.threads_with_signals_to_forward.append(pid)\n\n if os.WIFEXITED(status):\n # The thread has exited normally\n exit_code = os.WEXITSTATUS(status)\n liblog.debugger(\"Child process %d exited with exit code %d\", pid, exit_code)\n self._handle_exit(pid, exit_code=exit_code, exit_signal=None)\n\n if os.WIFSIGNALED(status):\n # The thread has exited with a signal\n exit_signal = os.WTERMSIG(status)\n liblog.debugger(\"Child process %d exited with signal %d\", pid, exit_signal)\n self._handle_exit(pid, exit_code=None, exit_signal=exit_signal)\n\n def manage_change(self: PtraceStatusHandler, result: list[tuple]) -> None:\n 
\"\"\"Manage the result of the waitpid and handle the changes.\"\"\"\n # Assume that the stop depends on SIGSTOP sent by the debugger\n # This is a workaround for some race conditions that may happen\n self._assume_race_sigstop = True\n\n for pid, status in result:\n if pid != -1:\n # Otherwise, this is a spurious trap\n self._handle_change(pid, status, result)\n\n if self._assume_race_sigstop:\n # Resume the process if the stop was due to a race condition with SIGSTOP sent by the debugger\n return\n\n def check_for_changes_in_threads(self: PtraceStatusHandler, pid: int) -> None:\n \"\"\"Check for new threads in the process and register them.\"\"\"\n tids = get_process_tasks(pid)\n for tid in tids:\n if not self.internal_debugger.get_thread_by_id(tid):\n self.ptrace_interface.register_new_thread(tid)\n liblog.debugger(\"Manually registered new thread %d\" % tid)\n\n for thread in self.internal_debugger.threads:\n if not thread.dead and thread.thread_id not in tids:\n self.ptrace_interface.unregister_thread(thread.thread_id, None, None)\n liblog.debugger(\"Manually unregistered thread %d\" % thread.thread_id)\n"},{"location":"from_pydoc/generated/ptrace/ptrace_status_handler/#libdebug.ptrace.ptrace_status_handler.PtraceStatusHandler.__init__","title":"__init__()","text":"Initializes the PtraceStatusHandler class.
Source code inlibdebug/ptrace/ptrace_status_handler.py def __init__(self: PtraceStatusHandler) -> None:\n \"\"\"Initializes the PtraceStatusHandler class.\"\"\"\n self.internal_debugger = provide_internal_debugger(self)\n self.ptrace_interface: DebuggingInterface = self.internal_debugger.debugging_interface\n self.forward_signal: bool = True\n self._assume_race_sigstop: bool = (\n True # Assume the stop is due to a race condition with SIGSTOP sent by the debugger\n )\n"},{"location":"from_pydoc/generated/ptrace/ptrace_status_handler/#libdebug.ptrace.ptrace_status_handler.PtraceStatusHandler._handle_change","title":"_handle_change(pid, status, results)","text":"Handle a change in the status of a traced process.
Source code inlibdebug/ptrace/ptrace_status_handler.py def _handle_change(self: PtraceStatusHandler, pid: int, status: int, results: list) -> None:\n \"\"\"Handle a change in the status of a traced process.\"\"\"\n # Initialize the forward_signal flag\n self.forward_signal = True\n\n if os.WIFSTOPPED(status):\n if self.internal_debugger.resume_context.is_startup:\n # The process has just started\n return\n signum = os.WSTOPSIG(status)\n\n if signum != signal.SIGSTOP:\n self._assume_race_sigstop = False\n\n thread = self.internal_debugger.get_thread_by_id(pid)\n\n # Check if the debugger needs to handle the signal\n self._internal_signal_handler(pid, signum, results, status, thread)\n\n if signum != SYSCALL_SIGTRAP and thread is not None:\n thread._signal_number = signum & 0x7F\n\n # Handle the signal\n if self.internal_debugger.resume_context.event_type.get(pid, None) is None:\n self._handle_signal(thread)\n\n if self.forward_signal and signum != signal.SIGSTOP:\n # We have to forward the signal to the thread\n self.internal_debugger.resume_context.threads_with_signals_to_forward.append(pid)\n\n if os.WIFEXITED(status):\n # The thread has exited normally\n exit_code = os.WEXITSTATUS(status)\n liblog.debugger(\"Child process %d exited with exit code %d\", pid, exit_code)\n self._handle_exit(pid, exit_code=exit_code, exit_signal=None)\n\n if os.WIFSIGNALED(status):\n # The thread has exited with a signal\n exit_signal = os.WTERMSIG(status)\n liblog.debugger(\"Child process %d exited with signal %d\", pid, exit_signal)\n self._handle_exit(pid, exit_code=None, exit_signal=exit_signal)\n"},{"location":"from_pydoc/generated/ptrace/ptrace_status_handler/#libdebug.ptrace.ptrace_status_handler.PtraceStatusHandler._handle_signal","title":"_handle_signal(thread)","text":"Handle the signal trap.
Source code inlibdebug/ptrace/ptrace_status_handler.py def _handle_signal(self: PtraceStatusHandler, thread: ThreadContext) -> bool:\n \"\"\"Handle the signal trap.\"\"\"\n signal_number = thread._signal_number\n\n if signal_number in self.internal_debugger.caught_signals:\n catcher = self.internal_debugger.caught_signals[signal_number]\n\n self._manage_caught_signal(catcher, thread, signal_number, {signal_number})\n elif -1 in self.internal_debugger.caught_signals and signal_number not in (\n signal.SIGSTOP,\n signal.SIGKILL,\n ):\n # Handle all signals is enabled\n catcher = self.internal_debugger.caught_signals[-1]\n\n self.internal_debugger.resume_context.event_hit_ref[thread.thread_id] = catcher\n\n self._manage_caught_signal(catcher, thread, signal_number, {signal_number})\n"},{"location":"from_pydoc/generated/ptrace/ptrace_status_handler/#libdebug.ptrace.ptrace_status_handler.PtraceStatusHandler._handle_syscall","title":"_handle_syscall(thread_id)","text":"Handle a syscall trap.
Source code inlibdebug/ptrace/ptrace_status_handler.py def _handle_syscall(self: PtraceStatusHandler, thread_id: int) -> bool:\n \"\"\"Handle a syscall trap.\"\"\"\n thread = self.internal_debugger.get_thread_by_id(thread_id)\n if not hasattr(thread, \"syscall_number\"):\n # This is another spurious trap, we don't know what to do with it\n return\n\n syscall_number = thread.syscall_number\n\n if syscall_number in self.internal_debugger.handled_syscalls:\n handler = self.internal_debugger.handled_syscalls[syscall_number]\n elif -1 in self.internal_debugger.handled_syscalls:\n # Handle all syscalls is enabled\n handler = self.internal_debugger.handled_syscalls[-1]\n else:\n # This is a syscall we don't care about\n # Resume the execution\n return\n\n self.internal_debugger.resume_context.event_hit_ref[thread_id] = handler\n\n if not handler._has_entered:\n # The syscall is being entered\n liblog.debugger(\n \"Syscall %d entered on thread %d\",\n syscall_number,\n thread_id,\n )\n\n self._manage_syscall_on_enter(\n handler,\n thread,\n syscall_number,\n {syscall_number},\n )\n\n else:\n # The syscall is being exited\n liblog.debugger(\"Syscall %d exited on thread %d\", syscall_number, thread_id)\n\n if handler.enabled and not handler._skip_exit:\n # Increment the hit count only if the syscall has been handled\n handler.hit_count += 1\n\n # Call the user-defined callback if it exists\n if handler.on_exit_user and handler.enabled and not handler._skip_exit:\n # Pretty print the return value before the callback\n if handler.on_exit_pprint:\n return_value_before_callback = thread.syscall_return\n handler.on_exit_user(thread, handler)\n if handler.on_exit_pprint:\n return_value_after_callback = thread.syscall_return\n if return_value_after_callback != return_value_before_callback:\n handler.on_exit_pprint(\n (return_value_before_callback, return_value_after_callback),\n )\n else:\n handler.on_exit_pprint(return_value_after_callback)\n elif handler.on_exit_pprint:\n # 
Pretty print the return value\n handler.on_exit_pprint(thread.syscall_return)\n\n handler._has_entered = False\n handler._skip_exit = False\n if not handler.on_enter_user and not handler.on_exit_user and handler.enabled:\n # If the syscall has no callback, we need to stop the process despite the other signals\n self.internal_debugger.resume_context.event_type[thread_id] = EventType.SYSCALL\n self.internal_debugger.resume_context.resume = False\n"},{"location":"from_pydoc/generated/ptrace/ptrace_status_handler/#libdebug.ptrace.ptrace_status_handler.PtraceStatusHandler._internal_signal_handler","title":"_internal_signal_handler(pid, signum, results, status, thread)","text":"Internal handler for signals used by the debugger.
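`_handle_syscall` above runs once at syscall entry and once at exit, using the handler's `_has_entered` flag to tell the two stops apart. A minimal sketch of that toggle, with hypothetical `SyscallHandler`/`on_syscall_trap` names assumed for illustration:

```python
class SyscallHandler:
    def __init__(self):
        self._has_entered = False
        self.hit_count = 0

def on_syscall_trap(handler: SyscallHandler) -> str:
    # ptrace reports one stop on syscall entry and one on exit;
    # the flag flips so consecutive traps alternate between phases.
    if not handler._has_entered:
        handler._has_entered = True
        return "enter"
    handler._has_entered = False
    handler.hit_count += 1  # count a completed enter/exit pair
    return "exit"

h = SyscallHandler()
print([on_syscall_trap(h) for _ in range(4)])  # ['enter', 'exit', 'enter', 'exit']
print(h.hit_count)  # 2
```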
Source code inlibdebug/ptrace/ptrace_status_handler.py def _internal_signal_handler(\n self: PtraceStatusHandler,\n pid: int,\n signum: int,\n results: list,\n status: int,\n thread: ThreadContext,\n) -> None:\n \"\"\"Internal handler for signals used by the debugger.\"\"\"\n if signum == SYSCALL_SIGTRAP:\n # We hit a syscall\n liblog.debugger(\"Child thread %d stopped on syscall\", pid)\n self._handle_syscall(pid)\n self.forward_signal = False\n elif signum == signal.SIGSTOP and self.internal_debugger.resume_context.force_interrupt:\n # The user has requested an interrupt, we need to stop the process despite the ohter signals\n liblog.debugger(\n \"Child thread %d stopped with signal %s\",\n pid,\n resolve_signal_name(signum),\n )\n self.internal_debugger.resume_context.event_type[pid] = EventType.USER_INTERRUPT\n self.internal_debugger.resume_context.resume = False\n self.internal_debugger.resume_context.force_interrupt = False\n self.forward_signal = False\n elif signum == signal.SIGTRAP:\n # The trap decides if we hit a breakpoint. 
If so, it decides whether we should stop or\n # continue the execution and wait for the next trap\n self._handle_breakpoints(pid)\n\n if self.internal_debugger.resume_context.is_a_step:\n # The process is stepping, we need to stop the execution\n self.internal_debugger.resume_context.event_type[pid] = EventType.STEP\n self.internal_debugger.resume_context.resume = False\n self.internal_debugger.resume_context.is_a_step = False\n self.forward_signal = False\n\n event = status >> 8\n match event:\n case StopEvents.CLONE_EVENT:\n # The process has been cloned\n message = self.ptrace_interface._get_event_msg(pid)\n liblog.debugger(\n f\"Process {pid} cloned, new thread_id: {message}\",\n )\n self._handle_clone(message, results)\n self.forward_signal = False\n self.internal_debugger.resume_context.event_type[pid] = EventType.CLONE\n case StopEvents.SECCOMP_EVENT:\n # The process has installed a seccomp\n liblog.debugger(f\"Process {pid} installed a seccomp\")\n self.forward_signal = False\n self.internal_debugger.resume_context.event_type[pid] = EventType.SECCOMP\n case StopEvents.EXIT_EVENT:\n # The tracee is still alive; it needs\n # to be PTRACE_CONTed or PTRACE_DETACHed to finish exiting.\n # so we don't call self._handle_exit(pid) here\n # it will be called at the next wait (hopefully)\n message = self.ptrace_interface._get_event_msg(pid)\n # Mark the thread as a zombie\n thread._zombie = True\n liblog.debugger(\n f\"Thread {pid} exited with status: {message}\",\n )\n self.forward_signal = False\n self.internal_debugger.resume_context.event_type[pid] = EventType.EXIT\n case StopEvents.FORK_EVENT:\n # The process has been forked\n message = self.ptrace_interface._get_event_msg(pid)\n liblog.debugger(\n f\"Process {pid} forked with new pid: {message}\",\n )\n # We need to detach from the child process and attach to it again with a new debugger\n self.ptrace_interface.lib_trace.detach_from_child(message, self.internal_debugger.follow_children)\n if 
self.internal_debugger.follow_children:\n self.internal_debugger.set_child_debugger(message)\n self.forward_signal = False\n self.internal_debugger.resume_context.event_type[pid] = EventType.FORK\n"},{"location":"from_pydoc/generated/ptrace/ptrace_status_handler/#libdebug.ptrace.ptrace_status_handler.PtraceStatusHandler._manage_syscall_on_enter","title":"_manage_syscall_on_enter(handler, thread, syscall_number, hijacked_set)","text":"Manage the on_enter callback of a syscall.
Source code inlibdebug/ptrace/ptrace_status_handler.py def _manage_syscall_on_enter(\n self: PtraceStatusHandler,\n handler: SyscallHandler,\n thread: ThreadContext,\n syscall_number: int,\n hijacked_set: set[int],\n) -> None:\n \"\"\"Manage the on_enter callback of a syscall.\"\"\"\n # Call the user-defined callback if it exists\n if handler.on_enter_user and handler.enabled:\n old_args = [\n thread.syscall_arg0,\n thread.syscall_arg1,\n thread.syscall_arg2,\n thread.syscall_arg3,\n thread.syscall_arg4,\n thread.syscall_arg5,\n ]\n handler.on_enter_user(thread, handler)\n\n # Check if the syscall number has changed\n syscall_number_after_callback = thread.syscall_number\n\n if syscall_number_after_callback != syscall_number:\n # The syscall number has changed\n # Pretty print the syscall number before the callback\n if handler.on_enter_pprint:\n handler.on_enter_pprint(\n thread,\n syscall_number,\n hijacked=True,\n old_args=old_args,\n )\n if syscall_number_after_callback in self.internal_debugger.handled_syscalls:\n callback_hijack = self.internal_debugger.handled_syscalls[syscall_number_after_callback]\n\n # Check if the new syscall has to be handled recursively\n if handler.recursive:\n if syscall_number_after_callback not in hijacked_set:\n hijacked_set.add(syscall_number_after_callback)\n else:\n # The syscall has already been hijacked in the current chain\n raise RuntimeError(\n \"Syscall hijacking loop detected. 
Check your code to avoid infinite loops.\",\n )\n\n # Call recursively the function to manage the new syscall\n self._manage_syscall_on_enter(\n callback_hijack,\n thread,\n syscall_number_after_callback,\n hijacked_set,\n )\n elif callback_hijack.on_enter_pprint:\n # Pretty print the syscall number\n callback_hijack.on_enter_pprint(thread, syscall_number_after_callback, hijacker=True)\n callback_hijack._has_entered = True\n callback_hijack._skip_exit = True\n else:\n # Skip the exit callback of the syscall that has been hijacked\n callback_hijack._has_entered = True\n callback_hijack._skip_exit = True\n elif handler.on_enter_pprint:\n # Pretty print the syscall number\n handler.on_enter_pprint(thread, syscall_number, callback=True, old_args=old_args)\n handler._has_entered = True\n else:\n handler._has_entered = True\n elif handler.on_enter_pprint:\n # Pretty print the syscall number\n handler.on_enter_pprint(thread, syscall_number, callback=(handler.on_exit_user is not None))\n handler._has_entered = True\n elif handler.on_exit_pprint or handler.on_exit_user:\n # The syscall has been entered but the user did not define an on_enter callback\n handler._has_entered = True\n if not handler.on_enter_user and not handler.on_exit_user and handler.enabled:\n # If the syscall has no callback, we need to stop the process despite the other signals\n self.internal_debugger.resume_context.event_type[thread.thread_id] = EventType.SYSCALL\n handler._has_entered = True\n self.internal_debugger.resume_context.resume = False\n"},{"location":"from_pydoc/generated/ptrace/ptrace_status_handler/#libdebug.ptrace.ptrace_status_handler.PtraceStatusHandler.check_for_changes_in_threads","title":"check_for_changes_in_threads(pid)","text":"Check for new threads in the process and register them.
Source code inlibdebug/ptrace/ptrace_status_handler.py def check_for_changes_in_threads(self: PtraceStatusHandler, pid: int) -> None:\n \"\"\"Check for new threads in the process and register them.\"\"\"\n tids = get_process_tasks(pid)\n for tid in tids:\n if not self.internal_debugger.get_thread_by_id(tid):\n self.ptrace_interface.register_new_thread(tid)\n liblog.debugger(\"Manually registered new thread %d\" % tid)\n\n for thread in self.internal_debugger.threads:\n if not thread.dead and thread.thread_id not in tids:\n self.ptrace_interface.unregister_thread(thread.thread_id, None, None)\n liblog.debugger(\"Manually unregistered thread %d\" % thread.thread_id)\n"},{"location":"from_pydoc/generated/ptrace/ptrace_status_handler/#libdebug.ptrace.ptrace_status_handler.PtraceStatusHandler.manage_change","title":"manage_change(result)","text":"Manage the result of the waitpid and handle the changes.
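`check_for_changes_in_threads` above reconciles the debugger's view with the kernel's: any task id reported by the kernel that is unknown gets registered, and any tracked thread missing from that list gets unregistered. The reconciliation itself is plain set arithmetic, sketched here with hypothetical names:

```python
def reconcile_threads(known_tids: set, live_tids: set):
    """Return (to_register, to_unregister) given the tracked tids and the
    tids currently reported by the kernel (e.g. from /proc/<pid>/task)."""
    to_register = live_tids - known_tids    # new threads the kernel knows about
    to_unregister = known_tids - live_tids  # tracked threads that are gone
    return to_register, to_unregister

reg, unreg = reconcile_threads({100, 101}, {100, 102})
print(sorted(reg), sorted(unreg))  # [102] [101]
```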
Source code inlibdebug/ptrace/ptrace_status_handler.py def manage_change(self: PtraceStatusHandler, result: list[tuple]) -> None:\n \"\"\"Manage the result of the waitpid and handle the changes.\"\"\"\n # Assume that the stop depends on SIGSTOP sent by the debugger\n # This is a workaround for some race conditions that may happen\n self._assume_race_sigstop = True\n\n for pid, status in result:\n if pid != -1:\n # Otherwise, this is a spurious trap\n self._handle_change(pid, status, result)\n\n if self._assume_race_sigstop:\n # Resume the process if the stop was due to a race condition with SIGSTOP sent by the debugger\n return\n"},{"location":"from_pydoc/generated/snapshots/diff/","title":"libdebug.snapshots.diff","text":""},{"location":"from_pydoc/generated/snapshots/diff/#libdebug.snapshots.diff.Diff","title":"Diff","text":"This object represents a diff between two snapshots.
Source code inlibdebug/snapshots/diff.py class Diff:\n \"\"\"This object represents a diff between two snapshots.\"\"\"\n\n def __init__(self: Diff, snapshot1: Snapshot, snapshot2: Snapshot) -> None:\n \"\"\"Initialize the Diff object with two snapshots.\n\n Args:\n snapshot1 (Snapshot): The first snapshot.\n snapshot2 (Snapshot): The second snapshot.\n \"\"\"\n if snapshot1.snapshot_id < snapshot2.snapshot_id:\n self.snapshot1 = snapshot1\n self.snapshot2 = snapshot2\n else:\n self.snapshot1 = snapshot2\n self.snapshot2 = snapshot1\n\n # The level of the diff is the lowest level among the two snapshots\n if snapshot1.level == \"base\" or snapshot2.level == \"base\":\n self.level = \"base\"\n elif snapshot1.level == \"writable\" or snapshot2.level == \"writable\":\n self.level = \"writable\"\n else:\n self.level = \"full\"\n\n if self.snapshot1.arch != self.snapshot2.arch:\n raise ValueError(\"Snapshots have different architectures. Automatic diff is not supported.\")\n\n def _save_reg_diffs(self: Snapshot) -> None:\n self.regs = RegisterDiffAccessor(\n self.snapshot1.regs._generic_regs,\n self.snapshot1.regs._special_regs,\n self.snapshot1.regs._vec_fp_regs,\n )\n\n all_regs = dir(self.snapshot1.regs)\n all_regs = [reg for reg in all_regs if isinstance(self.snapshot1.regs.__getattribute__(reg), int | float)]\n\n for reg_name in all_regs:\n old_value = self.snapshot1.regs.__getattribute__(reg_name)\n new_value = self.snapshot2.regs.__getattribute__(reg_name)\n has_changed = old_value != new_value\n\n diff = RegisterDiff(\n old_value=old_value,\n new_value=new_value,\n has_changed=has_changed,\n )\n\n # Create diff object\n self.regs.__setattr__(reg_name, diff)\n\n def _resolve_maps_diff(self: Diff) -> None:\n # Handle memory maps\n all_maps_diffs = []\n handled_map2_indices = []\n\n for map1 in self.snapshot1.maps:\n # Find the corresponding map in the second snapshot\n map2 = None\n\n for map2_index, candidate in enumerate(self.snapshot2.maps):\n if 
map1.is_same_identity(candidate):\n map2 = candidate\n handled_map2_indices.append(map2_index)\n break\n\n if map2 is None:\n diff = MemoryMapDiff(\n old_map_state=map1,\n new_map_state=None,\n has_changed=True,\n )\n else:\n diff = MemoryMapDiff(\n old_map_state=map1,\n new_map_state=map2,\n has_changed=(map1 != map2),\n )\n\n all_maps_diffs.append(diff)\n\n new_pages = [self.snapshot2.maps[i] for i in range(len(self.snapshot2.maps)) if i not in handled_map2_indices]\n\n for new_page in new_pages:\n diff = MemoryMapDiff(\n old_map_state=None,\n new_map_state=new_page,\n has_changed=True,\n )\n\n all_maps_diffs.append(diff)\n\n # Convert the list to a MemoryMapDiffList\n self.maps = MemoryMapDiffList(\n all_maps_diffs,\n self.snapshot1._process_name,\n self.snapshot1._process_full_path,\n )\n\n @property\n def registers(self: Snapshot) -> SnapshotRegisters:\n \"\"\"Alias for regs.\"\"\"\n return self.regs\n\n def pprint_maps(self: Diff) -> None:\n \"\"\"Pretty print the memory maps diff.\"\"\"\n has_prev_changed = False\n\n for diff in self.maps:\n ref = diff.old_map_state if diff.old_map_state is not None else diff.new_map_state\n\n map_state_str = \"\"\n map_state_str += \"Memory Map:\\n\"\n map_state_str += f\" start: {ref.start:#x}\\n\"\n map_state_str += f\" end: {ref.end:#x}\\n\"\n map_state_str += f\" permissions: {ref.permissions}\\n\"\n map_state_str += f\" size: {ref.size:#x}\\n\"\n map_state_str += f\" offset: {ref.offset:#x}\\n\"\n map_state_str += f\" backing_file: {ref.backing_file}\\n\"\n\n # If is added\n if diff.old_map_state is None:\n pprint_diff_line(map_state_str, is_added=True)\n\n has_prev_changed = True\n # If is removed\n elif diff.new_map_state is None:\n pprint_diff_line(map_state_str, is_added=False)\n\n has_prev_changed = True\n elif diff.old_map_state.end != diff.new_map_state.end:\n printed_line = map_state_str\n\n new_map_end = diff.new_map_state.end\n\n start_strike = printed_line.find(\"end:\") + 4\n end_strike = 
printed_line.find(\"\\n\", start_strike)\n\n pprint_inline_diff(printed_line, start_strike, end_strike, f\"{hex(new_map_end)}\")\n\n has_prev_changed = True\n elif diff.old_map_state.permissions != diff.new_map_state.permissions:\n printed_line = map_state_str\n\n new_map_permissions = diff.new_map_state.permissions\n\n start_strike = printed_line.find(\"permissions:\") + 12\n end_strike = printed_line.find(\"\\n\", start_strike)\n\n pprint_inline_diff(printed_line, start_strike, end_strike, new_map_permissions)\n\n has_prev_changed = True\n elif diff.old_map_state.content != diff.new_map_state.content:\n printed_line = map_state_str + \" [content changed]\\n\"\n color_start = printed_line.find(\"[content changed]\")\n\n pprint_diff_substring(printed_line, color_start, color_start + len(\"[content changed]\"))\n\n has_prev_changed = True\n else:\n if has_prev_changed:\n print(\"\\n[...]\\n\")\n\n has_prev_changed = False\n\n def pprint_memory(\n self: Diff,\n start: int,\n end: int,\n file: str = \"hybrid\",\n override_word_size: int = None,\n integer_mode: bool = False,\n ) -> None:\n \"\"\"Pretty print the memory diff.\n\n Args:\n start (int): The start address of the memory diff.\n end (int): The end address of the memory diff.\n file (str, optional): The backing file for relative / absolute addressing. Defaults to \"hybrid\".\n override_word_size (int, optional): The word size to use for the diff in place of the ISA word size. Defaults to None.\n integer_mode (bool, optional): If True, the diff will be printed as hex integers (system endianness applies). 
Defaults to False.\n \"\"\"\n if self.level == \"base\":\n raise ValueError(\"Memory diff is not available at base snapshot level.\")\n\n if start > end:\n tmp = start\n start = end\n end = tmp\n\n word_size = (\n get_platform_gp_register_size(self.snapshot1.arch) if override_word_size is None else override_word_size\n )\n\n # Resolve the address\n if file == \"absolute\":\n address_start = start\n elif file == \"hybrid\":\n try:\n # Try to resolve the address as absolute\n self.snapshot1.memory[start, 1, \"absolute\"]\n address_start = start\n except ValueError:\n # If the address is not in the maps, we use the binary file\n address_start = start + self.snapshot1.maps.filter(\"binary\")[0].start\n file = \"binary\"\n else:\n map_file = self.snapshot1.maps.filter(file)[0]\n address_start = start + map_file.base\n file = map_file.backing_file if file != \"binary\" else \"binary\"\n\n extract_before = self.snapshot1.memory[start:end, file]\n extract_after = self.snapshot2.memory[start:end, file]\n\n file_info = f\" (file: {file})\" if file not in (\"absolute\", \"hybrid\") else \"\"\n print(f\"Memory diff from {start:#x} to {end:#x}{file_info}:\")\n\n pprint_memory_diff_util(\n address_start,\n extract_before,\n extract_after,\n word_size,\n self.snapshot1.maps,\n integer_mode=integer_mode,\n )\n\n def pprint_regs(self: Diff) -> None:\n \"\"\"Pretty print the general_purpose registers diffs.\"\"\"\n # Header with column alignment\n print(\"{:<19} {:<24} {:<20}\\n\".format(\"Register\", \"Old Value\", \"New Value\"))\n print(\"-\" * 58 + \"\")\n\n # Log all integer changes\n for attr_name in self.regs._generic_regs:\n attr = self.regs.__getattribute__(attr_name)\n\n if attr.has_changed:\n pprint_reg_diff_util(\n attr_name,\n self.snapshot1.maps,\n self.snapshot2.maps,\n attr.old_value,\n attr.new_value,\n )\n\n def pprint_regs_all(self: Diff) -> None:\n \"\"\"Pretty print the registers diffs (including special and vector registers).\"\"\"\n # Header with column 
alignment\n print(\"{:<19} {:<24} {:<20}\\n\".format(\"Register\", \"Old Value\", \"New Value\"))\n print(\"-\" * 58 + \"\")\n\n # Log all integer changes\n for attr_name in self.regs._generic_regs + self.regs._special_regs:\n attr = self.regs.__getattribute__(attr_name)\n\n if attr.has_changed:\n pprint_reg_diff_util(\n attr_name,\n self.snapshot1.maps,\n self.snapshot2.maps,\n attr.old_value,\n attr.new_value,\n )\n\n print()\n\n # Log all vector changes\n for attr1_name, attr2_name in self.regs._vec_fp_regs:\n attr1 = self.regs.__getattribute__(attr1_name)\n attr2 = self.regs.__getattribute__(attr2_name)\n\n if attr1.has_changed or attr2.has_changed:\n pprint_reg_diff_large_util(\n (attr1_name, attr2_name),\n (attr1.old_value, attr2.old_value),\n (attr1.new_value, attr2.new_value),\n )\n\n def pprint_registers(self: Diff) -> None:\n \"\"\"Alias afor pprint_regs.\"\"\"\n self.pprint_regs()\n\n def pprint_registers_all(self: Diff) -> None:\n \"\"\"Alias for pprint_regs_all.\"\"\"\n self.pprint_regs_all()\n\n def pprint_backtrace(self: Diff) -> None:\n \"\"\"Pretty print the backtrace diff.\"\"\"\n if self.level == \"base\":\n raise ValueError(\"Backtrace is not available at base level. 
Stack is not available\")\n\n prev_log_level = libcontext.general_logger\n libcontext.general_logger = \"SILENT\"\n stack_unwinder = stack_unwinding_provider(self.snapshot1.arch)\n backtrace1 = stack_unwinder.unwind(self.snapshot1)\n backtrace2 = stack_unwinder.unwind(self.snapshot2)\n\n maps1 = self.snapshot1.maps\n maps2 = self.snapshot2.maps\n\n symbols = self.snapshot1.memory._symbol_ref\n\n # Columns are Before, Unchanged, After\n # __ __\n # |__| |__|\n # |__| |__|\n # |__|__|__|\n # |__|__|__|\n # |__|__|__|\n column1 = []\n column2 = []\n column3 = []\n\n for addr1, addr2 in zip_longest(reversed(backtrace1), reversed(backtrace2)):\n col1 = get_colored_saved_address_util(addr1, maps1, symbols).strip() if addr1 else None\n col2 = None\n col3 = None\n\n if addr2:\n if addr1 == addr2:\n col2 = col1\n col1 = None\n else:\n col3 = get_colored_saved_address_util(addr2, maps2, symbols).strip()\n\n column1.append(col1)\n column2.append(col2)\n column3.append(col3)\n\n max_str_len = max([len(x) if x else 0 for x in column1 + column2 + column3])\n\n print(\"Backtrace diff:\")\n print(\"-\" * (max_str_len * 3 + 6))\n print(f\"{'Before':<{max_str_len}} | {'Unchanged':<{max_str_len}} | {'After':<{max_str_len}}\")\n for col1_val, col2_val, col3_val in zip(reversed(column1), reversed(column2), reversed(column3), strict=False):\n col1 = pad_colored_string(col1_val, max_str_len) if col1_val else \" \" * max_str_len\n col2 = pad_colored_string(col2_val, max_str_len) if col2_val else \" \" * max_str_len\n col3 = pad_colored_string(col3_val, max_str_len) if col3_val else \" \" * max_str_len\n\n print(f\"{col1} | {col2} | {col3}\")\n\n print(\"-\" * (max_str_len * 3 + 6))\n\n libcontext.general_logger = prev_log_level\n"},{"location":"from_pydoc/generated/snapshots/diff/#libdebug.snapshots.diff.Diff.registers","title":"registers property","text":"Alias for regs.
"},{"location":"from_pydoc/generated/snapshots/diff/#libdebug.snapshots.diff.Diff.__init__","title":"__init__(snapshot1, snapshot2)","text":"Initialize the Diff object with two snapshots.
Parameters:
Name Type Description Default
snapshot1 Snapshot The first snapshot.
required
snapshot2 Snapshot The second snapshot.
required Source code inlibdebug/snapshots/diff.py def __init__(self: Diff, snapshot1: Snapshot, snapshot2: Snapshot) -> None:\n \"\"\"Initialize the Diff object with two snapshots.\n\n Args:\n snapshot1 (Snapshot): The first snapshot.\n snapshot2 (Snapshot): The second snapshot.\n \"\"\"\n if snapshot1.snapshot_id < snapshot2.snapshot_id:\n self.snapshot1 = snapshot1\n self.snapshot2 = snapshot2\n else:\n self.snapshot1 = snapshot2\n self.snapshot2 = snapshot1\n\n # The level of the diff is the lowest level among the two snapshots\n if snapshot1.level == \"base\" or snapshot2.level == \"base\":\n self.level = \"base\"\n elif snapshot1.level == \"writable\" or snapshot2.level == \"writable\":\n self.level = \"writable\"\n else:\n self.level = \"full\"\n\n if self.snapshot1.arch != self.snapshot2.arch:\n raise ValueError(\"Snapshots have different architectures. Automatic diff is not supported.\")\n"},{"location":"from_pydoc/generated/snapshots/diff/#libdebug.snapshots.diff.Diff.pprint_backtrace","title":"pprint_backtrace()","text":"Pretty print the backtrace diff.
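The constructor above orders the snapshots by `snapshot_id` and downgrades the diff to the weaker of the two snapshot levels, since a diff can only compare what both sides captured. The level rule in isolation (`diff_level` is an illustrative helper, not a libdebug function):

```python
def diff_level(level1: str, level2: str) -> str:
    # "base" < "writable" < "full": the diff can only be as
    # detailed as the less detailed of the two snapshots.
    if "base" in (level1, level2):
        return "base"
    if "writable" in (level1, level2):
        return "writable"
    return "full"

print(diff_level("full", "writable"))  # writable
print(diff_level("full", "full"))      # full
print(diff_level("writable", "base"))  # base
```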
Source code inlibdebug/snapshots/diff.py def pprint_backtrace(self: Diff) -> None:\n \"\"\"Pretty print the backtrace diff.\"\"\"\n if self.level == \"base\":\n raise ValueError(\"Backtrace is not available at base level. Stack is not available\")\n\n prev_log_level = libcontext.general_logger\n libcontext.general_logger = \"SILENT\"\n stack_unwinder = stack_unwinding_provider(self.snapshot1.arch)\n backtrace1 = stack_unwinder.unwind(self.snapshot1)\n backtrace2 = stack_unwinder.unwind(self.snapshot2)\n\n maps1 = self.snapshot1.maps\n maps2 = self.snapshot2.maps\n\n symbols = self.snapshot1.memory._symbol_ref\n\n # Columns are Before, Unchanged, After\n # __ __\n # |__| |__|\n # |__| |__|\n # |__|__|__|\n # |__|__|__|\n # |__|__|__|\n column1 = []\n column2 = []\n column3 = []\n\n for addr1, addr2 in zip_longest(reversed(backtrace1), reversed(backtrace2)):\n col1 = get_colored_saved_address_util(addr1, maps1, symbols).strip() if addr1 else None\n col2 = None\n col3 = None\n\n if addr2:\n if addr1 == addr2:\n col2 = col1\n col1 = None\n else:\n col3 = get_colored_saved_address_util(addr2, maps2, symbols).strip()\n\n column1.append(col1)\n column2.append(col2)\n column3.append(col3)\n\n max_str_len = max([len(x) if x else 0 for x in column1 + column2 + column3])\n\n print(\"Backtrace diff:\")\n print(\"-\" * (max_str_len * 3 + 6))\n print(f\"{'Before':<{max_str_len}} | {'Unchanged':<{max_str_len}} | {'After':<{max_str_len}}\")\n for col1_val, col2_val, col3_val in zip(reversed(column1), reversed(column2), reversed(column3), strict=False):\n col1 = pad_colored_string(col1_val, max_str_len) if col1_val else \" \" * max_str_len\n col2 = pad_colored_string(col2_val, max_str_len) if col2_val else \" \" * max_str_len\n col3 = pad_colored_string(col3_val, max_str_len) if col3_val else \" \" * max_str_len\n\n print(f\"{col1} | {col2} | {col3}\")\n\n print(\"-\" * (max_str_len * 3 + 6))\n\n libcontext.general_logger = 
prev_log_level\n"},{"location":"from_pydoc/generated/snapshots/diff/#libdebug.snapshots.diff.Diff.pprint_maps","title":"pprint_maps()","text":"Pretty print the memory maps diff.
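`pprint_backtrace` above walks both backtraces from the bottom of the stack up with `zip_longest` and sorts each frame into one of three columns: Before (only in the first snapshot, or changed), Unchanged, or After. The column assignment can be sketched as follows (a hypothetical helper, simplified to return plain tuples instead of colored strings):

```python
from itertools import zip_longest

def backtrace_columns(backtrace1: list, backtrace2: list) -> list:
    """Return one (before, unchanged, after) tuple per frame,
    aligned from the bottom of the stack upward."""
    rows = []
    for addr1, addr2 in zip_longest(reversed(backtrace1), reversed(backtrace2)):
        before, unchanged, after = addr1, None, None
        if addr2 is not None:
            if addr1 == addr2:
                before, unchanged = None, addr1  # same frame on both sides
            else:
                after = addr2                    # frame replaced (or newly added)
        rows.append((before, unchanged, after))
    return rows

# main -> f -> g before; main -> f -> h after: only the innermost frame differs.
print(backtrace_columns(["g", "f", "main"], ["h", "f", "main"]))
```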
Source code inlibdebug/snapshots/diff.py def pprint_maps(self: Diff) -> None:\n \"\"\"Pretty print the memory maps diff.\"\"\"\n has_prev_changed = False\n\n for diff in self.maps:\n ref = diff.old_map_state if diff.old_map_state is not None else diff.new_map_state\n\n map_state_str = \"\"\n map_state_str += \"Memory Map:\\n\"\n map_state_str += f\" start: {ref.start:#x}\\n\"\n map_state_str += f\" end: {ref.end:#x}\\n\"\n map_state_str += f\" permissions: {ref.permissions}\\n\"\n map_state_str += f\" size: {ref.size:#x}\\n\"\n map_state_str += f\" offset: {ref.offset:#x}\\n\"\n map_state_str += f\" backing_file: {ref.backing_file}\\n\"\n\n # If is added\n if diff.old_map_state is None:\n pprint_diff_line(map_state_str, is_added=True)\n\n has_prev_changed = True\n # If is removed\n elif diff.new_map_state is None:\n pprint_diff_line(map_state_str, is_added=False)\n\n has_prev_changed = True\n elif diff.old_map_state.end != diff.new_map_state.end:\n printed_line = map_state_str\n\n new_map_end = diff.new_map_state.end\n\n start_strike = printed_line.find(\"end:\") + 4\n end_strike = printed_line.find(\"\\n\", start_strike)\n\n pprint_inline_diff(printed_line, start_strike, end_strike, f\"{hex(new_map_end)}\")\n\n has_prev_changed = True\n elif diff.old_map_state.permissions != diff.new_map_state.permissions:\n printed_line = map_state_str\n\n new_map_permissions = diff.new_map_state.permissions\n\n start_strike = printed_line.find(\"permissions:\") + 12\n end_strike = printed_line.find(\"\\n\", start_strike)\n\n pprint_inline_diff(printed_line, start_strike, end_strike, new_map_permissions)\n\n has_prev_changed = True\n elif diff.old_map_state.content != diff.new_map_state.content:\n printed_line = map_state_str + \" [content changed]\\n\"\n color_start = printed_line.find(\"[content changed]\")\n\n pprint_diff_substring(printed_line, color_start, color_start + len(\"[content changed]\"))\n\n has_prev_changed = True\n else:\n if has_prev_changed:\n 
print(\"\\n[...]\\n\")\n\n has_prev_changed = False\n"},{"location":"from_pydoc/generated/snapshots/diff/#libdebug.snapshots.diff.Diff.pprint_memory","title":"pprint_memory(start, end, file='hybrid', override_word_size=None, integer_mode=False)","text":"Pretty print the memory diff.
Parameters:
Name Type Description Default
start int The start address of the memory diff.
required
end int The end address of the memory diff.
required
file str The backing file for relative / absolute addressing. Defaults to \"hybrid\".
'hybrid'
override_word_size int The word size to use for the diff in place of the ISA word size. Defaults to None.
None
integer_mode bool If True, the diff will be printed as hex integers (system endianness applies). Defaults to False.
False Source code in libdebug/snapshots/diff.py def pprint_memory(\n self: Diff,\n start: int,\n end: int,\n file: str = \"hybrid\",\n override_word_size: int = None,\n integer_mode: bool = False,\n) -> None:\n \"\"\"Pretty print the memory diff.\n\n Args:\n start (int): The start address of the memory diff.\n end (int): The end address of the memory diff.\n file (str, optional): The backing file for relative / absolute addressing. Defaults to \"hybrid\".\n override_word_size (int, optional): The word size to use for the diff in place of the ISA word size. Defaults to None.\n integer_mode (bool, optional): If True, the diff will be printed as hex integers (system endianness applies). Defaults to False.\n \"\"\"\n if self.level == \"base\":\n raise ValueError(\"Memory diff is not available at base snapshot level.\")\n\n if start > end:\n tmp = start\n start = end\n end = tmp\n\n word_size = (\n get_platform_gp_register_size(self.snapshot1.arch) if override_word_size is None else override_word_size\n )\n\n # Resolve the address\n if file == \"absolute\":\n address_start = start\n elif file == \"hybrid\":\n try:\n # Try to resolve the address as absolute\n self.snapshot1.memory[start, 1, \"absolute\"]\n address_start = start\n except ValueError:\n # If the address is not in the maps, we use the binary file\n address_start = start + self.snapshot1.maps.filter(\"binary\")[0].start\n file = \"binary\"\n else:\n map_file = self.snapshot1.maps.filter(file)[0]\n address_start = start + map_file.base\n file = map_file.backing_file if file != \"binary\" else \"binary\"\n\n extract_before = self.snapshot1.memory[start:end, file]\n extract_after = self.snapshot2.memory[start:end, file]\n\n file_info = f\" (file: {file})\" if file not in (\"absolute\", \"hybrid\") else \"\"\n print(f\"Memory diff from {start:#x} to {end:#x}{file_info}:\")\n\n pprint_memory_diff_util(\n address_start,\n extract_before,\n extract_after,\n word_size,\n self.snapshot1.maps,\n 
integer_mode=integer_mode,\n )\n"},{"location":"from_pydoc/generated/snapshots/diff/#libdebug.snapshots.diff.Diff.pprint_registers","title":"pprint_registers()","text":"Alias for pprint_regs.
Source code in libdebug/snapshots/diff.py def pprint_registers(self: Diff) -> None:\n \"\"\"Alias for pprint_regs.\"\"\"\n self.pprint_regs()\n"},{"location":"from_pydoc/generated/snapshots/diff/#libdebug.snapshots.diff.Diff.pprint_registers_all","title":"pprint_registers_all()","text":"Alias for pprint_regs_all.
Source code in libdebug/snapshots/diff.py def pprint_registers_all(self: Diff) -> None:\n \"\"\"Alias for pprint_regs_all.\"\"\"\n self.pprint_regs_all()\n"},{"location":"from_pydoc/generated/snapshots/diff/#libdebug.snapshots.diff.Diff.pprint_regs","title":"pprint_regs()","text":"Pretty print the general-purpose registers diffs.
Source code inlibdebug/snapshots/diff.py def pprint_regs(self: Diff) -> None:\n \"\"\"Pretty print the general_purpose registers diffs.\"\"\"\n # Header with column alignment\n print(\"{:<19} {:<24} {:<20}\\n\".format(\"Register\", \"Old Value\", \"New Value\"))\n print(\"-\" * 58 + \"\")\n\n # Log all integer changes\n for attr_name in self.regs._generic_regs:\n attr = self.regs.__getattribute__(attr_name)\n\n if attr.has_changed:\n pprint_reg_diff_util(\n attr_name,\n self.snapshot1.maps,\n self.snapshot2.maps,\n attr.old_value,\n attr.new_value,\n )\n"},{"location":"from_pydoc/generated/snapshots/diff/#libdebug.snapshots.diff.Diff.pprint_regs_all","title":"pprint_regs_all()","text":"Pretty print the registers diffs (including special and vector registers).
Source code inlibdebug/snapshots/diff.py def pprint_regs_all(self: Diff) -> None:\n \"\"\"Pretty print the registers diffs (including special and vector registers).\"\"\"\n # Header with column alignment\n print(\"{:<19} {:<24} {:<20}\\n\".format(\"Register\", \"Old Value\", \"New Value\"))\n print(\"-\" * 58 + \"\")\n\n # Log all integer changes\n for attr_name in self.regs._generic_regs + self.regs._special_regs:\n attr = self.regs.__getattribute__(attr_name)\n\n if attr.has_changed:\n pprint_reg_diff_util(\n attr_name,\n self.snapshot1.maps,\n self.snapshot2.maps,\n attr.old_value,\n attr.new_value,\n )\n\n print()\n\n # Log all vector changes\n for attr1_name, attr2_name in self.regs._vec_fp_regs:\n attr1 = self.regs.__getattribute__(attr1_name)\n attr2 = self.regs.__getattribute__(attr2_name)\n\n if attr1.has_changed or attr2.has_changed:\n pprint_reg_diff_large_util(\n (attr1_name, attr2_name),\n (attr1.old_value, attr2.old_value),\n (attr1.new_value, attr2.new_value),\n )\n"},{"location":"from_pydoc/generated/snapshots/snapshot/","title":"libdebug.snapshots.snapshot","text":""},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot","title":"Snapshot","text":"This object represents a snapshot of a system task.
Snapshot levels:
- base: Registers
- writable: Registers, writable memory contents
- full: Registers, all readable memory contents
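The three levels trade completeness for size: `base` skips memory entirely, `writable` saves only maps with the write permission bit, and `full` attempts every map (unreadable regions such as [vvar] simply yield no contents when the read fails, as `_save_memory_maps` below shows). The per-map decision can be sketched as (hypothetical `should_save_map` helper):

```python
def should_save_map(level: str, permissions: str) -> bool:
    # "base" stores registers only; "writable" keeps maps whose
    # permission string contains the write bit; "full" tries all.
    if level == "base":
        return False
    if level == "writable":
        return "w" in permissions
    return True

print(should_save_map("base", "rw-p"))      # False
print(should_save_map("writable", "r-xp"))  # False
print(should_save_map("writable", "rw-p"))  # True
print(should_save_map("full", "r-xp"))      # True
```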
Source code inlibdebug/snapshots/snapshot.py class Snapshot:\n \"\"\"This object represents a snapshot of a system task.\n\n Snapshot levels:\n - base: Registers\n - writable: Registers, writable memory contents\n - full: Registers, all readable memory contents\n \"\"\"\n\n def _save_regs(self: Snapshot, thread: ThreadContext) -> None:\n # Create a register field for the snapshot\n self.regs = SnapshotRegisters(\n thread.thread_id,\n thread._register_holder.provide_regs(),\n thread._register_holder.provide_special_regs(),\n thread._register_holder.provide_vector_fp_regs(),\n )\n\n # Set all registers in the field\n all_regs = dir(thread.regs)\n all_regs = [reg for reg in all_regs if isinstance(thread.regs.__getattribute__(reg), int | float)]\n\n for reg_name in all_regs:\n reg_value = thread.regs.__getattribute__(reg_name)\n self.regs.__setattr__(reg_name, reg_value)\n\n def _save_memory_maps(self: Snapshot, debugger: InternalDebugger, writable_only: bool) -> None:\n \"\"\"Saves memory maps of the process to the snapshot.\"\"\"\n process_name = debugger._process_name\n full_process_path = debugger._process_full_path\n self.maps = MemoryMapSnapshotList([], process_name, full_process_path)\n\n for curr_map in debugger.maps:\n # Skip non-writable maps if requested\n # Always skip maps that fail on read\n if not writable_only or \"w\" in curr_map.permissions:\n try:\n contents = debugger.memory[curr_map.start : curr_map.end, \"absolute\"]\n except (ValueError, OSError, OverflowError):\n # There are some memory regions that cannot be read, such as [vvar], [vdso], etc.\n contents = None\n else:\n contents = None\n\n saved_map = MemoryMapSnapshot(\n curr_map.start,\n curr_map.end,\n curr_map.permissions,\n curr_map.size,\n curr_map.offset,\n curr_map.backing_file,\n contents,\n )\n self.maps.append(saved_map)\n\n @property\n def registers(self: Snapshot) -> SnapshotRegisters:\n \"\"\"Alias for regs.\"\"\"\n return self.regs\n\n @property\n def memory(self: Snapshot) -> 
SnapshotMemoryView:\n \"\"\"Returns a view of the memory of the thread.\"\"\"\n if self._memory is None:\n if self.level != \"base\":\n liblog.error(\"Inconsistent snapshot state: memory snapshot is not available.\")\n\n raise ValueError(\"Memory snapshot is not available at base level.\")\n\n return self._memory\n\n @property\n def mem(self: Snapshot) -> SnapshotMemoryView:\n \"\"\"Alias for memory.\"\"\"\n return self.memory\n\n @abstractmethod\n def diff(self: Snapshot, other: Snapshot) -> Diff:\n \"\"\"Creates a diff object between two snapshots.\"\"\"\n\n def save(self: Snapshot, file_path: str) -> None:\n \"\"\"Saves the snapshot object to a file.\"\"\"\n self._serialization_helper.save(self, file_path)\n\n def backtrace(self: Snapshot) -> list[int]:\n \"\"\"Returns the current backtrace of the thread.\"\"\"\n if self.level == \"base\":\n raise ValueError(\"Backtrace is not available at base level. Stack is not available.\")\n\n stack_unwinder = stack_unwinding_provider(self.arch)\n return stack_unwinder.unwind(self)\n\n def pprint_registers(self: Snapshot) -> None:\n \"\"\"Pretty prints the thread's registers.\"\"\"\n pprint_registers_util(self.regs, self.maps, self.regs._generic_regs)\n\n def pprint_regs(self: Snapshot) -> None:\n \"\"\"Alias for the `pprint_registers` method.\n\n Pretty prints the thread's registers.\n \"\"\"\n self.pprint_registers()\n\n def pprint_registers_all(self: Snapshot) -> None:\n \"\"\"Pretty prints all the thread's registers.\"\"\"\n pprint_registers_all_util(\n self.regs,\n self.maps,\n self.regs._generic_regs,\n self.regs._special_regs,\n self.regs._vec_fp_regs,\n )\n\n def pprint_regs_all(self: Snapshot) -> None:\n \"\"\"Alias for the `pprint_registers_all` method.\n\n Pretty prints all the thread's registers.\n \"\"\"\n self.pprint_registers_all()\n\n def pprint_backtrace(self: ThreadContext) -> None:\n \"\"\"Pretty prints the current backtrace of the thread.\"\"\"\n if self.level == \"base\":\n raise ValueError(\"Backtrace 
is not available at base level. Stack is not available.\")\n\n stack_unwinder = stack_unwinding_provider(self.arch)\n backtrace = stack_unwinder.unwind(self)\n pprint_backtrace_util(backtrace, self.maps, self._memory._symbol_ref)\n\n def pprint_maps(self: Snapshot) -> None:\n \"\"\"Prints the memory maps of the process.\"\"\"\n pprint_maps_util(self.maps)\n\n def pprint_memory(\n self: Snapshot,\n start: int,\n end: int,\n file: str = \"hybrid\",\n override_word_size: int | None = None,\n integer_mode: bool = False,\n ) -> None:\n \"\"\"Pretty print the memory diff.\n\n Args:\n start (int): The start address of the memory diff.\n end (int): The end address of the memory diff.\n file (str, optional): The backing file for relative / absolute addressing. Defaults to \"hybrid\".\n override_word_size (int, optional): The word size to use for the diff in place of the ISA word size. Defaults to None.\n integer_mode (bool, optional): If True, the diff will be printed as hex integers (system endianness applies). 
Defaults to False.\n \"\"\"\n if self.level == \"base\":\n raise ValueError(\"Memory is not available at base level.\")\n\n if start > end:\n tmp = start\n start = end\n end = tmp\n\n word_size = get_platform_gp_register_size(self.arch) if override_word_size is None else override_word_size\n\n # Resolve the address\n if file == \"absolute\":\n address_start = start\n elif file == \"hybrid\":\n try:\n # Try to resolve the address as absolute\n self.memory[start, 1, \"absolute\"]\n address_start = start\n except ValueError:\n # If the address is not in the maps, we use the binary file\n address_start = start + self.maps.filter(\"binary\")[0].start\n file = \"binary\"\n else:\n map_file = self.maps.filter(file)[0]\n address_start = start + map_file.base\n file = map_file.backing_file if file != \"binary\" else \"binary\"\n\n extract = self.memory[start:end, file]\n\n file_info = f\" (file: {file})\" if file not in (\"absolute\", \"hybrid\") else \"\"\n print(f\"Memory from {start:#x} to {end:#x}{file_info}:\")\n\n pprint_memory_util(\n address_start,\n extract,\n word_size,\n self.maps,\n integer_mode=integer_mode,\n )\n"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.mem","title":"mem property","text":"Alias for memory.
"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.memory","title":"memory property","text":"Returns a view of the memory of the thread.
"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.registers","title":"registers property","text":"Alias for regs.
"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot._save_memory_maps","title":"_save_memory_maps(debugger, writable_only)","text":"Saves memory maps of the process to the snapshot.
Source code inlibdebug/snapshots/snapshot.py def _save_memory_maps(self: Snapshot, debugger: InternalDebugger, writable_only: bool) -> None:\n \"\"\"Saves memory maps of the process to the snapshot.\"\"\"\n process_name = debugger._process_name\n full_process_path = debugger._process_full_path\n self.maps = MemoryMapSnapshotList([], process_name, full_process_path)\n\n for curr_map in debugger.maps:\n # Skip non-writable maps if requested\n # Always skip maps that fail on read\n if not writable_only or \"w\" in curr_map.permissions:\n try:\n contents = debugger.memory[curr_map.start : curr_map.end, \"absolute\"]\n except (ValueError, OSError, OverflowError):\n # There are some memory regions that cannot be read, such as [vvar], [vdso], etc.\n contents = None\n else:\n contents = None\n\n saved_map = MemoryMapSnapshot(\n curr_map.start,\n curr_map.end,\n curr_map.permissions,\n curr_map.size,\n curr_map.offset,\n curr_map.backing_file,\n contents,\n )\n self.maps.append(saved_map)\n"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.backtrace","title":"backtrace()","text":"Returns the current backtrace of the thread.
Source code in libdebug/snapshots/snapshot.py def backtrace(self: Snapshot) -> list[int]:\n \"\"\"Returns the current backtrace of the thread.\"\"\"\n if self.level == \"base\":\n raise ValueError(\"Backtrace is not available at base level. Stack is not available.\")\n\n stack_unwinder = stack_unwinding_provider(self.arch)\n return stack_unwinder.unwind(self)\n"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.diff","title":"diff(other) abstractmethod","text":"Creates a diff object between two snapshots.
Source code in libdebug/snapshots/snapshot.py @abstractmethod\ndef diff(self: Snapshot, other: Snapshot) -> Diff:\n \"\"\"Creates a diff object between two snapshots.\"\"\"\n"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.pprint_backtrace","title":"pprint_backtrace()","text":"Pretty prints the current backtrace of the thread.
Source code inlibdebug/snapshots/snapshot.py def pprint_backtrace(self: ThreadContext) -> None:\n \"\"\"Pretty prints the current backtrace of the thread.\"\"\"\n if self.level == \"base\":\n raise ValueError(\"Backtrace is not available at base level. Stack is not available.\")\n\n stack_unwinder = stack_unwinding_provider(self.arch)\n backtrace = stack_unwinder.unwind(self)\n pprint_backtrace_util(backtrace, self.maps, self._memory._symbol_ref)\n"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.pprint_maps","title":"pprint_maps()","text":"Prints the memory maps of the process.
Source code in libdebug/snapshots/snapshot.py def pprint_maps(self: Snapshot) -> None:\n \"\"\"Prints the memory maps of the process.\"\"\"\n pprint_maps_util(self.maps)\n"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.pprint_memory","title":"pprint_memory(start, end, file='hybrid', override_word_size=None, integer_mode=False)","text":"Pretty print the memory diff.
Parameters:
start (int): The start address of the memory diff. Required.
end (int): The end address of the memory diff. Required.
file (str): The backing file for relative / absolute addressing. Defaults to "hybrid".
override_word_size (int): The word size to use for the diff in place of the ISA word size. Defaults to None.
integer_mode (bool): If True, the diff will be printed as hex integers (system endianness applies). Defaults to False.
False Source code in libdebug/snapshots/snapshot.py def pprint_memory(\n self: Snapshot,\n start: int,\n end: int,\n file: str = \"hybrid\",\n override_word_size: int | None = None,\n integer_mode: bool = False,\n) -> None:\n \"\"\"Pretty print the memory diff.\n\n Args:\n start (int): The start address of the memory diff.\n end (int): The end address of the memory diff.\n file (str, optional): The backing file for relative / absolute addressing. Defaults to \"hybrid\".\n override_word_size (int, optional): The word size to use for the diff in place of the ISA word size. Defaults to None.\n integer_mode (bool, optional): If True, the diff will be printed as hex integers (system endianness applies). Defaults to False.\n \"\"\"\n if self.level == \"base\":\n raise ValueError(\"Memory is not available at base level.\")\n\n if start > end:\n tmp = start\n start = end\n end = tmp\n\n word_size = get_platform_gp_register_size(self.arch) if override_word_size is None else override_word_size\n\n # Resolve the address\n if file == \"absolute\":\n address_start = start\n elif file == \"hybrid\":\n try:\n # Try to resolve the address as absolute\n self.memory[start, 1, \"absolute\"]\n address_start = start\n except ValueError:\n # If the address is not in the maps, we use the binary file\n address_start = start + self.maps.filter(\"binary\")[0].start\n file = \"binary\"\n else:\n map_file = self.maps.filter(file)[0]\n address_start = start + map_file.base\n file = map_file.backing_file if file != \"binary\" else \"binary\"\n\n extract = self.memory[start:end, file]\n\n file_info = f\" (file: {file})\" if file not in (\"absolute\", \"hybrid\") else \"\"\n print(f\"Memory from {start:#x} to {end:#x}{file_info}:\")\n\n pprint_memory_util(\n address_start,\n extract,\n word_size,\n self.maps,\n integer_mode=integer_mode,\n )\n"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.pprint_registers","title":"pprint_registers()","text":"Pretty 
prints the thread's registers.
Source code in libdebug/snapshots/snapshot.py def pprint_registers(self: Snapshot) -> None:\n \"\"\"Pretty prints the thread's registers.\"\"\"\n pprint_registers_util(self.regs, self.maps, self.regs._generic_regs)\n"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.pprint_registers_all","title":"pprint_registers_all()","text":"Pretty prints all the thread's registers.
Source code inlibdebug/snapshots/snapshot.py def pprint_registers_all(self: Snapshot) -> None:\n \"\"\"Pretty prints all the thread's registers.\"\"\"\n pprint_registers_all_util(\n self.regs,\n self.maps,\n self.regs._generic_regs,\n self.regs._special_regs,\n self.regs._vec_fp_regs,\n )\n"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.pprint_regs","title":"pprint_regs()","text":"Alias for the pprint_registers method.
Pretty prints the thread's registers.
Source code in libdebug/snapshots/snapshot.py def pprint_regs(self: Snapshot) -> None:\n \"\"\"Alias for the `pprint_registers` method.\n\n Pretty prints the thread's registers.\n \"\"\"\n self.pprint_registers()\n"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.pprint_regs_all","title":"pprint_regs_all()","text":"Alias for the pprint_registers_all method.
Pretty prints all the thread's registers.
Source code in libdebug/snapshots/snapshot.py def pprint_regs_all(self: Snapshot) -> None:\n \"\"\"Alias for the `pprint_registers_all` method.\n\n Pretty prints all the thread's registers.\n \"\"\"\n self.pprint_registers_all()\n"},{"location":"from_pydoc/generated/snapshots/snapshot/#libdebug.snapshots.snapshot.Snapshot.save","title":"save(file_path)","text":"Saves the snapshot object to a file.
Source code inlibdebug/snapshots/snapshot.py def save(self: Snapshot, file_path: str) -> None:\n \"\"\"Saves the snapshot object to a file.\"\"\"\n self._serialization_helper.save(self, file_path)\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff/","title":"libdebug.snapshots.memory.memory_map_diff","text":""},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff/#libdebug.snapshots.memory.memory_map_diff.MemoryMapDiff","title":"MemoryMapDiff dataclass","text":"This object represents a diff between memory contents in a memory map.
Source code inlibdebug/snapshots/memory/memory_map_diff.py @dataclass\nclass MemoryMapDiff:\n \"\"\"This object represents a diff between memory contents in a memory map.\"\"\"\n\n old_map_state: MemoryMapSnapshot\n \"\"\"The old state of the memory map.\"\"\"\n\n new_map_state: MemoryMapSnapshot\n \"\"\"The new state of the memory map.\"\"\"\n\n has_changed: bool\n \"\"\"Whether the memory map has changed.\"\"\"\n\n _cached_diffs: list[slice] = None\n \"\"\"Cached diff slices.\"\"\"\n\n @property\n def content_diff(self: MemoryMapDiff) -> list[slice]:\n \"\"\"Resolve the content diffs of a memory map between two snapshots.\n\n Returns:\n list[slice]: The list of slices representing the relative positions of diverging content.\n \"\"\"\n # If the diff has already been computed, return it\n if self._cached_diffs is not None:\n return self._cached_diffs\n\n if self.old_map_state is None:\n raise ValueError(\"Cannot resolve content diff for a new memory map.\")\n if self.new_map_state is None:\n raise ValueError(\"Cannot resolve content diff for a removed memory map.\")\n\n if self.old_map_state.content is None or self.new_map_state.content is None:\n raise ValueError(\"Memory contents not available for this memory page.\")\n\n old_content = self.old_map_state.content\n new_content = self.new_map_state.content\n\n work_len = min(len(old_content), len(new_content))\n\n found_slices = []\n\n # Find all the slices\n cursor = 0\n while cursor < work_len:\n # Find the first differing byte of the sequence\n if old_content[cursor] == new_content[cursor]:\n cursor += 1\n continue\n\n start = cursor\n # Find the last non-zero byte of the sequence\n while cursor < work_len and old_content[cursor] != new_content[cursor]:\n cursor += 1\n\n end = cursor\n\n found_slices.append(slice(start, end))\n\n # Cache the diff slices\n self._cached_diffs = found_slices\n\n return 
found_slices\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff/#libdebug.snapshots.memory.memory_map_diff.MemoryMapDiff._cached_diffs","title":"_cached_diffs = None class-attribute instance-attribute","text":"Cached diff slices.
"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff/#libdebug.snapshots.memory.memory_map_diff.MemoryMapDiff.content_diff","title":"content_diff property","text":"Resolve the content diffs of a memory map between two snapshots.
Returns:
list[slice]: The list of slices representing the relative positions of diverging content.
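The scanning loop inside `content_diff` can be reproduced in isolation; a sketch under the assumption that both contents are available (`diff_slices` is a hypothetical name, not part of libdebug):

```python
def diff_slices(old: bytes, new: bytes) -> list[slice]:
    # Scan up to the length of the shorter buffer, collecting maximal
    # runs of differing bytes as slices, as content_diff does.
    work_len = min(len(old), len(new))
    found = []
    cursor = 0
    while cursor < work_len:
        if old[cursor] == new[cursor]:
            cursor += 1
            continue
        start = cursor
        while cursor < work_len and old[cursor] != new[cursor]:
            cursor += 1
        found.append(slice(start, cursor))
    return found
```

Each returned slice is relative to the start of the map's content, so it can be applied directly to either buffer to extract the diverging bytes.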
"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff/#libdebug.snapshots.memory.memory_map_diff.MemoryMapDiff.has_changed","title":"has_changed instance-attribute","text":"Whether the memory map has changed.
"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff/#libdebug.snapshots.memory.memory_map_diff.MemoryMapDiff.new_map_state","title":"new_map_state instance-attribute","text":"The new state of the memory map.
"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff/#libdebug.snapshots.memory.memory_map_diff.MemoryMapDiff.old_map_state","title":"old_map_state instance-attribute","text":"The old state of the memory map.
"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff_list/","title":"libdebug.snapshots.memory.memory_map_diff_list","text":""},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff_list/#libdebug.snapshots.memory.memory_map_diff_list.MemoryMapDiffList","title":"MemoryMapDiffList","text":" Bases: list
A list of memory map snapshot diffs from the target process.
Source code inlibdebug/snapshots/memory/memory_map_diff_list.py class MemoryMapDiffList(list):\n \"\"\"A list of memory map snapshot diffs from the target process.\"\"\"\n\n def __init__(\n self: MemoryMapDiffList,\n memory_maps: list[MemoryMapDiff],\n process_name: str,\n full_process_path: str,\n ) -> None:\n \"\"\"Initializes the MemoryMapSnapshotList.\"\"\"\n super().__init__(memory_maps)\n self._process_full_path = full_process_path\n self._process_name = process_name\n\n def _search_by_address(self: MemoryMapDiffList, address: int) -> list[MemoryMapDiff]:\n \"\"\"Searches for a memory map diff by address.\n\n Args:\n address (int): The address to search for.\n\n Returns:\n list[MemoryMapDiff]: The memory map diff matching the specified address.\n \"\"\"\n for vmap_diff in self:\n if vmap_diff.old_map_state.start <= address < vmap_diff.new_map_state.end:\n return [vmap_diff]\n return []\n\n def _search_by_backing_file(self: MemoryMapDiffList, backing_file: str) -> list[MemoryMapDiff]:\n \"\"\"Searches for a memory map diff by backing file.\n\n Args:\n backing_file (str): The backing file to search for.\n\n Returns:\n list[MemoryMapDiff]: The memory map diff matching the specified backing file.\n \"\"\"\n if backing_file in [\"binary\", self._process_name]:\n backing_file = self._process_full_path\n\n filtered_maps = []\n unique_files = set()\n\n for vmap_diff in self:\n compare_with_old = vmap_diff.old_map_state is not None\n compare_with_new = vmap_diff.new_map_state is not None\n\n if compare_with_old and backing_file in vmap_diff.old_map_state.backing_file:\n filtered_maps.append(vmap_diff)\n unique_files.add(vmap_diff.old_map_state.backing_file)\n elif compare_with_new and backing_file in vmap_diff.new_map_state.backing_file:\n filtered_maps.append(vmap_diff)\n unique_files.add(vmap_diff.new_map_state.backing_file)\n\n if len(unique_files) > 1:\n liblog.warning(\n f\"The substring {backing_file} is present in multiple, different backing files. 
The address resolution cannot be accurate. The matching backing files are: {', '.join(unique_files)}.\",\n )\n\n return filtered_maps\n\n def filter(self: MemoryMapDiffList, value: int | str) -> MemoryMapDiffList[MemoryMapDiff]:\n \"\"\"Filters the memory maps according to the specified value.\n\n If the value is an integer, it is treated as an address.\n If the value is a string, it is treated as a backing file.\n\n Args:\n value (int | str): The value to search for.\n\n Returns:\n MemoryMapDiffList[MemoryMapDiff]: The memory maps matching the specified value.\n \"\"\"\n if isinstance(value, int):\n filtered_maps = self._search_by_address(value)\n elif isinstance(value, str):\n filtered_maps = self._search_by_backing_file(value)\n else:\n raise TypeError(\"The value must be an integer or a string.\")\n\n return MemoryMapDiffList(filtered_maps, self._process_name, self._process_full_path)\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff_list/#libdebug.snapshots.memory.memory_map_diff_list.MemoryMapDiffList.__init__","title":"__init__(memory_maps, process_name, full_process_path)","text":"Initializes the MemoryMapSnapshotList.
Source code in libdebug/snapshots/memory/memory_map_diff_list.py def __init__(\n self: MemoryMapDiffList,\n memory_maps: list[MemoryMapDiff],\n process_name: str,\n full_process_path: str,\n) -> None:\n \"\"\"Initializes the MemoryMapSnapshotList.\"\"\"\n super().__init__(memory_maps)\n self._process_full_path = full_process_path\n self._process_name = process_name\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff_list/#libdebug.snapshots.memory.memory_map_diff_list.MemoryMapDiffList._search_by_address","title":"_search_by_address(address)","text":"Searches for a memory map diff by address.
Parameters:
address (int): The address to search for. Required.

Returns:

list[MemoryMapDiff]: The memory map diff matching the specified address.
Source code inlibdebug/snapshots/memory/memory_map_diff_list.py def _search_by_address(self: MemoryMapDiffList, address: int) -> list[MemoryMapDiff]:\n \"\"\"Searches for a memory map diff by address.\n\n Args:\n address (int): The address to search for.\n\n Returns:\n list[MemoryMapDiff]: The memory map diff matching the specified address.\n \"\"\"\n for vmap_diff in self:\n if vmap_diff.old_map_state.start <= address < vmap_diff.new_map_state.end:\n return [vmap_diff]\n return []\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff_list/#libdebug.snapshots.memory.memory_map_diff_list.MemoryMapDiffList._search_by_backing_file","title":"_search_by_backing_file(backing_file)","text":"Searches for a memory map diff by backing file.
Parameters:
Name Type Description Defaultbacking_file str The backing file to search for.
requiredReturns:
Type Descriptionlist[MemoryMapDiff] list[MemoryMapDiff]: The memory map diff matching the specified backing file.
Source code inlibdebug/snapshots/memory/memory_map_diff_list.py def _search_by_backing_file(self: MemoryMapDiffList, backing_file: str) -> list[MemoryMapDiff]:\n \"\"\"Searches for a memory map diff by backing file.\n\n Args:\n backing_file (str): The backing file to search for.\n\n Returns:\n list[MemoryMapDiff]: The memory map diff matching the specified backing file.\n \"\"\"\n if backing_file in [\"binary\", self._process_name]:\n backing_file = self._process_full_path\n\n filtered_maps = []\n unique_files = set()\n\n for vmap_diff in self:\n compare_with_old = vmap_diff.old_map_state is not None\n compare_with_new = vmap_diff.new_map_state is not None\n\n if compare_with_old and backing_file in vmap_diff.old_map_state.backing_file:\n filtered_maps.append(vmap_diff)\n unique_files.add(vmap_diff.old_map_state.backing_file)\n elif compare_with_new and backing_file in vmap_diff.new_map_state.backing_file:\n filtered_maps.append(vmap_diff)\n unique_files.add(vmap_diff.new_map_state.backing_file)\n\n if len(unique_files) > 1:\n liblog.warning(\n f\"The substring {backing_file} is present in multiple, different backing files. The address resolution cannot be accurate. The matching backing files are: {', '.join(unique_files)}.\",\n )\n\n return filtered_maps\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_diff_list/#libdebug.snapshots.memory.memory_map_diff_list.MemoryMapDiffList.filter","title":"filter(value)","text":"Filters the memory maps according to the specified value.
If the value is an integer, it is treated as an address. If the value is a string, it is treated as a backing file.
Parameters:
value (int | str): The value to search for. Required.

Returns:

MemoryMapDiffList[MemoryMapDiff]: The memory maps matching the specified value.
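The type dispatch performed by `filter` can be sketched with plain dictionaries standing in for map diffs (`filter_maps` and the dict layout are illustrative assumptions, not libdebug types):

```python
def filter_maps(maps: list[dict], value) -> list[dict]:
    if isinstance(value, int):
        # An integer is an address: return at most one containing map.
        return [m for m in maps if m["start"] <= value < m["end"]][:1]
    if isinstance(value, str):
        # A string is a backing-file substring: several maps may match.
        return [m for m in maps if value in m["backing_file"]]
    raise TypeError("The value must be an integer or a string.")
```

As in the real method, an address lookup yields at most one map, while a backing-file lookup is a substring match that may hit several maps from different files.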
Source code inlibdebug/snapshots/memory/memory_map_diff_list.py def filter(self: MemoryMapDiffList, value: int | str) -> MemoryMapDiffList[MemoryMapDiff]:\n \"\"\"Filters the memory maps according to the specified value.\n\n If the value is an integer, it is treated as an address.\n If the value is a string, it is treated as a backing file.\n\n Args:\n value (int | str): The value to search for.\n\n Returns:\n MemoryMapDiffList[MemoryMapDiff]: The memory maps matching the specified value.\n \"\"\"\n if isinstance(value, int):\n filtered_maps = self._search_by_address(value)\n elif isinstance(value, str):\n filtered_maps = self._search_by_backing_file(value)\n else:\n raise TypeError(\"The value must be an integer or a string.\")\n\n return MemoryMapDiffList(filtered_maps, self._process_name, self._process_full_path)\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot/","title":"libdebug.snapshots.memory.memory_map_snapshot","text":""},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot/#libdebug.snapshots.memory.memory_map_snapshot.MemoryMapSnapshot","title":"MemoryMapSnapshot dataclass","text":" Bases: MemoryMap
A snapshot of the memory map of the target process.
Attributes:
start (int): The start address of the memory map. You can also access it with the 'base' attribute.
end (int): The end address of the memory map.
permissions (str): The permissions of the memory map.
size (int): The size of the memory map.
offset (int): The relative offset of the memory map.
backing_file (str): The backing file of the memory map, or the symbolic name of the memory map.
content (bytes): The content of the memory map, used for snapshotted pages.
Source code inlibdebug/snapshots/memory/memory_map_snapshot.py @dataclass\nclass MemoryMapSnapshot(MemoryMap):\n \"\"\"A snapshot of the memory map of the target process.\n\n Attributes:\n start (int): The start address of the memory map. You can access it also with the 'base' attribute.\n end (int): The end address of the memory map.\n permissions (str): The permissions of the memory map.\n size (int): The size of the memory map.\n offset (int): The relative offset of the memory map.\n backing_file (str): The backing file of the memory map, or the symbolic name of the memory map.\n content (bytes): The content of the memory map, used for snapshotted pages.\n \"\"\"\n\n content: bytes = None\n \"\"\"The content of the memory map, used for snapshotted pages.\"\"\"\n\n def is_same_identity(self: MemoryMapSnapshot, other: MemoryMap) -> bool:\n \"\"\"Check if the memory map corresponds to another memory map.\"\"\"\n return self.start == other.start and self.backing_file == other.backing_file\n\n def __repr__(self: MemoryMapSnapshot) -> str:\n \"\"\"Return the string representation of the memory map.\"\"\"\n str_repr = super().__repr__()\n\n if self.content is not None:\n str_repr = str_repr[:-1] + \", content=...)\"\n\n return str_repr\n\n def __eq__(self, value: object) -> bool:\n \"\"\"Check if this MemoryMap is equal to another object.\n\n Args:\n value (object): The object to compare to.\n\n Returns:\n bool: True if the objects are equal, False otherwise.\n \"\"\"\n if not isinstance(value, MemoryMap):\n return False\n\n is_snapshot_map = isinstance(value, MemoryMapSnapshot)\n\n is_content_map_1 = self.content is not None\n is_content_map_2 = is_snapshot_map and value.content is not None\n\n if is_content_map_1 != is_content_map_2:\n liblog.warning(\"Comparing a memory map snapshot with content with a memory map without content. 
Equality will not take into account the content.\") \n\n # Check if the content is available and if it is the same\n should_compare_content = is_snapshot_map and is_content_map_1 and is_content_map_2\n same_content = not should_compare_content or self.content == value.content\n\n return (\n self.start == value.start\n and self.end == value.end\n and self.permissions == value.permissions\n and self.size == value.size\n and self.offset == value.offset\n and self.backing_file == value.backing_file\n and same_content\n )\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot/#libdebug.snapshots.memory.memory_map_snapshot.MemoryMapSnapshot.content","title":"content = None class-attribute instance-attribute","text":"The content of the memory map, used for snapshotted pages.
"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot/#libdebug.snapshots.memory.memory_map_snapshot.MemoryMapSnapshot.__eq__","title":"__eq__(value)","text":"Check if this MemoryMap is equal to another object.
Parameters:
value (object): The object to compare to. Required.

Returns:

bool: True if the objects are equal, False otherwise.
Source code inlibdebug/snapshots/memory/memory_map_snapshot.py def __eq__(self, value: object) -> bool:\n \"\"\"Check if this MemoryMap is equal to another object.\n\n Args:\n value (object): The object to compare to.\n\n Returns:\n bool: True if the objects are equal, False otherwise.\n \"\"\"\n if not isinstance(value, MemoryMap):\n return False\n\n is_snapshot_map = isinstance(value, MemoryMapSnapshot)\n\n is_content_map_1 = self.content is not None\n is_content_map_2 = is_snapshot_map and value.content is not None\n\n if is_content_map_1 != is_content_map_2:\n liblog.warning(\"Comparing a memory map snapshot with content with a memory map without content. Equality will not take into account the content.\") \n\n # Check if the content is available and if it is the same\n should_compare_content = is_snapshot_map and is_content_map_1 and is_content_map_2\n same_content = not should_compare_content or self.content == value.content\n\n return (\n self.start == value.start\n and self.end == value.end\n and self.permissions == value.permissions\n and self.size == value.size\n and self.offset == value.offset\n and self.backing_file == value.backing_file\n and same_content\n )\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot/#libdebug.snapshots.memory.memory_map_snapshot.MemoryMapSnapshot.__repr__","title":"__repr__()","text":"Return the string representation of the memory map.
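The content rule of `__eq__` above — contents count toward equality only when both maps actually carry them — can be sketched standalone (`maps_equal` and the dict fields are illustrative assumptions, not libdebug types):

```python
def maps_equal(a: dict, b: dict) -> bool:
    both_have_content = (
        a.get("content") is not None and b.get("content") is not None
    )
    # Content participates in equality only when both sides have it;
    # otherwise the comparison falls back to the map metadata alone.
    same_content = not both_have_content or a["content"] == b["content"]
    return (
        a["start"] == b["start"]
        and a["end"] == b["end"]
        and a["permissions"] == b["permissions"]
        and same_content
    )
```

This is why comparing a snapshotted map against a live one (which has no saved content) emits a warning rather than failing: the metadata can still match even though the contents cannot be checked.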
Source code in libdebug/snapshots/memory/memory_map_snapshot.py def __repr__(self: MemoryMapSnapshot) -> str:\n \"\"\"Return the string representation of the memory map.\"\"\"\n str_repr = super().__repr__()\n\n if self.content is not None:\n str_repr = str_repr[:-1] + \", content=...)\"\n\n return str_repr\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot/#libdebug.snapshots.memory.memory_map_snapshot.MemoryMapSnapshot.is_same_identity","title":"is_same_identity(other)","text":"Check if the memory map corresponds to another memory map.
Source code in libdebug/snapshots/memory/memory_map_snapshot.py def is_same_identity(self: MemoryMapSnapshot, other: MemoryMap) -> bool:\n \"\"\"Check if the memory map corresponds to another memory map.\"\"\"\n return self.start == other.start and self.backing_file == other.backing_file\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot_list/","title":"libdebug.snapshots.memory.memory_map_snapshot_list","text":""},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot_list/#libdebug.snapshots.memory.memory_map_snapshot_list.MemoryMapSnapshotList","title":"MemoryMapSnapshotList","text":" Bases: list[MemoryMapSnapshot]
A list of memory map snapshot from the target process.
Source code inlibdebug/snapshots/memory/memory_map_snapshot_list.py class MemoryMapSnapshotList(list[MemoryMapSnapshot]):\n \"\"\"A list of memory map snapshot from the target process.\"\"\"\n\n def __init__(\n self: MemoryMapSnapshotList,\n memory_maps: list[MemoryMapSnapshot],\n process_name: str,\n full_process_path: str,\n ) -> None:\n \"\"\"Initializes the MemoryMapSnapshotList.\"\"\"\n super().__init__(memory_maps)\n self._process_full_path = full_process_path\n self._process_name = process_name\n\n def _search_by_address(self: MemoryMapSnapshotList, address: int) -> list[MemoryMapSnapshot]:\n \"\"\"Searches for a memory map by address.\n\n Args:\n address (int): The address to search for.\n\n Returns:\n list[MemoryMapSnapshot]: The memory map matching the specified address.\n \"\"\"\n for vmap in self:\n if vmap.start <= address < vmap.end:\n return [vmap]\n return []\n\n def _search_by_backing_file(self: MemoryMapSnapshotList, backing_file: str) -> list[MemoryMapSnapshot]:\n \"\"\"Searches for a memory map by backing file.\n\n Args:\n backing_file (str): The backing file to search for.\n\n Returns:\n list[MemoryMapSnapshot]: The memory map matching the specified backing file.\n \"\"\"\n if backing_file in [\"binary\", self._process_name]:\n backing_file = self._process_full_path\n\n filtered_maps = []\n unique_files = set()\n\n for vmap in self:\n if backing_file in vmap.backing_file:\n filtered_maps.append(vmap)\n unique_files.add(vmap.backing_file)\n\n if len(unique_files) > 1:\n liblog.warning(\n f\"The substring {backing_file} is present in multiple, different backing files. The address resolution cannot be accurate. 
The matching backing files are: {', '.join(unique_files)}.\",\n )\n\n return filtered_maps\n\n def filter(self: MemoryMapSnapshotList, value: int | str) -> MemoryMapSnapshotList[MemoryMapSnapshot]:\n \"\"\"Filters the memory maps according to the specified value.\n\n If the value is an integer, it is treated as an address.\n If the value is a string, it is treated as a backing file.\n\n Args:\n value (int | str): The value to search for.\n\n Returns:\n MemoryMapSnapshotList[MemoryMapSnapshot]: The memory map snapshots matching the specified value.\n \"\"\"\n if isinstance(value, int):\n filtered_maps = self._search_by_address(value)\n elif isinstance(value, str):\n filtered_maps = self._search_by_backing_file(value)\n else:\n raise TypeError(\"The value must be an integer or a string.\")\n\n return MemoryMapSnapshotList(filtered_maps, self._process_name, self._process_full_path)\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot_list/#libdebug.snapshots.memory.memory_map_snapshot_list.MemoryMapSnapshotList.__init__","title":"__init__(memory_maps, process_name, full_process_path)","text":"Initializes the MemoryMapSnapshotList.
Source code in libdebug/snapshots/memory/memory_map_snapshot_list.py def __init__(\n self: MemoryMapSnapshotList,\n memory_maps: list[MemoryMapSnapshot],\n process_name: str,\n full_process_path: str,\n) -> None:\n \"\"\"Initializes the MemoryMapSnapshotList.\"\"\"\n super().__init__(memory_maps)\n self._process_full_path = full_process_path\n self._process_name = process_name\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot_list/#libdebug.snapshots.memory.memory_map_snapshot_list.MemoryMapSnapshotList._search_by_address","title":"_search_by_address(address)","text":"Searches for a memory map by address.
Parameters:
Name Type Description Defaultaddress int The address to search for.
requiredReturns:
Type Descriptionlist[MemoryMapSnapshot] list[MemoryMapSnapshot]: The memory map matching the specified address.
Source code in libdebug/snapshots/memory/memory_map_snapshot_list.py def _search_by_address(self: MemoryMapSnapshotList, address: int) -> list[MemoryMapSnapshot]:\n \"\"\"Searches for a memory map by address.\n\n Args:\n address (int): The address to search for.\n\n Returns:\n list[MemoryMapSnapshot]: The memory map matching the specified address.\n \"\"\"\n for vmap in self:\n if vmap.start <= address < vmap.end:\n return [vmap]\n return []\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot_list/#libdebug.snapshots.memory.memory_map_snapshot_list.MemoryMapSnapshotList._search_by_backing_file","title":"_search_by_backing_file(backing_file)","text":"Searches for a memory map by backing file.
Parameters:
Name Type Description Defaultbacking_file str The backing file to search for.
requiredReturns:
Type Descriptionlist[MemoryMapSnapshot] list[MemoryMapSnapshot]: The memory map matching the specified backing file.
Source code inlibdebug/snapshots/memory/memory_map_snapshot_list.py def _search_by_backing_file(self: MemoryMapSnapshotList, backing_file: str) -> list[MemoryMapSnapshot]:\n \"\"\"Searches for a memory map by backing file.\n\n Args:\n backing_file (str): The backing file to search for.\n\n Returns:\n list[MemoryMapSnapshot]: The memory map matching the specified backing file.\n \"\"\"\n if backing_file in [\"binary\", self._process_name]:\n backing_file = self._process_full_path\n\n filtered_maps = []\n unique_files = set()\n\n for vmap in self:\n if backing_file in vmap.backing_file:\n filtered_maps.append(vmap)\n unique_files.add(vmap.backing_file)\n\n if len(unique_files) > 1:\n liblog.warning(\n f\"The substring {backing_file} is present in multiple, different backing files. The address resolution cannot be accurate. The matching backing files are: {', '.join(unique_files)}.\",\n )\n\n return filtered_maps\n"},{"location":"from_pydoc/generated/snapshots/memory/memory_map_snapshot_list/#libdebug.snapshots.memory.memory_map_snapshot_list.MemoryMapSnapshotList.filter","title":"filter(value)","text":"Filters the memory maps according to the specified value.
If the value is an integer, it is treated as an address. If the value is a string, it is treated as a backing file.
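The two lookup modes can be sketched as a standalone helper (this is an illustrative reimplementation of the documented semantics, not libdebug itself; `Map` and `filter_maps` are hypothetical names):

```python
from dataclasses import dataclass

@dataclass
class Map:  # hypothetical stand-in for MemoryMapSnapshot
    start: int
    end: int
    backing_file: str

def filter_maps(maps, value):
    """Sketch of MemoryMapSnapshotList.filter: an int is an address,
    a str is a backing-file substring."""
    if isinstance(value, int):
        # Address lookup: the first map whose [start, end) range contains it.
        return [m for m in maps if m.start <= value < m.end][:1]
    if isinstance(value, str):
        # Backing-file lookup: substring match, may return several maps.
        return [m for m in maps if value in m.backing_file]
    raise TypeError("The value must be an integer or a string.")

maps = [Map(0x400000, 0x401000, "/bin/cat"), Map(0x7F0000, 0x7F1000, "libc.so.6")]
```

Note that, as in the documented code, an address match returns at most one map, while a backing-file substring may match several distinct files, which is why the real implementation warns about ambiguous substrings.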
Parameters:
Name Type Description Defaultvalue int | str The value to search for.
requiredReturns:
Type DescriptionMemoryMapSnapshotList[MemoryMapSnapshot] MemoryMapSnapshotList[MemoryMapSnapshot]: The memory map snapshots matching the specified value.
Source code inlibdebug/snapshots/memory/memory_map_snapshot_list.py def filter(self: MemoryMapSnapshotList, value: int | str) -> MemoryMapSnapshotList[MemoryMapSnapshot]:\n \"\"\"Filters the memory maps according to the specified value.\n\n If the value is an integer, it is treated as an address.\n If the value is a string, it is treated as a backing file.\n\n Args:\n value (int | str): The value to search for.\n\n Returns:\n MemoryMapSnapshotList[MemoryMapSnapshot]: The memory map snapshots matching the specified value.\n \"\"\"\n if isinstance(value, int):\n filtered_maps = self._search_by_address(value)\n elif isinstance(value, str):\n filtered_maps = self._search_by_backing_file(value)\n else:\n raise TypeError(\"The value must be an integer or a string.\")\n\n return MemoryMapSnapshotList(filtered_maps, self._process_name, self._process_full_path)\n"},{"location":"from_pydoc/generated/snapshots/memory/snapshot_memory_view/","title":"libdebug.snapshots.memory.snapshot_memory_view","text":""},{"location":"from_pydoc/generated/snapshots/memory/snapshot_memory_view/#libdebug.snapshots.memory.snapshot_memory_view.SnapshotMemoryView","title":"SnapshotMemoryView","text":" Bases: AbstractMemoryView
Memory view for a thread / process snapshot.
Source code inlibdebug/snapshots/memory/snapshot_memory_view.py class SnapshotMemoryView(AbstractMemoryView):\n \"\"\"Memory view for a thread / process snapshot.\"\"\"\n\n def __init__(self: SnapshotMemoryView, snapshot: ThreadSnapshot | ProcessSnapshot, symbols: SymbolList) -> None:\n \"\"\"Initializes the MemoryView.\"\"\"\n self._snap_ref = snapshot\n self._symbol_ref = symbols\n\n def read(self: SnapshotMemoryView, address: int, size: int) -> bytes:\n \"\"\"Reads memory from the target snapshot.\n\n Args:\n address (int): The address to read from.\n size (int): The number of bytes to read.\n\n Returns:\n bytes: The read bytes.\n \"\"\"\n snapshot_maps = self._snap_ref.maps\n\n start_index = 0\n start_map = None\n has_failed = True\n\n # Find the start map index\n while start_index < len(snapshot_maps):\n start_map = snapshot_maps[start_index]\n\n if address < start_map.start:\n break\n elif start_map.start <= address < start_map.end:\n has_failed = False\n break\n start_index += 1\n\n if has_failed:\n raise ValueError(\"No mapped memory at the specified start address.\")\n\n end_index = start_index\n end_address = address + size - 1\n end_map = None\n has_failed = True\n\n # Find the end map index\n while end_index < len(snapshot_maps):\n end_map = snapshot_maps[end_index]\n\n if end_address < end_map.start:\n break\n elif end_map.start <= end_address < end_map.end:\n has_failed = False\n break\n end_index += 1\n\n if has_failed:\n raise ValueError(\"No mapped memory at the specified address.\")\n\n target_maps = self._snap_ref.maps[start_index:end_index + 1]\n\n if not target_maps:\n raise ValueError(\"No mapped memory at the specified address.\")\n\n for target_map in target_maps:\n # The memory of the target map cannot be retrieved\n if target_map.content is None:\n error = \"One or more of the memory maps involved was not snapshotted\"\n\n if self._snap_ref.level == \"base\":\n error += \", snapshot level is base, no memory contents were saved.\"\n elif 
self._snap_ref.level == \"writable\" and \"w\" not in target_map.permissions:\n error += \", snapshot level is writable but the target page corresponds to non-writable memory.\"\n else:\n error += \" (it could be a priviledged memory map e.g. [vvar]).\"\n\n raise ValueError(error)\n\n start_offset = address - target_maps[0].start\n\n if len(target_maps) == 1:\n end_offset = start_offset + size\n return target_maps[0].content[start_offset:end_offset]\n else:\n data = target_maps[0].content[start_offset:]\n\n for target_map in target_maps[1:-1]:\n data += target_map.content\n\n end_offset = size - len(data)\n data += target_maps[-1].content[:end_offset]\n\n return data\n\n def write(self: SnapshotMemoryView, address: int, data: bytes) -> None:\n \"\"\"Writes memory to the target snapshot.\n\n Args:\n address (int): The address to write to.\n data (bytes): The data to write.\n \"\"\"\n raise NotImplementedError(\"Snapshot memory is read-only, duh.\")\n\n def find(\n self: SnapshotMemoryView,\n value: bytes | str | int,\n file: str = \"all\",\n start: int | None = None,\n end: int | None = None,\n ) -> list[int]:\n \"\"\"Searches for the given value in the saved memory maps of the snapshot.\n\n The start and end addresses can be used to limit the search to a specific range.\n If not specified, the search will be performed on the whole memory map.\n\n Args:\n value (bytes | str | int): The value to search for.\n file (str): The backing file to search the value in. Defaults to \"all\", which means all memory.\n start (int | None): The start address of the search. Defaults to None.\n end (int | None): The end address of the search. 
Defaults to None.\n\n Returns:\n list[int]: A list of offset where the value was found.\n \"\"\"\n if self._snap_ref.level == \"base\":\n raise ValueError(\"Memory snapshot is not available at base level.\")\n\n return super().find(value, file, start, end)\n\n def resolve_symbol(self: SnapshotMemoryView, symbol: str, file: str) -> Symbol:\n \"\"\"Resolve a symbol from the symbol list.\n\n Args:\n symbol (str): The symbol to resolve.\n file (str): The backing file to resolve the address in.\n\n Returns:\n Symbol: The resolved address.\n \"\"\"\n offset = 0\n\n if \"+\" in symbol:\n symbol, offset = symbol.split(\"+\")\n offset = int(offset, 16)\n\n results = self._symbol_ref.filter(symbol)\n\n # Get the first result that matches the backing file\n results = [result for result in results if file in result.backing_file]\n\n if len(results) == 0:\n raise ValueError(f\"Symbol {symbol} not found in snaphot memory.\")\n\n page_base = self._snap_ref.maps.filter(results[0].backing_file)[0].start\n\n return page_base + results[0].start + offset\n\n def resolve_address(\n self: SnapshotMemoryView,\n address: int,\n backing_file: str,\n skip_absolute_address_validation: bool = False,\n ) -> int:\n \"\"\"Normalizes and validates the specified address.\n\n Args:\n address (int): The address to normalize and validate.\n backing_file (str): The backing file to resolve the address in.\n skip_absolute_address_validation (bool, optional): Whether to skip bounds checking for absolute addresses. 
Defaults to False.\n\n Returns:\n int: The normalized and validated address.\n\n Raises:\n ValueError: If the substring `backing_file` is present in multiple backing files.\n \"\"\"\n if skip_absolute_address_validation and backing_file == \"absolute\":\n return address\n\n maps = self._snap_ref.maps\n\n if backing_file in [\"hybrid\", \"absolute\"]:\n if maps.filter(address):\n # If the address is absolute, we can return it directly\n return address\n elif backing_file == \"absolute\":\n # The address is explicitly an absolute address but we did not find it\n raise ValueError(\n \"The specified absolute address does not exist. Check the address or specify a backing file.\",\n )\n else:\n # If the address was not found and the backing file is not \"absolute\",\n # we have to assume it is in the main map\n backing_file = self._snap_ref._process_full_path\n liblog.warning(\n f\"No backing file specified and no corresponding absolute address found for {hex(address)}. Assuming {backing_file}.\",\n )\n\n filtered_maps = maps.filter(backing_file)\n\n return normalize_and_validate_address(address, filtered_maps)\n\n @property\n def maps(self: SnapshotMemoryView) -> MemoryMapSnapshotList:\n \"\"\"Returns a list of memory maps in the target process.\n\n Returns:\n MemoryMapList: The memory maps.\n \"\"\"\n return self._snap_ref.maps\n"},{"location":"from_pydoc/generated/snapshots/memory/snapshot_memory_view/#libdebug.snapshots.memory.snapshot_memory_view.SnapshotMemoryView.maps","title":"maps property","text":"Returns a list of memory maps in the target process.
Returns:
Name Type DescriptionMemoryMapList MemoryMapSnapshotList The memory maps.
"},{"location":"from_pydoc/generated/snapshots/memory/snapshot_memory_view/#libdebug.snapshots.memory.snapshot_memory_view.SnapshotMemoryView.__init__","title":"__init__(snapshot, symbols)","text":"Initializes the MemoryView.
Source code in libdebug/snapshots/memory/snapshot_memory_view.py def __init__(self: SnapshotMemoryView, snapshot: ThreadSnapshot | ProcessSnapshot, symbols: SymbolList) -> None:\n \"\"\"Initializes the MemoryView.\"\"\"\n self._snap_ref = snapshot\n self._symbol_ref = symbols\n"},{"location":"from_pydoc/generated/snapshots/memory/snapshot_memory_view/#libdebug.snapshots.memory.snapshot_memory_view.SnapshotMemoryView.find","title":"find(value, file='all', start=None, end=None)","text":"Searches for the given value in the saved memory maps of the snapshot.
The start and end addresses can be used to limit the search to a specific range. If not specified, the search will be performed on the whole memory map.
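A snapshot-style memory search over saved map contents can be sketched as follows (a hypothetical helper mirroring the documented behavior, not the libdebug implementation; maps are modeled as `(start_address, content)` pairs):

```python
def find_in_maps(maps, value):
    """Scan each saved map's content for `value` and return the
    absolute addresses of every occurrence."""
    hits = []
    for start, content in maps:
        offset = content.find(value)
        while offset != -1:
            # Convert the in-map offset back to an absolute address.
            hits.append(start + offset)
            offset = content.find(value, offset + 1)
    return hits

maps = [(0x1000, b"abcHELLOxyzHELLO"), (0x2000, b"noise")]
```

This also makes clear why `find` raises on a "base"-level snapshot: with no saved contents there is nothing to scan.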
Parameters:
Name Type Description Defaultvalue bytes | str | int The value to search for.
requiredfile str The backing file to search the value in. Defaults to \"all\", which means all memory.
'all' start int | None The start address of the search. Defaults to None.
None end int | None The end address of the search. Defaults to None.
None Returns:
Type Descriptionlist[int] list[int]: A list of offsets where the value was found.
Source code inlibdebug/snapshots/memory/snapshot_memory_view.py def find(\n self: SnapshotMemoryView,\n value: bytes | str | int,\n file: str = \"all\",\n start: int | None = None,\n end: int | None = None,\n) -> list[int]:\n \"\"\"Searches for the given value in the saved memory maps of the snapshot.\n\n The start and end addresses can be used to limit the search to a specific range.\n If not specified, the search will be performed on the whole memory map.\n\n Args:\n value (bytes | str | int): The value to search for.\n file (str): The backing file to search the value in. Defaults to \"all\", which means all memory.\n start (int | None): The start address of the search. Defaults to None.\n end (int | None): The end address of the search. Defaults to None.\n\n Returns:\n list[int]: A list of offset where the value was found.\n \"\"\"\n if self._snap_ref.level == \"base\":\n raise ValueError(\"Memory snapshot is not available at base level.\")\n\n return super().find(value, file, start, end)\n"},{"location":"from_pydoc/generated/snapshots/memory/snapshot_memory_view/#libdebug.snapshots.memory.snapshot_memory_view.SnapshotMemoryView.read","title":"read(address, size)","text":"Reads memory from the target snapshot.
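A read that spans several snapshotted maps has to stitch slices together, as `read` does above. A simplified standalone sketch (assuming sorted maps that are contiguous across the read range and all have saved content; `read_across_maps` is a hypothetical name):

```python
def read_across_maps(maps, address, size):
    """Stitch `size` bytes starting at `address` out of a list of
    (start_address, content) pairs."""
    data = b""
    need = size
    for start, content in maps:
        end = start + len(content)
        if end <= address or need == 0:
            continue  # map entirely before the read, or read already complete
        offset = max(address, start) - start
        chunk = content[offset:offset + need]
        data += chunk
        need -= len(chunk)
    if need:
        raise ValueError("No mapped memory at the specified address.")
    return data

maps = [(0x1000, b"AAAA"), (0x1004, b"BBBB")]
```

The real implementation additionally validates that both endpoints of the read fall inside mapped regions and reports which snapshot level prevented a map's contents from being saved.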
Parameters:
Name Type Description Defaultaddress int The address to read from.
requiredsize int The number of bytes to read.
requiredReturns:
Name Type Descriptionbytes bytes The read bytes.
Source code inlibdebug/snapshots/memory/snapshot_memory_view.py def read(self: SnapshotMemoryView, address: int, size: int) -> bytes:\n \"\"\"Reads memory from the target snapshot.\n\n Args:\n address (int): The address to read from.\n size (int): The number of bytes to read.\n\n Returns:\n bytes: The read bytes.\n \"\"\"\n snapshot_maps = self._snap_ref.maps\n\n start_index = 0\n start_map = None\n has_failed = True\n\n # Find the start map index\n while start_index < len(snapshot_maps):\n start_map = snapshot_maps[start_index]\n\n if address < start_map.start:\n break\n elif start_map.start <= address < start_map.end:\n has_failed = False\n break\n start_index += 1\n\n if has_failed:\n raise ValueError(\"No mapped memory at the specified start address.\")\n\n end_index = start_index\n end_address = address + size - 1\n end_map = None\n has_failed = True\n\n # Find the end map index\n while end_index < len(snapshot_maps):\n end_map = snapshot_maps[end_index]\n\n if end_address < end_map.start:\n break\n elif end_map.start <= end_address < end_map.end:\n has_failed = False\n break\n end_index += 1\n\n if has_failed:\n raise ValueError(\"No mapped memory at the specified address.\")\n\n target_maps = self._snap_ref.maps[start_index:end_index + 1]\n\n if not target_maps:\n raise ValueError(\"No mapped memory at the specified address.\")\n\n for target_map in target_maps:\n # The memory of the target map cannot be retrieved\n if target_map.content is None:\n error = \"One or more of the memory maps involved was not snapshotted\"\n\n if self._snap_ref.level == \"base\":\n error += \", snapshot level is base, no memory contents were saved.\"\n elif self._snap_ref.level == \"writable\" and \"w\" not in target_map.permissions:\n error += \", snapshot level is writable but the target page corresponds to non-writable memory.\"\n else:\n error += \" (it could be a priviledged memory map e.g. 
[vvar]).\"\n\n raise ValueError(error)\n\n start_offset = address - target_maps[0].start\n\n if len(target_maps) == 1:\n end_offset = start_offset + size\n return target_maps[0].content[start_offset:end_offset]\n else:\n data = target_maps[0].content[start_offset:]\n\n for target_map in target_maps[1:-1]:\n data += target_map.content\n\n end_offset = size - len(data)\n data += target_maps[-1].content[:end_offset]\n\n return data\n"},{"location":"from_pydoc/generated/snapshots/memory/snapshot_memory_view/#libdebug.snapshots.memory.snapshot_memory_view.SnapshotMemoryView.resolve_address","title":"resolve_address(address, backing_file, skip_absolute_address_validation=False)","text":"Normalizes and validates the specified address.
Parameters:
Name Type Description Defaultaddress int The address to normalize and validate.
requiredbacking_file str The backing file to resolve the address in.
requiredskip_absolute_address_validation bool Whether to skip bounds checking for absolute addresses. Defaults to False.
False Returns:
Name Type Descriptionint int The normalized and validated address.
Raises:
Type DescriptionValueError If the substring backing_file is present in multiple backing files.
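The hybrid/absolute resolution order described above can be sketched as a standalone helper (an illustrative reimplementation, not libdebug itself; the `/bin/target` fallback path and helper names are assumptions):

```python
def resolve(address, backing_file, maps):
    """Sketch of hybrid/absolute address resolution over
    (start, end, backing_file) triples."""
    def is_absolute(addr):
        return any(start <= addr < end for start, end, _ in maps)

    def base_of(name):
        for start, _end, f in maps:
            if name in f:
                return start
        raise ValueError(f"No map backed by {name}")

    if backing_file in ("hybrid", "absolute"):
        if is_absolute(address):
            return address  # a valid absolute address wins
        if backing_file == "absolute":
            raise ValueError("The specified absolute address does not exist.")
        # hybrid fallback: treat the address as relative to the main binary
        backing_file = "/bin/target"  # assumed main-binary path
    return base_of(backing_file) + address

maps = [(0x400000, 0x401000, "/bin/target"), (0x7F0000, 0x7F1000, "libc.so.6")]
```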
Source code in libdebug/snapshots/memory/snapshot_memory_view.py def resolve_address(\n self: SnapshotMemoryView,\n address: int,\n backing_file: str,\n skip_absolute_address_validation: bool = False,\n) -> int:\n \"\"\"Normalizes and validates the specified address.\n\n Args:\n address (int): The address to normalize and validate.\n backing_file (str): The backing file to resolve the address in.\n skip_absolute_address_validation (bool, optional): Whether to skip bounds checking for absolute addresses. Defaults to False.\n\n Returns:\n int: The normalized and validated address.\n\n Raises:\n ValueError: If the substring `backing_file` is present in multiple backing files.\n \"\"\"\n if skip_absolute_address_validation and backing_file == \"absolute\":\n return address\n\n maps = self._snap_ref.maps\n\n if backing_file in [\"hybrid\", \"absolute\"]:\n if maps.filter(address):\n # If the address is absolute, we can return it directly\n return address\n elif backing_file == \"absolute\":\n # The address is explicitly an absolute address but we did not find it\n raise ValueError(\n \"The specified absolute address does not exist. Check the address or specify a backing file.\",\n )\n else:\n # If the address was not found and the backing file is not \"absolute\",\n # we have to assume it is in the main map\n backing_file = self._snap_ref._process_full_path\n liblog.warning(\n f\"No backing file specified and no corresponding absolute address found for {hex(address)}. Assuming {backing_file}.\",\n )\n\n filtered_maps = maps.filter(backing_file)\n\n return normalize_and_validate_address(address, filtered_maps)\n"},{"location":"from_pydoc/generated/snapshots/memory/snapshot_memory_view/#libdebug.snapshots.memory.snapshot_memory_view.SnapshotMemoryView.resolve_symbol","title":"resolve_symbol(symbol, file)","text":"Resolve a symbol from the symbol list.
Parameters:
Name Type Description Defaultsymbol str The symbol to resolve.
requiredfile str The backing file to resolve the address in.
requiredReturns:
Name Type DescriptionSymbol Symbol The resolved address.
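As the source below shows, `resolve_symbol` accepts a `"symbol+offset"` spec where the part after `+` is parsed as hexadecimal. A minimal sketch of just that parsing step (`parse_symbol` is a hypothetical helper name):

```python
def parse_symbol(spec):
    """Split a 'symbol+offset' spec; the offset is hexadecimal."""
    offset = 0
    if "+" in spec:
        spec, raw = spec.split("+")
        offset = int(raw, 16)  # e.g. "1f" -> 31
    return spec, offset
```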
Source code inlibdebug/snapshots/memory/snapshot_memory_view.py def resolve_symbol(self: SnapshotMemoryView, symbol: str, file: str) -> Symbol:\n \"\"\"Resolve a symbol from the symbol list.\n\n Args:\n symbol (str): The symbol to resolve.\n file (str): The backing file to resolve the address in.\n\n Returns:\n Symbol: The resolved address.\n \"\"\"\n offset = 0\n\n if \"+\" in symbol:\n symbol, offset = symbol.split(\"+\")\n offset = int(offset, 16)\n\n results = self._symbol_ref.filter(symbol)\n\n # Get the first result that matches the backing file\n results = [result for result in results if file in result.backing_file]\n\n if len(results) == 0:\n raise ValueError(f\"Symbol {symbol} not found in snaphot memory.\")\n\n page_base = self._snap_ref.maps.filter(results[0].backing_file)[0].start\n\n return page_base + results[0].start + offset\n"},{"location":"from_pydoc/generated/snapshots/memory/snapshot_memory_view/#libdebug.snapshots.memory.snapshot_memory_view.SnapshotMemoryView.write","title":"write(address, data)","text":"Writes memory to the target snapshot.
Parameters:
Name Type Description Defaultaddress int The address to write to.
requireddata bytes The data to write.
required Source code in libdebug/snapshots/memory/snapshot_memory_view.py def write(self: SnapshotMemoryView, address: int, data: bytes) -> None:\n \"\"\"Writes memory to the target snapshot.\n\n Args:\n address (int): The address to write to.\n data (bytes): The data to write.\n \"\"\"\n raise NotImplementedError(\"Snapshot memory is read-only, duh.\")\n"},{"location":"from_pydoc/generated/snapshots/process/process_shapshot_diff/","title":"libdebug.snapshots.process.process_shapshot_diff","text":""},{"location":"from_pydoc/generated/snapshots/process/process_shapshot_diff/#libdebug.snapshots.process.process_shapshot_diff.ProcessSnapshotDiff","title":"ProcessSnapshotDiff","text":" Bases: Diff
This object represents a diff between process snapshots.
Source code inlibdebug/snapshots/process/process_shapshot_diff.py class ProcessSnapshotDiff(Diff):\n \"\"\"This object represents a diff between process snapshots.\"\"\"\n\n def __init__(self: ProcessSnapshotDiff, snapshot1: ProcessSnapshot, snapshot2: ProcessSnapshot) -> None:\n \"\"\"Returns a diff between given snapshots of the same process.\n\n Args:\n snapshot1 (ProcessSnapshot): A process snapshot.\n snapshot2 (ProcessSnapshot): A process snapshot.\n \"\"\"\n super().__init__(snapshot1, snapshot2)\n\n # Register diffs\n self._save_reg_diffs()\n\n # Memory map diffs\n self._resolve_maps_diff()\n\n # Thread diffs\n self._generate_thread_diffs()\n\n if (self.snapshot1._process_name == self.snapshot2._process_name) and (\n self.snapshot1.aslr_enabled or self.snapshot2.aslr_enabled\n ):\n liblog.warning(\"ASLR is enabled in either or both snapshots. Diff may be messy.\")\n\n def _generate_thread_diffs(self: ProcessSnapshotDiff) -> None:\n \"\"\"Generates diffs between threads in the two compared snapshots.\n\n Thread differences:\n - Born threads and dead threads are stored directly in separate lists (no state diff exists between the two).\n - Threads that exist in both snapshots are stored as diffs and can be accessed through the threads_diff property.\n \"\"\"\n self.born_threads = []\n self.dead_threads = []\n self.threads_diff = []\n\n snapshot1_by_tid = {thread.tid: thread for thread in self.snapshot1.threads}\n snapshot2_by_tid = {thread.tid: thread for thread in self.snapshot2.threads}\n\n for tid, t1 in snapshot1_by_tid.items():\n t2 = snapshot2_by_tid.get(tid)\n if t2 is None:\n self.dead_threads.append(t1)\n else:\n diff = LightweightThreadSnapshotDiff(t1, t2, self)\n self.threads_diff.append(diff)\n\n for tid, t2 in snapshot2_by_tid.items():\n if tid not in snapshot1_by_tid:\n 
self.born_threads.append(t2)\n"},{"location":"from_pydoc/generated/snapshots/process/process_shapshot_diff/#libdebug.snapshots.process.process_shapshot_diff.ProcessSnapshotDiff.__init__","title":"__init__(snapshot1, snapshot2)","text":"Returns a diff between given snapshots of the same process.
Parameters:
Name Type Description Defaultsnapshot1 ProcessSnapshot A process snapshot.
requiredsnapshot2 ProcessSnapshot A process snapshot.
required Source code in libdebug/snapshots/process/process_shapshot_diff.py def __init__(self: ProcessSnapshotDiff, snapshot1: ProcessSnapshot, snapshot2: ProcessSnapshot) -> None:\n \"\"\"Returns a diff between given snapshots of the same process.\n\n Args:\n snapshot1 (ProcessSnapshot): A process snapshot.\n snapshot2 (ProcessSnapshot): A process snapshot.\n \"\"\"\n super().__init__(snapshot1, snapshot2)\n\n # Register diffs\n self._save_reg_diffs()\n\n # Memory map diffs\n self._resolve_maps_diff()\n\n # Thread diffs\n self._generate_thread_diffs()\n\n if (self.snapshot1._process_name == self.snapshot2._process_name) and (\n self.snapshot1.aslr_enabled or self.snapshot2.aslr_enabled\n ):\n liblog.warning(\"ASLR is enabled in either or both snapshots. Diff may be messy.\")\n"},{"location":"from_pydoc/generated/snapshots/process/process_shapshot_diff/#libdebug.snapshots.process.process_shapshot_diff.ProcessSnapshotDiff._generate_thread_diffs","title":"_generate_thread_diffs()","text":"Generates diffs between threads in the two compared snapshots.
Thread differences: born threads and dead threads are stored directly in separate lists (no state diff exists between the two); threads that exist in both snapshots are stored as diffs and can be accessed through the threads_diff property. Source code in libdebug/snapshots/process/process_shapshot_diff.py def _generate_thread_diffs(self: ProcessSnapshotDiff) -> None:\n \"\"\"Generates diffs between threads in the two compared snapshots.\n\n Thread differences:\n - Born threads and dead threads are stored directly in separate lists (no state diff exists between the two).\n - Threads that exist in both snapshots are stored as diffs and can be accessed through the threads_diff property.\n \"\"\"\n self.born_threads = []\n self.dead_threads = []\n self.threads_diff = []\n\n snapshot1_by_tid = {thread.tid: thread for thread in self.snapshot1.threads}\n snapshot2_by_tid = {thread.tid: thread for thread in self.snapshot2.threads}\n\n for tid, t1 in snapshot1_by_tid.items():\n t2 = snapshot2_by_tid.get(tid)\n if t2 is None:\n self.dead_threads.append(t1)\n else:\n diff = LightweightThreadSnapshotDiff(t1, t2, self)\n self.threads_diff.append(diff)\n\n for tid, t2 in snapshot2_by_tid.items():\n if tid not in snapshot1_by_tid:\n self.born_threads.append(t2)\n"},{"location":"from_pydoc/generated/snapshots/process/process_snapshot/","title":"libdebug.snapshots.process.process_snapshot","text":""},{"location":"from_pydoc/generated/snapshots/process/process_snapshot/#libdebug.snapshots.process.process_snapshot.ProcessSnapshot","title":"ProcessSnapshot","text":" Bases: Snapshot
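The TID-based matching used above can be reduced to a small standalone sketch (an illustrative reimplementation of the documented pairing logic, not libdebug itself; threads are modeled as plain dicts):

```python
def diff_threads(threads1, threads2):
    """Classify threads by TID: only in snapshot 1 -> dead,
    only in snapshot 2 -> born, in both -> paired for a per-thread diff."""
    by_tid1 = {t["tid"]: t for t in threads1}
    by_tid2 = {t["tid"]: t for t in threads2}
    dead = [t for tid, t in by_tid1.items() if tid not in by_tid2]
    born = [t for tid, t in by_tid2.items() if tid not in by_tid1]
    common = [(by_tid1[tid], by_tid2[tid]) for tid in by_tid1 if tid in by_tid2]
    return born, dead, common

s1 = [{"tid": 1}, {"tid": 2}]
s2 = [{"tid": 2}, {"tid": 3}]
```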
This object represents a snapshot of the target process. It holds information about the process's state.
Snapshot levels: - base: Registers - writable: Registers, writable memory contents - full: Registers, stack, all readable memory contents
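The three levels can be summarized as a selection rule over the process maps (a hedged sketch of the documented behavior, not the libdebug implementation; maps are modeled as dicts with a `perms` string):

```python
def maps_to_save(maps, level):
    """Which map contents each snapshot level saves:
    'base' saves none, 'writable' only writable maps,
    'full' everything readable."""
    if level == "base":
        return []
    if level == "writable":
        return [m for m in maps if "w" in m["perms"]]
    if level == "full":
        return [m for m in maps if "r" in m["perms"]]
    raise ValueError(f"Invalid snapshot level {level}")

maps = [{"perms": "r-x"}, {"perms": "rw-"}, {"perms": "r--"}]
```

This is also why reads from a "base" snapshot fail and why a "writable" snapshot cannot serve reads that touch read-only pages.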
Source code inlibdebug/snapshots/process/process_snapshot.py class ProcessSnapshot(Snapshot):\n \"\"\"This object represents a snapshot of the target process. It holds information about the process's state.\n\n Snapshot levels:\n - base: Registers\n - writable: Registers, writable memory contents\n - full: Registers, stack, all readable memory contents\n \"\"\"\n\n def __init__(\n self: ProcessSnapshot, debugger: InternalDebugger, level: str = \"base\", name: str | None = None\n ) -> None:\n \"\"\"Creates a new snapshot object for the given process.\n\n Args:\n debugger (Debugger): The thread to take a snapshot of.\n level (str, optional): The level of the snapshot. Defaults to \"base\".\n name (str, optional): A name associated to the snapshot. Defaults to None.\n \"\"\"\n # Set id of the snapshot and increment the counter\n self.snapshot_id = debugger._snapshot_count\n debugger.notify_snaphot_taken()\n\n # Basic snapshot info\n self.process_id = debugger.process_id\n self.pid = self.process_id\n self.name = name\n self.level = level\n self.arch = debugger.arch\n self.aslr_enabled = debugger.aslr_enabled\n self._process_full_path = debugger._process_full_path\n self._process_name = debugger._process_name\n self._serialization_helper = debugger.serialization_helper\n\n # Memory maps\n match level:\n case \"base\":\n self.maps = MemoryMapSnapshotList([], self._process_name, self._process_full_path)\n\n for curr_map in debugger.maps:\n saved_map = MemoryMapSnapshot(\n start=curr_map.start,\n end=curr_map.end,\n permissions=curr_map.permissions,\n size=curr_map.size,\n offset=curr_map.offset,\n backing_file=curr_map.backing_file,\n content=None,\n )\n self.maps.append(saved_map)\n\n self._memory = None\n case \"writable\":\n if not debugger.fast_memory:\n liblog.warning(\n \"Memory snapshot requested but fast memory is not enabled. 
This will take a long time.\",\n )\n\n # Save all memory pages\n self._save_memory_maps(debugger, writable_only=True)\n\n self._memory = SnapshotMemoryView(self, debugger.symbols)\n case \"full\":\n if not debugger.fast_memory:\n liblog.warning(\n \"Memory snapshot requested but fast memory is not enabled. This will take a long time.\",\n )\n\n # Save all memory pages\n self._save_memory_maps(debugger, writable_only=False)\n\n self._memory = SnapshotMemoryView(self, debugger.symbols)\n case _:\n raise ValueError(f\"Invalid snapshot level {level}\")\n\n # Snapshot the threads\n self._save_threads(debugger)\n\n # Log the creation of the snapshot\n named_addition = \" named \" + self.name if name is not None else \"\"\n liblog.debugger(\n f\"Created snapshot {self.snapshot_id} of level {self.level} for process {self.pid}{named_addition}\"\n )\n\n def _save_threads(self: ProcessSnapshot, debugger: InternalDebugger) -> None:\n self.threads = []\n\n for thread in debugger.threads:\n # Create a lightweight snapshot for the thread\n lw_snapshot = LightweightThreadSnapshot(thread, self)\n\n self.threads.append(lw_snapshot)\n\n @property\n def regs(self: ProcessSnapshot) -> SnapshotRegisters:\n \"\"\"Returns the registers of the process snapshot.\"\"\"\n return self.threads[0].regs\n\n def diff(self: ProcessSnapshot, other: ProcessSnapshot) -> Diff:\n \"\"\"Returns the diff between two process snapshots.\"\"\"\n if not isinstance(other, ProcessSnapshot):\n raise TypeError(\"Both arguments must be ProcessSnapshot objects.\")\n\n return ProcessSnapshotDiff(self, other)\n"},{"location":"from_pydoc/generated/snapshots/process/process_snapshot/#libdebug.snapshots.process.process_snapshot.ProcessSnapshot.regs","title":"regs property","text":"Returns the registers of the process snapshot.
"},{"location":"from_pydoc/generated/snapshots/process/process_snapshot/#libdebug.snapshots.process.process_snapshot.ProcessSnapshot.__init__","title":"__init__(debugger, level='base', name=None)","text":"Creates a new snapshot object for the given process.
Parameters:
Name Type Description Defaultdebugger Debugger The debugger of the process to take a snapshot of.
requiredlevel str The level of the snapshot. Defaults to \"base\".
'base' name str A name associated with the snapshot. Defaults to None.
None Source code in libdebug/snapshots/process/process_snapshot.py def __init__(\n self: ProcessSnapshot, debugger: InternalDebugger, level: str = \"base\", name: str | None = None\n) -> None:\n \"\"\"Creates a new snapshot object for the given process.\n\n Args:\n debugger (Debugger): The thread to take a snapshot of.\n level (str, optional): The level of the snapshot. Defaults to \"base\".\n name (str, optional): A name associated to the snapshot. Defaults to None.\n \"\"\"\n # Set id of the snapshot and increment the counter\n self.snapshot_id = debugger._snapshot_count\n debugger.notify_snaphot_taken()\n\n # Basic snapshot info\n self.process_id = debugger.process_id\n self.pid = self.process_id\n self.name = name\n self.level = level\n self.arch = debugger.arch\n self.aslr_enabled = debugger.aslr_enabled\n self._process_full_path = debugger._process_full_path\n self._process_name = debugger._process_name\n self._serialization_helper = debugger.serialization_helper\n\n # Memory maps\n match level:\n case \"base\":\n self.maps = MemoryMapSnapshotList([], self._process_name, self._process_full_path)\n\n for curr_map in debugger.maps:\n saved_map = MemoryMapSnapshot(\n start=curr_map.start,\n end=curr_map.end,\n permissions=curr_map.permissions,\n size=curr_map.size,\n offset=curr_map.offset,\n backing_file=curr_map.backing_file,\n content=None,\n )\n self.maps.append(saved_map)\n\n self._memory = None\n case \"writable\":\n if not debugger.fast_memory:\n liblog.warning(\n \"Memory snapshot requested but fast memory is not enabled. This will take a long time.\",\n )\n\n # Save all memory pages\n self._save_memory_maps(debugger, writable_only=True)\n\n self._memory = SnapshotMemoryView(self, debugger.symbols)\n case \"full\":\n if not debugger.fast_memory:\n liblog.warning(\n \"Memory snapshot requested but fast memory is not enabled. This will take a long time.\",\n )\n\n # Save all memory pages\n self._save_memory_maps(debugger, writable_only=False)\n\n self._memory = SnapshotMemoryView(self, debugger.symbols)\n case _:\n raise ValueError(f\"Invalid snapshot level {level}\")\n\n # Snapshot the threads\n self._save_threads(debugger)\n\n # Log the creation of the snapshot\n named_addition = \" named \" + self.name if name is not None else \"\"\n liblog.debugger(\n f\"Created snapshot {self.snapshot_id} of level {self.level} for process {self.pid}{named_addition}\"\n )\n"},{"location":"from_pydoc/generated/snapshots/process/process_snapshot/#libdebug.snapshots.process.process_snapshot.ProcessSnapshot.diff","title":"diff(other)","text":"Returns the diff between two process snapshots.
Source code in libdebug/snapshots/process/process_snapshot.py def diff(self: ProcessSnapshot, other: ProcessSnapshot) -> Diff:\n \"\"\"Returns the diff between two process snapshots.\"\"\"\n if not isinstance(other, ProcessSnapshot):\n raise TypeError(\"Both arguments must be ProcessSnapshot objects.\")\n\n return ProcessSnapshotDiff(self, other)\n"},{"location":"from_pydoc/generated/snapshots/registers/register_diff/","title":"libdebug.snapshots.registers.register_diff","text":""},{"location":"from_pydoc/generated/snapshots/registers/register_diff/#libdebug.snapshots.registers.register_diff.RegisterDiff","title":"RegisterDiff dataclass","text":"This object represents a diff between registers in a thread snapshot.
Source code in libdebug/snapshots/registers/register_diff.py @dataclass\nclass RegisterDiff:\n \"\"\"This object represents a diff between registers in a thread snapshot.\"\"\"\n\n old_value: int | float\n \"\"\"The old value of the register.\"\"\"\n\n new_value: int | float\n \"\"\"The new value of the register.\"\"\"\n\n has_changed: bool\n \"\"\"Whether the register has changed.\"\"\"\n\n def __repr__(self: RegisterDiff) -> str:\n \"\"\"Return a string representation of the RegisterDiff object.\"\"\"\n old_value_str = hex(self.old_value) if isinstance(self.old_value, int) else str(self.old_value)\n new_value_str = hex(self.new_value) if isinstance(self.new_value, int) else str(self.new_value)\n return f\"RegisterDiff(old_value={old_value_str}, new_value={new_value_str}, has_changed={self.has_changed})\"\n"},{"location":"from_pydoc/generated/snapshots/registers/register_diff/#libdebug.snapshots.registers.register_diff.RegisterDiff.has_changed","title":"has_changed instance-attribute","text":"Whether the register has changed.
"},{"location":"from_pydoc/generated/snapshots/registers/register_diff/#libdebug.snapshots.registers.register_diff.RegisterDiff.new_value","title":"new_value instance-attribute","text":"The new value of the register.
"},{"location":"from_pydoc/generated/snapshots/registers/register_diff/#libdebug.snapshots.registers.register_diff.RegisterDiff.old_value","title":"old_value instance-attribute","text":"The old value of the register.
"},{"location":"from_pydoc/generated/snapshots/registers/register_diff/#libdebug.snapshots.registers.register_diff.RegisterDiff.__repr__","title":"__repr__()","text":"Return a string representation of the RegisterDiff object.
Source code in libdebug/snapshots/registers/register_diff.py def __repr__(self: RegisterDiff) -> str:\n \"\"\"Return a string representation of the RegisterDiff object.\"\"\"\n old_value_str = hex(self.old_value) if isinstance(self.old_value, int) else str(self.old_value)\n new_value_str = hex(self.new_value) if isinstance(self.new_value, int) else str(self.new_value)\n return f\"RegisterDiff(old_value={old_value_str}, new_value={new_value_str}, has_changed={self.has_changed})\"\n"},{"location":"from_pydoc/generated/snapshots/registers/register_diff_accessor/","title":"libdebug.snapshots.registers.register_diff_accessor","text":""},{"location":"from_pydoc/generated/snapshots/registers/register_diff_accessor/#libdebug.snapshots.registers.register_diff_accessor.RegisterDiffAccessor","title":"RegisterDiffAccessor","text":"Class used to access RegisterDiff objects for a thread snapshot.
Source code in libdebug/snapshots/registers/register_diff_accessor.py class RegisterDiffAccessor:\n \"\"\"Class used to access RegisterDiff objects for a thread snapshot.\"\"\"\n\n def __init__(\n self: RegisterDiffAccessor,\n generic_regs: list[str],\n special_regs: list[str],\n vec_fp_regs: list[str],\n ) -> None:\n \"\"\"Initializes the RegisterDiffAccessor object.\n\n Args:\n generic_regs (list[str]): The list of generic registers to include in the repr.\n special_regs (list[str]): The list of special registers to include in the repr.\n vec_fp_regs (list[str]): The list of vector and floating point registers to include in the repr.\n \"\"\"\n self._generic_regs = generic_regs\n self._special_regs = special_regs\n self._vec_fp_regs = vec_fp_regs\n\n def __repr__(self: RegisterDiffAccessor) -> str:\n \"\"\"Return a string representation of the RegisterDiffAccessor object.\"\"\"\n str_repr = \"RegisterDiffAccessor(\\n\\n\"\n\n # Header with column alignment\n str_repr += \"{:<15} {:<20} {:<20}\\n\".format(\"Register\", \"Old Value\", \"New Value\")\n str_repr += \"-\" * 60 + \"\\n\"\n\n # Log all integer changes\n for attr_name in self._generic_regs:\n attr = self.__getattribute__(attr_name)\n\n if attr.has_changed:\n # Format integer values in hexadecimal without zero-padding\n old_value = f\"{attr.old_value:<18}\" if isinstance(attr.old_value, float) else f\"{attr.old_value:<#16x}\"\n new_value = f\"{attr.new_value:<18}\" if isinstance(attr.new_value, float) else f\"{attr.new_value:<#16x}\"\n # Align output for consistent spacing between old and new values\n str_repr += f\"{attr_name:<15} {old_value} {new_value}\\n\"\n\n return str_repr\n"},{"location":"from_pydoc/generated/snapshots/registers/register_diff_accessor/#libdebug.snapshots.registers.register_diff_accessor.RegisterDiffAccessor.__init__","title":"__init__(generic_regs, special_regs, vec_fp_regs)","text":"Initializes the RegisterDiffAccessor object.
Parameters:
Name Type Description Defaultgeneric_regs list[str] The list of generic registers to include in the repr.
requiredspecial_regs list[str] The list of special registers to include in the repr.
requiredvec_fp_regs list[str] The list of vector and floating point registers to include in the repr.
required Source code in libdebug/snapshots/registers/register_diff_accessor.py def __init__(\n self: RegisterDiffAccessor,\n generic_regs: list[str],\n special_regs: list[str],\n vec_fp_regs: list[str],\n) -> None:\n \"\"\"Initializes the RegisterDiffAccessor object.\n\n Args:\n generic_regs (list[str]): The list of generic registers to include in the repr.\n special_regs (list[str]): The list of special registers to include in the repr.\n vec_fp_regs (list[str]): The list of vector and floating point registers to include in the repr.\n \"\"\"\n self._generic_regs = generic_regs\n self._special_regs = special_regs\n self._vec_fp_regs = vec_fp_regs\n"},{"location":"from_pydoc/generated/snapshots/registers/register_diff_accessor/#libdebug.snapshots.registers.register_diff_accessor.RegisterDiffAccessor.__repr__","title":"__repr__()","text":"Return a string representation of the RegisterDiffAccessor object.
Source code in libdebug/snapshots/registers/register_diff_accessor.py def __repr__(self: RegisterDiffAccessor) -> str:\n \"\"\"Return a string representation of the RegisterDiffAccessor object.\"\"\"\n str_repr = \"RegisterDiffAccessor(\\n\\n\"\n\n # Header with column alignment\n str_repr += \"{:<15} {:<20} {:<20}\\n\".format(\"Register\", \"Old Value\", \"New Value\")\n str_repr += \"-\" * 60 + \"\\n\"\n\n # Log all integer changes\n for attr_name in self._generic_regs:\n attr = self.__getattribute__(attr_name)\n\n if attr.has_changed:\n # Format integer values in hexadecimal without zero-padding\n old_value = f\"{attr.old_value:<18}\" if isinstance(attr.old_value, float) else f\"{attr.old_value:<#16x}\"\n new_value = f\"{attr.new_value:<18}\" if isinstance(attr.new_value, float) else f\"{attr.new_value:<#16x}\"\n # Align output for consistent spacing between old and new values\n str_repr += f\"{attr_name:<15} {old_value} {new_value}\\n\"\n\n return str_repr\n"},{"location":"from_pydoc/generated/snapshots/registers/snapshot_registers/","title":"libdebug.snapshots.registers.snapshot_registers","text":""},{"location":"from_pydoc/generated/snapshots/registers/snapshot_registers/#libdebug.snapshots.registers.snapshot_registers.SnapshotRegisters","title":"SnapshotRegisters","text":" Bases: Registers
Class that holds the state of the architectural-dependent registers of a snapshot.
Source code in libdebug/snapshots/registers/snapshot_registers.py class SnapshotRegisters(Registers):\n \"\"\"Class that holds the state of the architectural-dependent registers of a snapshot.\"\"\"\n\n def __init__(\n self: SnapshotRegisters,\n thread_id: int,\n generic_regs: list[str],\n special_regs: list[str],\n vec_fp_regs: list[str],\n ) -> None:\n \"\"\"Initializes the Registers object.\n\n Args:\n thread_id (int): The thread ID.\n generic_regs (list[str]): The list of registers to include in the repr.\n special_regs (list[str]): The list of special registers to include in the repr.\n vec_fp_regs (list[str]): The list of vector and floating point registers to include in the repr\n \"\"\"\n self._thread_id = thread_id\n self._generic_regs = generic_regs\n self._special_regs = special_regs\n self._vec_fp_regs = vec_fp_regs\n\n def filter(self: SnapshotRegisters, value: float) -> list[str]:\n \"\"\"Filters the registers by value.\n\n Args:\n value (float): The value to search for.\n\n Returns:\n list[str]: A list of names of the registers containing the value.\n \"\"\"\n attributes = self.__dict__\n\n return [attr for attr in attributes if getattr(self, attr) == value]\n"},{"location":"from_pydoc/generated/snapshots/registers/snapshot_registers/#libdebug.snapshots.registers.snapshot_registers.SnapshotRegisters.__init__","title":"__init__(thread_id, generic_regs, special_regs, vec_fp_regs)","text":"Initializes the Registers object.
Parameters:
Name Type Description Defaultthread_id int The thread ID.
requiredgeneric_regs list[str] The list of registers to include in the repr.
requiredspecial_regs list[str] The list of special registers to include in the repr.
requiredvec_fp_regs list[str] The list of vector and floating point registers to include in the repr
required Source code in libdebug/snapshots/registers/snapshot_registers.py def __init__(\n self: SnapshotRegisters,\n thread_id: int,\n generic_regs: list[str],\n special_regs: list[str],\n vec_fp_regs: list[str],\n) -> None:\n \"\"\"Initializes the Registers object.\n\n Args:\n thread_id (int): The thread ID.\n generic_regs (list[str]): The list of registers to include in the repr.\n special_regs (list[str]): The list of special registers to include in the repr.\n vec_fp_regs (list[str]): The list of vector and floating point registers to include in the repr\n \"\"\"\n self._thread_id = thread_id\n self._generic_regs = generic_regs\n self._special_regs = special_regs\n self._vec_fp_regs = vec_fp_regs\n"},{"location":"from_pydoc/generated/snapshots/registers/snapshot_registers/#libdebug.snapshots.registers.snapshot_registers.SnapshotRegisters.filter","title":"filter(value)","text":"Filters the registers by value.
Parameters:
Name Type Description Defaultvalue float The value to search for.
requiredReturns:
Type Descriptionlist[str] list[str]: A list of names of the registers containing the value.
Source code in libdebug/snapshots/registers/snapshot_registers.py def filter(self: SnapshotRegisters, value: float) -> list[str]:\n \"\"\"Filters the registers by value.\n\n Args:\n value (float): The value to search for.\n\n Returns:\n list[str]: A list of names of the registers containing the value.\n \"\"\"\n attributes = self.__dict__\n\n return [attr for attr in attributes if getattr(self, attr) == value]\n"},{"location":"from_pydoc/generated/snapshots/serialization/json_serializer/","title":"libdebug.snapshots.serialization.json_serializer","text":""},{"location":"from_pydoc/generated/snapshots/serialization/json_serializer/#libdebug.snapshots.serialization.json_serializer.JSONSerializer","title":"JSONSerializer","text":"Helper class to serialize and deserialize snapshots using JSON format.
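The `filter` method above is a plain scan of the instance `__dict__`. A standalone sketch of the same lookup (the class name, register names, and values are invented for illustration):

```python
class SnapshotRegistersSketch:
    """Invented stand-in for SnapshotRegisters, keeping only the filter logic."""

    def __init__(self, **regs):
        for name, value in regs.items():
            setattr(self, name, value)

    def filter(self, value):
        # Same idea as the source: return the names of attributes whose value matches
        return [attr for attr in self.__dict__ if getattr(self, attr) == value]


regs = SnapshotRegistersSketch(rax=0x1337, rbx=0x0, rcx=0x1337)
print(regs.filter(0x1337))  # ['rax', 'rcx']
```

This is handy for spotting which registers hold a known address or marker value at snapshot time.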
Source code in libdebug/snapshots/serialization/json_serializer.py class JSONSerializer:\n \"\"\"Helper class to serialize and deserialize snapshots using JSON format.\"\"\"\n\n def load(self: JSONSerializer, file_path: str) -> Snapshot:\n \"\"\"Load a snapshot from a JSON file.\n\n Args:\n file_path (str): The path to the JSON file containing the snapshot.\n\n Returns:\n Snapshot: The loaded snapshot object.\n \"\"\"\n with Path(file_path).open() as file:\n snapshot_dict = json.load(file)\n\n # Determine the type of snapshot\n is_process_snapshot = \"process_id\" in snapshot_dict\n\n # Create a new instance of the appropriate class\n if is_process_snapshot:\n loaded_snap = ProcessSnapshot.__new__(ProcessSnapshot)\n loaded_snap.process_id = snapshot_dict[\"process_id\"]\n loaded_snap.pid = loaded_snap.process_id\n else:\n loaded_snap = ThreadSnapshot.__new__(ThreadSnapshot)\n loaded_snap.thread_id = snapshot_dict[\"thread_id\"]\n loaded_snap.tid = loaded_snap.thread_id\n\n # Basic snapshot info\n loaded_snap.snapshot_id = snapshot_dict[\"snapshot_id\"]\n loaded_snap.arch = snapshot_dict[\"arch\"]\n loaded_snap.name = snapshot_dict[\"name\"]\n loaded_snap.level = snapshot_dict[\"level\"]\n loaded_snap.aslr_enabled = snapshot_dict.get(\"aslr_enabled\")\n loaded_snap._process_full_path = snapshot_dict.get(\"_process_full_path\", None)\n loaded_snap._process_name = snapshot_dict.get(\"_process_name\", None)\n\n # Create a register field for the snapshot\n if not is_process_snapshot:\n loaded_snap.regs = SnapshotRegisters(\n loaded_snap.thread_id,\n snapshot_dict[\"architectural_registers\"][\"generic\"],\n snapshot_dict[\"architectural_registers\"][\"special\"],\n snapshot_dict[\"architectural_registers\"][\"vector_fp\"],\n )\n\n # Load registers\n for reg_name, reg_value in snapshot_dict[\"regs\"].items():\n loaded_snap.regs.__setattr__(reg_name, reg_value)\n\n # Recreate memory maps\n loaded_maps = snapshot_dict[\"maps\"]\n raw_map_list = []\n\n for saved_map in loaded_maps:\n new_map = MemoryMapSnapshot(\n saved_map[\"start\"],\n saved_map[\"end\"],\n saved_map[\"permissions\"],\n saved_map[\"size\"],\n saved_map[\"offset\"],\n saved_map[\"backing_file\"],\n b64decode(saved_map[\"content\"]) if saved_map[\"content\"] is not None else None,\n )\n raw_map_list.append(new_map)\n\n loaded_snap.maps = MemoryMapSnapshotList(\n raw_map_list,\n loaded_snap._process_name,\n loaded_snap._process_full_path,\n )\n\n # Handle threads for ProcessSnapshot\n if is_process_snapshot:\n loaded_snap.threads = []\n for thread_dict in snapshot_dict[\"threads\"]:\n thread_snap = LightweightThreadSnapshot.__new__(LightweightThreadSnapshot)\n thread_snap.snapshot_id = thread_dict[\"snapshot_id\"]\n thread_snap.thread_id = thread_dict[\"thread_id\"]\n thread_snap.tid = thread_snap.thread_id\n thread_snap._proc_snapshot = loaded_snap\n\n thread_snap.regs = SnapshotRegisters(\n thread_snap.thread_id,\n snapshot_dict[\"architectural_registers\"][\"generic\"],\n snapshot_dict[\"architectural_registers\"][\"special\"],\n snapshot_dict[\"architectural_registers\"][\"vector_fp\"],\n )\n\n for reg_name, reg_value in thread_dict[\"regs\"].items():\n thread_snap.regs.__setattr__(reg_name, reg_value)\n\n loaded_snap.threads.append(thread_snap)\n\n # Handle symbols\n raw_loaded_symbols = snapshot_dict.get(\"symbols\", None)\n if raw_loaded_symbols is not None:\n sym_list = [\n Symbol(\n saved_symbol[\"start\"],\n saved_symbol[\"end\"],\n saved_symbol[\"name\"],\n saved_symbol[\"backing_file\"],\n )\n for saved_symbol in raw_loaded_symbols\n ]\n sym_list = SymbolList(sym_list, loaded_snap)\n loaded_snap._memory = SnapshotMemoryView(loaded_snap, sym_list)\n elif loaded_snap.level != \"base\":\n raise ValueError(\"Memory snapshot loading requested but no symbols were saved.\")\n else:\n loaded_snap._memory = None\n\n return loaded_snap\n\n def dump(self: JSONSerializer, snapshot: Snapshot, out_path: str) -> None:\n \"\"\"Dump a snapshot to a JSON file.\n\n Args:\n snapshot (Snapshot): The snapshot to be dumped.\n out_path (str): The path to the output JSON file.\n \"\"\"\n\n def get_register_names(regs: SnapshotRegisters) -> list[str]:\n return [reg_name for reg_name in dir(regs) if isinstance(getattr(regs, reg_name), int | float)]\n\n def save_memory_maps(maps: MemoryMapSnapshotList) -> list[dict]:\n return [\n {\n \"start\": memory_map.start,\n \"end\": memory_map.end,\n \"permissions\": memory_map.permissions,\n \"size\": memory_map.size,\n \"offset\": memory_map.offset,\n \"backing_file\": memory_map.backing_file,\n \"content\": b64encode(memory_map.content).decode(\"utf-8\")\n if memory_map.content is not None\n else None,\n }\n for memory_map in maps\n ]\n\n def save_symbols(memory: SnapshotMemoryView) -> list[dict] | None:\n if memory is None:\n return None\n return [\n {\n \"start\": symbol.start,\n \"end\": symbol.end,\n \"name\": symbol.name,\n \"backing_file\": symbol.backing_file,\n }\n for symbol in memory._symbol_ref\n ]\n\n all_reg_names = get_register_names(snapshot.regs)\n\n serializable_dict = {\n \"type\": \"process\" if hasattr(snapshot, \"threads\") else \"thread\",\n \"arch\": snapshot.arch,\n \"snapshot_id\": snapshot.snapshot_id,\n \"level\": snapshot.level,\n \"name\": snapshot.name,\n \"aslr_enabled\": snapshot.aslr_enabled,\n \"architectural_registers\": {\n \"generic\": snapshot.regs._generic_regs,\n \"special\": snapshot.regs._special_regs,\n \"vector_fp\": snapshot.regs._vec_fp_regs,\n },\n \"maps\": save_memory_maps(snapshot.maps),\n \"symbols\": save_symbols(snapshot._memory),\n }\n\n if hasattr(snapshot, \"threads\"):\n # ProcessSnapshot-specific data\n thread_snapshots = [\n {\n \"snapshot_id\": thread.snapshot_id,\n \"thread_id\": thread.thread_id,\n \"regs\": {reg_name: getattr(thread.regs, reg_name) for reg_name in all_reg_names},\n }\n for thread in snapshot.threads\n ]\n serializable_dict.update(\n {\n \"process_id\": snapshot.process_id,\n \"threads\": thread_snapshots,\n \"_process_full_path\": snapshot._process_full_path,\n \"_process_name\": snapshot._process_name,\n }\n )\n else:\n # ThreadSnapshot-specific data\n serializable_dict.update(\n {\n \"thread_id\": snapshot.thread_id,\n \"regs\": {reg_name: getattr(snapshot.regs, reg_name) for reg_name in all_reg_names},\n \"_process_full_path\": snapshot._process_full_path,\n \"_process_name\": snapshot._process_name,\n }\n )\n\n with Path(out_path).open(\"w\") as file:\n json.dump(serializable_dict, file)\n"},{"location":"from_pydoc/generated/snapshots/serialization/json_serializer/#libdebug.snapshots.serialization.json_serializer.JSONSerializer.dump","title":"dump(snapshot, out_path)","text":"Dump a snapshot to a JSON file.
Parameters:
Name Type Description Defaultsnapshot Snapshot The snapshot to be dumped.
requiredout_path str The path to the output JSON file.
required Source code in libdebug/snapshots/serialization/json_serializer.py def dump(self: JSONSerializer, snapshot: Snapshot, out_path: str) -> None:\n \"\"\"Dump a snapshot to a JSON file.\n\n Args:\n snapshot (Snapshot): The snapshot to be dumped.\n out_path (str): The path to the output JSON file.\n \"\"\"\n\n def get_register_names(regs: SnapshotRegisters) -> list[str]:\n return [reg_name for reg_name in dir(regs) if isinstance(getattr(regs, reg_name), int | float)]\n\n def save_memory_maps(maps: MemoryMapSnapshotList) -> list[dict]:\n return [\n {\n \"start\": memory_map.start,\n \"end\": memory_map.end,\n \"permissions\": memory_map.permissions,\n \"size\": memory_map.size,\n \"offset\": memory_map.offset,\n \"backing_file\": memory_map.backing_file,\n \"content\": b64encode(memory_map.content).decode(\"utf-8\")\n if memory_map.content is not None\n else None,\n }\n for memory_map in maps\n ]\n\n def save_symbols(memory: SnapshotMemoryView) -> list[dict] | None:\n if memory is None:\n return None\n return [\n {\n \"start\": symbol.start,\n \"end\": symbol.end,\n \"name\": symbol.name,\n \"backing_file\": symbol.backing_file,\n }\n for symbol in memory._symbol_ref\n ]\n\n all_reg_names = get_register_names(snapshot.regs)\n\n serializable_dict = {\n \"type\": \"process\" if hasattr(snapshot, \"threads\") else \"thread\",\n \"arch\": snapshot.arch,\n \"snapshot_id\": snapshot.snapshot_id,\n \"level\": snapshot.level,\n \"name\": snapshot.name,\n \"aslr_enabled\": snapshot.aslr_enabled,\n \"architectural_registers\": {\n \"generic\": snapshot.regs._generic_regs,\n \"special\": snapshot.regs._special_regs,\n \"vector_fp\": snapshot.regs._vec_fp_regs,\n },\n \"maps\": save_memory_maps(snapshot.maps),\n \"symbols\": save_symbols(snapshot._memory),\n }\n\n if hasattr(snapshot, \"threads\"):\n # ProcessSnapshot-specific data\n thread_snapshots = [\n {\n \"snapshot_id\": thread.snapshot_id,\n \"thread_id\": thread.thread_id,\n \"regs\": {reg_name: getattr(thread.regs, reg_name) for reg_name in all_reg_names},\n }\n for thread in snapshot.threads\n ]\n serializable_dict.update(\n {\n \"process_id\": snapshot.process_id,\n \"threads\": thread_snapshots,\n \"_process_full_path\": snapshot._process_full_path,\n \"_process_name\": snapshot._process_name,\n }\n )\n else:\n # ThreadSnapshot-specific data\n serializable_dict.update(\n {\n \"thread_id\": snapshot.thread_id,\n \"regs\": {reg_name: getattr(snapshot.regs, reg_name) for reg_name in all_reg_names},\n \"_process_full_path\": snapshot._process_full_path,\n \"_process_name\": snapshot._process_name,\n }\n )\n\n with Path(out_path).open(\"w\") as file:\n json.dump(serializable_dict, file)\n"},{"location":"from_pydoc/generated/snapshots/serialization/json_serializer/#libdebug.snapshots.serialization.json_serializer.JSONSerializer.load","title":"load(file_path)","text":"Load a snapshot from a JSON file.
Parameters:
Name Type Description Defaultfile_path str The path to the JSON file containing the snapshot.
requiredReturns:
Name Type DescriptionSnapshot Snapshot The loaded snapshot object.
Source code in libdebug/snapshots/serialization/json_serializer.py def load(self: JSONSerializer, file_path: str) -> Snapshot:\n \"\"\"Load a snapshot from a JSON file.\n\n Args:\n file_path (str): The path to the JSON file containing the snapshot.\n\n Returns:\n Snapshot: The loaded snapshot object.\n \"\"\"\n with Path(file_path).open() as file:\n snapshot_dict = json.load(file)\n\n # Determine the type of snapshot\n is_process_snapshot = \"process_id\" in snapshot_dict\n\n # Create a new instance of the appropriate class\n if is_process_snapshot:\n loaded_snap = ProcessSnapshot.__new__(ProcessSnapshot)\n loaded_snap.process_id = snapshot_dict[\"process_id\"]\n loaded_snap.pid = loaded_snap.process_id\n else:\n loaded_snap = ThreadSnapshot.__new__(ThreadSnapshot)\n loaded_snap.thread_id = snapshot_dict[\"thread_id\"]\n loaded_snap.tid = loaded_snap.thread_id\n\n # Basic snapshot info\n loaded_snap.snapshot_id = snapshot_dict[\"snapshot_id\"]\n loaded_snap.arch = snapshot_dict[\"arch\"]\n loaded_snap.name = snapshot_dict[\"name\"]\n loaded_snap.level = snapshot_dict[\"level\"]\n loaded_snap.aslr_enabled = snapshot_dict.get(\"aslr_enabled\")\n loaded_snap._process_full_path = snapshot_dict.get(\"_process_full_path\", None)\n loaded_snap._process_name = snapshot_dict.get(\"_process_name\", None)\n\n # Create a register field for the snapshot\n if not is_process_snapshot:\n loaded_snap.regs = SnapshotRegisters(\n loaded_snap.thread_id,\n snapshot_dict[\"architectural_registers\"][\"generic\"],\n snapshot_dict[\"architectural_registers\"][\"special\"],\n snapshot_dict[\"architectural_registers\"][\"vector_fp\"],\n )\n\n # Load registers\n for reg_name, reg_value in snapshot_dict[\"regs\"].items():\n loaded_snap.regs.__setattr__(reg_name, reg_value)\n\n # Recreate memory maps\n loaded_maps = snapshot_dict[\"maps\"]\n raw_map_list = []\n\n for saved_map in loaded_maps:\n new_map = MemoryMapSnapshot(\n saved_map[\"start\"],\n saved_map[\"end\"],\n saved_map[\"permissions\"],\n saved_map[\"size\"],\n saved_map[\"offset\"],\n saved_map[\"backing_file\"],\n b64decode(saved_map[\"content\"]) if saved_map[\"content\"] is not None else None,\n )\n raw_map_list.append(new_map)\n\n loaded_snap.maps = MemoryMapSnapshotList(\n raw_map_list,\n loaded_snap._process_name,\n loaded_snap._process_full_path,\n )\n\n # Handle threads for ProcessSnapshot\n if is_process_snapshot:\n loaded_snap.threads = []\n for thread_dict in snapshot_dict[\"threads\"]:\n thread_snap = LightweightThreadSnapshot.__new__(LightweightThreadSnapshot)\n thread_snap.snapshot_id = thread_dict[\"snapshot_id\"]\n thread_snap.thread_id = thread_dict[\"thread_id\"]\n thread_snap.tid = thread_snap.thread_id\n thread_snap._proc_snapshot = loaded_snap\n\n thread_snap.regs = SnapshotRegisters(\n thread_snap.thread_id,\n snapshot_dict[\"architectural_registers\"][\"generic\"],\n snapshot_dict[\"architectural_registers\"][\"special\"],\n snapshot_dict[\"architectural_registers\"][\"vector_fp\"],\n )\n\n for reg_name, reg_value in thread_dict[\"regs\"].items():\n thread_snap.regs.__setattr__(reg_name, reg_value)\n\n loaded_snap.threads.append(thread_snap)\n\n # Handle symbols\n raw_loaded_symbols = snapshot_dict.get(\"symbols\", None)\n if raw_loaded_symbols is not None:\n sym_list = [\n Symbol(\n saved_symbol[\"start\"],\n saved_symbol[\"end\"],\n saved_symbol[\"name\"],\n saved_symbol[\"backing_file\"],\n )\n for saved_symbol in raw_loaded_symbols\n ]\n sym_list = SymbolList(sym_list, loaded_snap)\n loaded_snap._memory = SnapshotMemoryView(loaded_snap, sym_list)\n elif loaded_snap.level != \"base\":\n raise ValueError(\"Memory snapshot loading requested but no symbols were saved.\")\n else:\n loaded_snap._memory = None\n\n return loaded_snap\n"},{"location":"from_pydoc/generated/snapshots/serialization/serialization_helper/","title":"libdebug.snapshots.serialization.serialization_helper","text":""},{"location":"from_pydoc/generated/snapshots/serialization/serialization_helper/#libdebug.snapshots.serialization.serialization_helper.SerializationHelper","title":"SerializationHelper","text":"Helper class to serialize and deserialize snapshots.
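Note in the `JSONSerializer` sources above that memory map contents travel through JSON as base64 strings: `b64encode(...).decode("utf-8")` on dump, `b64decode(...)` on load. A self-contained round trip of just that encoding step (the page bytes are invented sample data):

```python
import json
from base64 import b64decode, b64encode

# Invented sample of raw map content (e.g. the start of an ELF header)
page = b"\x7fELF\x02\x01\x01\x00"

# Dump side: bytes -> base64 text, so json.dumps can serialize it
serialized = json.dumps({"content": b64encode(page).decode("utf-8")})

# Load side: base64 text -> the original bytes
restored = b64decode(json.loads(serialized)["content"])
print(restored == page)  # True
```

Base64 is used because JSON has no native bytes type; the cost is roughly a 4/3 size overhead on stored map contents.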
Source code in libdebug/snapshots/serialization/serialization_helper.py class SerializationHelper:\n \"\"\"Helper class to serialize and deserialize snapshots.\"\"\"\n\n def load(self: SerializationHelper, file_path: str) -> Snapshot:\n \"\"\"Load a snapshot from a file.\n\n Args:\n file_path (str): The path to the file containing the snapshot.\n\n Returns:\n Snapshot: The loaded snapshot object.\n \"\"\"\n if not file_path.endswith(\".json\"):\n liblog.warning(\"The target file doesn't have a JSON extension. The output will be assumed JSON.\")\n\n # Future code can select the serializer\n # Currently, only JSON is supported\n serializer_type = SupportedSerializers.JSON\n\n serializer = serializer_type.serializer_class()\n\n return serializer.load(file_path)\n\n def save(self: SerializationHelper, snapshot: Snapshot, out_path: str) -> None:\n \"\"\"Dump a snapshot to a file.\n\n Args:\n snapshot (Snapshot): The snapshot to be dumped.\n out_path (str): The path to the output file.\n \"\"\"\n if not out_path.endswith(\".json\"):\n liblog.warning(\"The target file doesn't have a JSON extension. The output will be assumed JSON.\")\n\n # Future code can select the serializer\n # Currently, only JSON is supported\n serializer_type = SupportedSerializers.JSON\n\n serializer = serializer_type.serializer_class()\n\n serializer.dump(snapshot, out_path)\n"},{"location":"from_pydoc/generated/snapshots/serialization/serialization_helper/#libdebug.snapshots.serialization.serialization_helper.SerializationHelper.load","title":"load(file_path)","text":"Load a snapshot from a file.
Parameters:
Name Type Description Defaultfile_path str The path to the file containing the snapshot.
requiredReturns:
Name Type DescriptionSnapshot Snapshot The loaded snapshot object.
Source code in libdebug/snapshots/serialization/serialization_helper.py def load(self: SerializationHelper, file_path: str) -> Snapshot:\n \"\"\"Load a snapshot from a file.\n\n Args:\n file_path (str): The path to the file containing the snapshot.\n\n Returns:\n Snapshot: The loaded snapshot object.\n \"\"\"\n if not file_path.endswith(\".json\"):\n liblog.warning(\"The target file doesn't have a JSON extension. The output will be assumed JSON.\")\n\n # Future code can select the serializer\n # Currently, only JSON is supported\n serializer_type = SupportedSerializers.JSON\n\n serializer = serializer_type.serializer_class()\n\n return serializer.load(file_path)\n"},{"location":"from_pydoc/generated/snapshots/serialization/serialization_helper/#libdebug.snapshots.serialization.serialization_helper.SerializationHelper.save","title":"save(snapshot, out_path)","text":"Dump a snapshot to a file.
Parameters:
Name Type Description Defaultsnapshot Snapshot The snapshot to be dumped.
requiredout_path str The path to the output file.
required Source code in libdebug/snapshots/serialization/serialization_helper.py def save(self: SerializationHelper, snapshot: Snapshot, out_path: str) -> None:\n \"\"\"Dump a snapshot to a file.\n\n Args:\n snapshot (Snapshot): The snapshot to be dumped.\n out_path (str): The path to the output file.\n \"\"\"\n if not out_path.endswith(\".json\"):\n liblog.warning(\"The target file doesn't have a JSON extension. The output will be assumed JSON.\")\n\n # Future code can select the serializer\n # Currently, only JSON is supported\n serializer_type = SupportedSerializers.JSON\n\n serializer = serializer_type.serializer_class()\n\n serializer.dump(snapshot, out_path)\n"},{"location":"from_pydoc/generated/snapshots/serialization/serializer/","title":"libdebug.snapshots.serialization.serializer","text":""},{"location":"from_pydoc/generated/snapshots/serialization/serializer/#libdebug.snapshots.serialization.serializer.AbstractSerializer","title":"AbstractSerializer","text":" Bases: ABC
Helper class to serialize and deserialize snapshots.
Source code in libdebug/snapshots/serialization/serializer.py class AbstractSerializer(ABC):\n \"\"\"Helper class to serialize and deserialize snapshots.\"\"\"\n\n @abstractmethod\n def load(self: AbstractSerializer, file_path: str) -> Snapshot:\n \"\"\"Load a snapshot from a file.\n\n Args:\n file_path (str): The path to the file containing the snapshot.\n\n Returns:\n Snapshot: The loaded snapshot object.\n \"\"\"\n\n @abstractmethod\n def dump(self: AbstractSerializer, snapshot: Snapshot, out_path: str) -> None:\n \"\"\"Dump a snapshot to a file.\n\n Args:\n snapshot (Snapshot): The snapshot to be dumped.\n out_path (str): The path to the output file.\n \"\"\"\n"},{"location":"from_pydoc/generated/snapshots/serialization/serializer/#libdebug.snapshots.serialization.serializer.AbstractSerializer.dump","title":"dump(snapshot, out_path) abstractmethod","text":"Dump a snapshot to a file.
Parameters:
Name Type Description Defaultsnapshot Snapshot The snapshot to be dumped.
requiredout_path str The path to the output file.
required Source code in libdebug/snapshots/serialization/serializer.py @abstractmethod\ndef dump(self: AbstractSerializer, snapshot: Snapshot, out_path: str) -> None:\n \"\"\"Dump a snapshot to a file.\n\n Args:\n snapshot (Snapshot): The snapshot to be dumped.\n out_path (str): The path to the output file.\n \"\"\"\n"},{"location":"from_pydoc/generated/snapshots/serialization/serializer/#libdebug.snapshots.serialization.serializer.AbstractSerializer.load","title":"load(file_path) abstractmethod","text":"Load a snapshot from a file.
Parameters:
Name Type Description Default
file_path str The path to the file containing the snapshot. required
Returns:
Name Type Description
Snapshot Snapshot The loaded snapshot object.
Source code inlibdebug/snapshots/serialization/serializer.py @abstractmethod\ndef load(self: AbstractSerializer, file_path: str) -> Snapshot:\n \"\"\"Load a snapshot from a file.\n\n Args:\n file_path (str): The path to the file containing the snapshot.\n\n Returns:\n Snapshot: The loaded snapshot object.\n \"\"\"\n"},{"location":"from_pydoc/generated/snapshots/serialization/supported_serializers/","title":"libdebug.snapshots.serialization.supported_serializers","text":""},{"location":"from_pydoc/generated/snapshots/serialization/supported_serializers/#libdebug.snapshots.serialization.supported_serializers.SupportedSerializers","title":"SupportedSerializers","text":" Bases: Enum
Enumeration of supported serializers for snapshots.
Source code inlibdebug/snapshots/serialization/supported_serializers.py class SupportedSerializers(Enum):\n \"\"\"Enumeration of supported serializers for snapshots.\"\"\"\n JSON = JSONSerializer\n\n @property\n def serializer_class(self: SupportedSerializers) -> AbstractSerializer:\n \"\"\"Return the serializer class.\"\"\"\n return self.value\n"},{"location":"from_pydoc/generated/snapshots/serialization/supported_serializers/#libdebug.snapshots.serialization.supported_serializers.SupportedSerializers.serializer_class","title":"serializer_class property","text":"Return the serializer class.
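The pattern above — an `Enum` whose member values are serializer classes, exposed through a `serializer_class` property — works because plain classes are not descriptors, so they become ordinary enum member values. A minimal standalone sketch of the same idea (`JSONSerializerStub` is a hypothetical stand-in, not libdebug's actual `JSONSerializer`):

```python
from abc import ABC, abstractmethod
from enum import Enum


class AbstractSerializer(ABC):
    """Interface mirroring libdebug's AbstractSerializer."""

    @abstractmethod
    def dump(self, snapshot, out_path): ...

    @abstractmethod
    def load(self, file_path): ...


class JSONSerializerStub(AbstractSerializer):
    """Hypothetical stand-in for a concrete JSON serializer."""

    def dump(self, snapshot, out_path):
        return f"would dump {snapshot!r} to {out_path}"

    def load(self, file_path):
        return {"loaded_from": file_path}


class SupportedSerializers(Enum):
    """Enum whose member values are serializer classes."""

    JSON = JSONSerializerStub

    @property
    def serializer_class(self):
        # The member value is the class itself; just return it.
        return self.value


serializer = SupportedSerializers.JSON.serializer_class()
```

Future formats would be added as new enum members; call sites like `SerializationHelper.save` stay unchanged because they only go through `serializer_class`.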
"},{"location":"from_pydoc/generated/snapshots/thread/lw_thread_snapshot/","title":"libdebug.snapshots.thread.lw_thread_snapshot","text":""},{"location":"from_pydoc/generated/snapshots/thread/lw_thread_snapshot/#libdebug.snapshots.thread.lw_thread_snapshot.LightweightThreadSnapshot","title":"LightweightThreadSnapshot","text":" Bases: ThreadSnapshot
This object represents a snapshot of the target thread. It must be initialized by a ProcessSnapshot, which populates its properties with shared process state. It holds information about a thread's state.
Snapshot levels:
- base: Registers
- writable: Registers, writable memory contents
- full: Registers, all readable memory contents
Source code inlibdebug/snapshots/thread/lw_thread_snapshot.py class LightweightThreadSnapshot(ThreadSnapshot):\n \"\"\"This object represents a snapshot of the target thread. It has to be initialized by a ProcessSnapshot, since it initializes its properties with shared process state. It holds information about a thread's state.\n\n Snapshot levels:\n - base: Registers\n - writable: Registers, writable memory contents\n - full: Registers, all readable memory contents\n \"\"\"\n\n def __init__(\n self: LightweightThreadSnapshot,\n thread: ThreadContext,\n process_snapshot: ProcessSnapshot,\n ) -> None:\n \"\"\"Creates a new snapshot object for the given thread.\n\n Args:\n thread (ThreadContext): The thread to take a snapshot of.\n process_snapshot (ProcessSnapshot): The process snapshot to which the thread belongs.\n \"\"\"\n # Set id of the snapshot and increment the counter\n self.snapshot_id = thread._snapshot_count\n thread.notify_snapshot_taken()\n\n # Basic snapshot info\n self.thread_id = thread.thread_id\n self.tid = thread.tid\n\n # If there is a name, append the thread id\n if process_snapshot.name is None:\n self.name = None\n else:\n self.name = f\"{process_snapshot.name} - Thread {self.tid}\"\n\n # Get thread registers\n self._save_regs(thread)\n\n self._proc_snapshot = process_snapshot\n\n @property\n def level(self: LightweightThreadSnapshot) -> str:\n \"\"\"Returns the snapshot level.\"\"\"\n return self._proc_snapshot.level\n\n @property\n def arch(self: LightweightThreadSnapshot) -> str:\n \"\"\"Returns the architecture of the thread snapshot.\"\"\"\n return self._proc_snapshot.arch\n\n @property\n def maps(self: LightweightThreadSnapshot) -> MemoryMapSnapshotList:\n \"\"\"Returns the memory map snapshot list associated with the process snapshot.\"\"\"\n return self._proc_snapshot.maps\n\n @property\n def _memory(self: LightweightThreadSnapshot) -> SnapshotMemoryView:\n \"\"\"Returns the memory view associated with the process snapshot.\"\"\"\n 
return self._proc_snapshot._memory\n"},{"location":"from_pydoc/generated/snapshots/thread/lw_thread_snapshot/#libdebug.snapshots.thread.lw_thread_snapshot.LightweightThreadSnapshot._memory","title":"_memory property","text":"Returns the memory view associated with the process snapshot.
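The delegation used by the lightweight snapshot can be isolated into a small sketch: per-thread data lives on the thread snapshot itself, while process-wide state (`level`, `arch`, memory, maps) is read through properties from the owning process snapshot. The class names here are illustrative stand-ins, not libdebug's real classes:

```python
class ProcessSnapshotStub:
    """Stand-in holding the process-wide state shared by all threads."""

    def __init__(self, level, arch):
        self.level = level
        self.arch = arch


class LightweightThreadSnapshotStub:
    """Stores only per-thread data; everything shared is delegated."""

    def __init__(self, tid, process_snapshot):
        self.tid = tid
        self._proc_snapshot = process_snapshot

    @property
    def level(self):
        # Delegated: the level belongs to the whole process snapshot.
        return self._proc_snapshot.level

    @property
    def arch(self):
        return self._proc_snapshot.arch


proc = ProcessSnapshotStub(level="writable", arch="amd64")
thread_snap = LightweightThreadSnapshotStub(tid=1234, process_snapshot=proc)
```

This keeps any number of thread snapshots from duplicating the single copy of process-wide data held by the process snapshot.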
"},{"location":"from_pydoc/generated/snapshots/thread/lw_thread_snapshot/#libdebug.snapshots.thread.lw_thread_snapshot.LightweightThreadSnapshot.arch","title":"arch property","text":"Returns the architecture of the thread snapshot.
"},{"location":"from_pydoc/generated/snapshots/thread/lw_thread_snapshot/#libdebug.snapshots.thread.lw_thread_snapshot.LightweightThreadSnapshot.level","title":"level property","text":"Returns the snapshot level.
"},{"location":"from_pydoc/generated/snapshots/thread/lw_thread_snapshot/#libdebug.snapshots.thread.lw_thread_snapshot.LightweightThreadSnapshot.maps","title":"maps property","text":"Returns the memory map snapshot list associated with the process snapshot.
"},{"location":"from_pydoc/generated/snapshots/thread/lw_thread_snapshot/#libdebug.snapshots.thread.lw_thread_snapshot.LightweightThreadSnapshot.__init__","title":"__init__(thread, process_snapshot)","text":"Creates a new snapshot object for the given thread.
Parameters:
Name Type Description Default
thread ThreadContext The thread to take a snapshot of. required
process_snapshot ProcessSnapshot The process snapshot to which the thread belongs.
required Source code inlibdebug/snapshots/thread/lw_thread_snapshot.py def __init__(\n self: LightweightThreadSnapshot,\n thread: ThreadContext,\n process_snapshot: ProcessSnapshot,\n) -> None:\n \"\"\"Creates a new snapshot object for the given thread.\n\n Args:\n thread (ThreadContext): The thread to take a snapshot of.\n process_snapshot (ProcessSnapshot): The process snapshot to which the thread belongs.\n \"\"\"\n # Set id of the snapshot and increment the counter\n self.snapshot_id = thread._snapshot_count\n thread.notify_snapshot_taken()\n\n # Basic snapshot info\n self.thread_id = thread.thread_id\n self.tid = thread.tid\n\n # If there is a name, append the thread id\n if process_snapshot.name is None:\n self.name = None\n else:\n self.name = f\"{process_snapshot.name} - Thread {self.tid}\"\n\n # Get thread registers\n self._save_regs(thread)\n\n self._proc_snapshot = process_snapshot\n"},{"location":"from_pydoc/generated/snapshots/thread/lw_thread_snapshot_diff/","title":"libdebug.snapshots.thread.lw_thread_snapshot_diff","text":""},{"location":"from_pydoc/generated/snapshots/thread/lw_thread_snapshot_diff/#libdebug.snapshots.thread.lw_thread_snapshot_diff.LightweightThreadSnapshotDiff","title":"LightweightThreadSnapshotDiff","text":" Bases: ThreadSnapshotDiff
This object represents a diff between thread snapshots.
Source code inlibdebug/snapshots/thread/lw_thread_snapshot_diff.py class LightweightThreadSnapshotDiff(ThreadSnapshotDiff):\n \"\"\"This object represents a diff between thread snapshots.\"\"\"\n\n def __init__(\n self: LightweightThreadSnapshotDiff,\n snapshot1: ThreadSnapshot,\n snapshot2: ThreadSnapshot,\n process_diff: ProcessSnapshotDiff,\n ) -> ThreadSnapshotDiff:\n \"\"\"Returns a diff between given snapshots of the same thread.\n\n Args:\n snapshot1 (ThreadSnapshot): A thread snapshot.\n snapshot2 (ThreadSnapshot): A thread snapshot.\n process_diff (ProcessSnapshotDiff): The diff of the process to which the thread belongs.\n \"\"\"\n # Generic diff initialization\n Diff.__init__(self, snapshot1, snapshot2)\n\n # Register diffs\n self._save_reg_diffs()\n\n self._proc_diff = process_diff\n\n @property\n def maps(self: LightweightThreadSnapshotDiff) -> list[MemoryMapDiff]:\n \"\"\"Return the memory map diff.\"\"\"\n return self._proc_diff.maps\n"},{"location":"from_pydoc/generated/snapshots/thread/lw_thread_snapshot_diff/#libdebug.snapshots.thread.lw_thread_snapshot_diff.LightweightThreadSnapshotDiff.maps","title":"maps property","text":"Return the memory map diff.
"},{"location":"from_pydoc/generated/snapshots/thread/lw_thread_snapshot_diff/#libdebug.snapshots.thread.lw_thread_snapshot_diff.LightweightThreadSnapshotDiff.__init__","title":"__init__(snapshot1, snapshot2, process_diff)","text":"Returns a diff between given snapshots of the same thread.
Parameters:
Name Type Description Default
snapshot1 ThreadSnapshot A thread snapshot. required
snapshot2 ThreadSnapshot A thread snapshot. required
process_diff ProcessSnapshotDiff The diff of the process to which the thread belongs.
required Source code inlibdebug/snapshots/thread/lw_thread_snapshot_diff.py def __init__(\n self: LightweightThreadSnapshotDiff,\n snapshot1: ThreadSnapshot,\n snapshot2: ThreadSnapshot,\n process_diff: ProcessSnapshotDiff,\n) -> ThreadSnapshotDiff:\n \"\"\"Returns a diff between given snapshots of the same thread.\n\n Args:\n snapshot1 (ThreadSnapshot): A thread snapshot.\n snapshot2 (ThreadSnapshot): A thread snapshot.\n process_diff (ProcessSnapshotDiff): The diff of the process to which the thread belongs.\n \"\"\"\n # Generic diff initialization\n Diff.__init__(self, snapshot1, snapshot2)\n\n # Register diffs\n self._save_reg_diffs()\n\n self._proc_diff = process_diff\n"},{"location":"from_pydoc/generated/snapshots/thread/thread_snapshot/","title":"libdebug.snapshots.thread.thread_snapshot","text":""},{"location":"from_pydoc/generated/snapshots/thread/thread_snapshot/#libdebug.snapshots.thread.thread_snapshot.ThreadSnapshot","title":"ThreadSnapshot","text":" Bases: Snapshot
This object represents a snapshot of the target thread. It holds information about a thread's state.
Snapshot levels:
- base: Registers
- writable: Registers, writable memory contents
- full: Registers, all readable memory contents
Source code inlibdebug/snapshots/thread/thread_snapshot.py class ThreadSnapshot(Snapshot):\n \"\"\"This object represents a snapshot of the target thread. It holds information about a thread's state.\n\n Snapshot levels:\n - base: Registers\n - writable: Registers, writable memory contents\n - full: Registers, all readable memory contents\n \"\"\"\n\n def __init__(self: ThreadSnapshot, thread: ThreadContext, level: str = \"base\", name: str | None = None) -> None:\n \"\"\"Creates a new snapshot object for the given thread.\n\n Args:\n thread (ThreadContext): The thread to take a snapshot of.\n level (str, optional): The level of the snapshot. Defaults to \"base\".\n name (str, optional): A name associated to the snapshot. Defaults to None.\n \"\"\"\n # Set id of the snapshot and increment the counter\n self.snapshot_id = thread._snapshot_count\n thread.notify_snapshot_taken()\n\n # Basic snapshot info\n self.thread_id = thread.thread_id\n self.tid = thread.tid\n self.name = name\n self.level = level\n self.arch = thread._internal_debugger.arch\n self.aslr_enabled = thread._internal_debugger.aslr_enabled\n self._process_full_path = thread.debugger._internal_debugger._process_full_path\n self._process_name = thread.debugger._internal_debugger._process_name\n self._serialization_helper = thread._internal_debugger.serialization_helper\n\n # Get thread registers\n self._save_regs(thread)\n\n # Memory maps\n match level:\n case \"base\":\n map_list = []\n\n for curr_map in thread.debugger.maps:\n saved_map = MemoryMapSnapshot(\n start=curr_map.start,\n end=curr_map.end,\n permissions=curr_map.permissions,\n size=curr_map.size,\n offset=curr_map.offset,\n backing_file=curr_map.backing_file,\n content=None,\n )\n map_list.append(saved_map)\n\n self.maps = MemoryMapSnapshotList(map_list, self._process_name, self._process_full_path)\n\n self._memory = None\n case \"writable\":\n if not thread.debugger.fast_memory:\n liblog.warning(\n \"Memory snapshot requested but fast 
memory is not enabled. This will take a long time.\",\n )\n\n # Save all writable memory pages\n self._save_memory_maps(thread.debugger._internal_debugger, writable_only=True)\n\n self._memory = SnapshotMemoryView(self, thread.debugger.symbols)\n case \"full\":\n if not thread.debugger.fast_memory:\n liblog.warning(\n \"Memory snapshot requested but fast memory is not enabled. This will take a long time.\",\n )\n\n # Save all memory pages\n self._save_memory_maps(thread._internal_debugger, writable_only=False)\n\n self._memory = SnapshotMemoryView(self, thread.debugger.symbols)\n case _:\n raise ValueError(f\"Invalid snapshot level {level}\")\n\n # Log the creation of the snapshot\n named_addition = \" named \" + self.name if name is not None else \"\"\n liblog.debugger(\n f\"Created snapshot {self.snapshot_id} of level {self.level} for thread {self.tid}{named_addition}\",\n )\n\n def diff(self: ThreadSnapshot, other: ThreadSnapshot) -> Diff:\n \"\"\"Creates a diff object between two snapshots.\"\"\"\n if not isinstance(other, ThreadSnapshot):\n raise TypeError(\"Both arguments must be ThreadSnapshot objects.\")\n\n return ThreadSnapshotDiff(self, other)\n"},{"location":"from_pydoc/generated/snapshots/thread/thread_snapshot/#libdebug.snapshots.thread.thread_snapshot.ThreadSnapshot.__init__","title":"__init__(thread, level='base', name=None)","text":"Creates a new snapshot object for the given thread.
Parameters:
Name Type Description Default
thread ThreadContext The thread to take a snapshot of. required
level str The level of the snapshot. Defaults to \"base\". 'base'
name str A name associated to the snapshot. Defaults to None.
None Source code in libdebug/snapshots/thread/thread_snapshot.py def __init__(self: ThreadSnapshot, thread: ThreadContext, level: str = \"base\", name: str | None = None) -> None:\n \"\"\"Creates a new snapshot object for the given thread.\n\n Args:\n thread (ThreadContext): The thread to take a snapshot of.\n level (str, optional): The level of the snapshot. Defaults to \"base\".\n name (str, optional): A name associated to the snapshot. Defaults to None.\n \"\"\"\n # Set id of the snapshot and increment the counter\n self.snapshot_id = thread._snapshot_count\n thread.notify_snapshot_taken()\n\n # Basic snapshot info\n self.thread_id = thread.thread_id\n self.tid = thread.tid\n self.name = name\n self.level = level\n self.arch = thread._internal_debugger.arch\n self.aslr_enabled = thread._internal_debugger.aslr_enabled\n self._process_full_path = thread.debugger._internal_debugger._process_full_path\n self._process_name = thread.debugger._internal_debugger._process_name\n self._serialization_helper = thread._internal_debugger.serialization_helper\n\n # Get thread registers\n self._save_regs(thread)\n\n # Memory maps\n match level:\n case \"base\":\n map_list = []\n\n for curr_map in thread.debugger.maps:\n saved_map = MemoryMapSnapshot(\n start=curr_map.start,\n end=curr_map.end,\n permissions=curr_map.permissions,\n size=curr_map.size,\n offset=curr_map.offset,\n backing_file=curr_map.backing_file,\n content=None,\n )\n map_list.append(saved_map)\n\n self.maps = MemoryMapSnapshotList(map_list, self._process_name, self._process_full_path)\n\n self._memory = None\n case \"writable\":\n if not thread.debugger.fast_memory:\n liblog.warning(\n \"Memory snapshot requested but fast memory is not enabled. 
This will take a long time.\",\n )\n\n # Save all writable memory pages\n self._save_memory_maps(thread.debugger._internal_debugger, writable_only=True)\n\n self._memory = SnapshotMemoryView(self, thread.debugger.symbols)\n case \"full\":\n if not thread.debugger.fast_memory:\n liblog.warning(\n \"Memory snapshot requested but fast memory is not enabled. This will take a long time.\",\n )\n\n # Save all memory pages\n self._save_memory_maps(thread._internal_debugger, writable_only=False)\n\n self._memory = SnapshotMemoryView(self, thread.debugger.symbols)\n case _:\n raise ValueError(f\"Invalid snapshot level {level}\")\n\n # Log the creation of the snapshot\n named_addition = \" named \" + self.name if name is not None else \"\"\n liblog.debugger(\n f\"Created snapshot {self.snapshot_id} of level {self.level} for thread {self.tid}{named_addition}\",\n )\n"},{"location":"from_pydoc/generated/snapshots/thread/thread_snapshot/#libdebug.snapshots.thread.thread_snapshot.ThreadSnapshot.diff","title":"diff(other)","text":"Creates a diff object between two snapshots.
Source code inlibdebug/snapshots/thread/thread_snapshot.py def diff(self: ThreadSnapshot, other: ThreadSnapshot) -> Diff:\n \"\"\"Creates a diff object between two snapshots.\"\"\"\n if not isinstance(other, ThreadSnapshot):\n raise TypeError(\"Both arguments must be ThreadSnapshot objects.\")\n\n return ThreadSnapshotDiff(self, other)\n"},{"location":"from_pydoc/generated/snapshots/thread/thread_snapshot_diff/","title":"libdebug.snapshots.thread.thread_snapshot_diff","text":""},{"location":"from_pydoc/generated/snapshots/thread/thread_snapshot_diff/#libdebug.snapshots.thread.thread_snapshot_diff.ThreadSnapshotDiff","title":"ThreadSnapshotDiff","text":" Bases: Diff
This object represents a diff between thread snapshots.
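Conceptually, the register part of such a diff is a comparison of two register states that keeps only the entries whose values changed. A standalone dict-based sketch (libdebug's actual `_save_reg_diffs` works on its register holder objects, not plain dicts):

```python
from dataclasses import dataclass


@dataclass
class RegisterDiff:
    """One changed register between two snapshots (illustrative)."""

    name: str
    old_value: int
    new_value: int


def diff_registers(regs1: dict, regs2: dict) -> list[RegisterDiff]:
    """Keep only the registers whose values differ between snapshots."""
    return [
        RegisterDiff(name, regs1[name], regs2[name])
        for name in regs1
        if name in regs2 and regs1[name] != regs2[name]
    ]


before = {"rip": 0x401000, "rsp": 0x7FFD0000, "rax": 0}
after = {"rip": 0x401005, "rsp": 0x7FFD0000, "rax": 60}
changed = diff_registers(before, after)
```

The ASLR warning in the source below exists for exactly this reason: with ASLR enabled, most pointer-valued registers and map addresses differ between runs, so nearly everything shows up as changed.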
Source code inlibdebug/snapshots/thread/thread_snapshot_diff.py class ThreadSnapshotDiff(Diff):\n \"\"\"This object represents a diff between thread snapshots.\"\"\"\n\n def __init__(self: ThreadSnapshotDiff, snapshot1: ThreadSnapshot, snapshot2: ThreadSnapshot) -> ThreadSnapshotDiff:\n \"\"\"Returns a diff between given snapshots of the same thread.\n\n Args:\n snapshot1 (ThreadSnapshot): A thread snapshot.\n snapshot2 (ThreadSnapshot): A thread snapshot.\n \"\"\"\n super().__init__(snapshot1, snapshot2)\n\n # Register diffs\n self._save_reg_diffs()\n\n # Memory map diffs\n self._resolve_maps_diff()\n\n if (self.snapshot1._process_name == self.snapshot2._process_name) and (\n self.snapshot1.aslr_enabled or self.snapshot2.aslr_enabled\n ):\n liblog.warning(\"ASLR is enabled in either or both snapshots. Diff may be messy.\")\n"},{"location":"from_pydoc/generated/snapshots/thread/thread_snapshot_diff/#libdebug.snapshots.thread.thread_snapshot_diff.ThreadSnapshotDiff.__init__","title":"__init__(snapshot1, snapshot2)","text":"Returns a diff between given snapshots of the same thread.
Parameters:
Name Type Description Default
snapshot1 ThreadSnapshot A thread snapshot. required
snapshot2 ThreadSnapshot A thread snapshot.
required Source code inlibdebug/snapshots/thread/thread_snapshot_diff.py def __init__(self: ThreadSnapshotDiff, snapshot1: ThreadSnapshot, snapshot2: ThreadSnapshot) -> ThreadSnapshotDiff:\n \"\"\"Returns a diff between given snapshots of the same thread.\n\n Args:\n snapshot1 (ThreadSnapshot): A thread snapshot.\n snapshot2 (ThreadSnapshot): A thread snapshot.\n \"\"\"\n super().__init__(snapshot1, snapshot2)\n\n # Register diffs\n self._save_reg_diffs()\n\n # Memory map diffs\n self._resolve_maps_diff()\n\n if (self.snapshot1._process_name == self.snapshot2._process_name) and (\n self.snapshot1.aslr_enabled or self.snapshot2.aslr_enabled\n ):\n liblog.warning(\"ASLR is enabled in either or both snapshots. Diff may be messy.\")\n"},{"location":"from_pydoc/generated/state/resume_context/","title":"libdebug.state.resume_context","text":""},{"location":"from_pydoc/generated/state/resume_context/#libdebug.state.resume_context.EventType","title":"EventType","text":"A class representing the type of event that caused the resume decision.
Source code inlibdebug/state/resume_context.py class EventType:\n \"\"\"A class representing the type of event that caused the resume decision.\"\"\"\n\n UNKNOWN = \"Unknown Event\"\n BREAKPOINT = \"Breakpoint\"\n SYSCALL = \"Syscall\"\n SIGNAL = \"Signal\"\n USER_INTERRUPT = \"User Interrupt\"\n STEP = \"Step\"\n STARTUP = \"Process Startup\"\n CLONE = \"Thread Clone\"\n FORK = \"Process Fork\"\n EXIT = \"Process Exit\"\n SECCOMP = \"Seccomp\"\n"},{"location":"from_pydoc/generated/state/resume_context/#libdebug.state.resume_context.ResumeContext","title":"ResumeContext","text":"A class representing the context of the resume decision.
Source code inlibdebug/state/resume_context.py class ResumeContext:\n \"\"\"A class representing the context of the resume decision.\"\"\"\n\n def __init__(self: ResumeContext) -> None:\n \"\"\"Initializes the ResumeContext.\"\"\"\n self.resume: bool = True\n self.force_interrupt: bool = False\n self.is_a_step: bool = False\n self.is_startup: bool = False\n self.block_on_signal: bool = False\n self.threads_with_signals_to_forward: list[int] = []\n self.event_type: dict[int, EventType] = {}\n self.event_hit_ref: dict[int, Breakpoint] = {}\n\n def clear(self: ResumeContext) -> None:\n \"\"\"Clears the context.\"\"\"\n self.resume = True\n self.force_interrupt = False\n self.is_a_step = False\n self.is_startup = False\n self.block_on_signal = False\n self.threads_with_signals_to_forward.clear()\n self.event_type.clear()\n self.event_hit_ref.clear()\n\n def get_event_type(self: ResumeContext) -> str:\n \"\"\"Returns the event type to be printed.\"\"\"\n event_str = \"\"\n if self.event_type:\n for tid, event in self.event_type.items():\n if event == EventType.BREAKPOINT:\n hit_ref = self.event_hit_ref[tid]\n if hit_ref.condition != \"x\":\n event_str += (\n f\"Watchpoint at {hit_ref.address:#x} with condition {hit_ref.condition} on thread {tid}.\"\n )\n else:\n event_str += f\"Breakpoint at {hit_ref.address:#x} on thread {tid}.\"\n elif event == EventType.SYSCALL:\n hit_ref = self.event_hit_ref[tid]\n event_str += f\"Syscall {hit_ref.syscall_number} on thread {tid}.\"\n elif event == EventType.SIGNAL:\n hit_ref = self.event_hit_ref[tid]\n event_str += f\"Signal {hit_ref.signal} on thread {tid}.\"\n else:\n event_str += f\"{event} on thread {tid}.\"\n\n return event_str\n"},{"location":"from_pydoc/generated/state/resume_context/#libdebug.state.resume_context.ResumeContext.__init__","title":"__init__()","text":"Initializes the ResumeContext.
Source code inlibdebug/state/resume_context.py def __init__(self: ResumeContext) -> None:\n \"\"\"Initializes the ResumeContext.\"\"\"\n self.resume: bool = True\n self.force_interrupt: bool = False\n self.is_a_step: bool = False\n self.is_startup: bool = False\n self.block_on_signal: bool = False\n self.threads_with_signals_to_forward: list[int] = []\n self.event_type: dict[int, EventType] = {}\n self.event_hit_ref: dict[int, Breakpoint] = {}\n"},{"location":"from_pydoc/generated/state/resume_context/#libdebug.state.resume_context.ResumeContext.clear","title":"clear()","text":"Clears the context.
Source code inlibdebug/state/resume_context.py def clear(self: ResumeContext) -> None:\n \"\"\"Clears the context.\"\"\"\n self.resume = True\n self.force_interrupt = False\n self.is_a_step = False\n self.is_startup = False\n self.block_on_signal = False\n self.threads_with_signals_to_forward.clear()\n self.event_type.clear()\n self.event_hit_ref.clear()\n"},{"location":"from_pydoc/generated/state/resume_context/#libdebug.state.resume_context.ResumeContext.get_event_type","title":"get_event_type()","text":"Returns the event type to be printed.
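The `get_event_type` logic — one formatted fragment per thread, keyed by event kind — can be reduced to a sketch. The `addresses` lookup here is a hypothetical stand-in for the real per-thread breakpoint hit references (`event_hit_ref`):

```python
class EventType:
    """Mirrors the string constants used by libdebug's EventType."""

    BREAKPOINT = "Breakpoint"
    SYSCALL = "Syscall"
    SIGNAL = "Signal"


def describe_events(event_type: dict, addresses: dict) -> str:
    """Build a printable summary of per-thread stop events."""
    event_str = ""
    for tid, event in event_type.items():
        if event == EventType.BREAKPOINT:
            # Breakpoint events can report the address that was hit.
            event_str += f"Breakpoint at {addresses[tid]:#x} on thread {tid}."
        else:
            event_str += f"{event} on thread {tid}."
    return event_str


summary = describe_events({42: EventType.BREAKPOINT}, {42: 0x401234})
```

The real method additionally distinguishes watchpoints (hardware breakpoints with a condition other than `"x"`) and includes syscall numbers and signal names from the hit references.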
Source code inlibdebug/state/resume_context.py def get_event_type(self: ResumeContext) -> str:\n \"\"\"Returns the event type to be printed.\"\"\"\n event_str = \"\"\n if self.event_type:\n for tid, event in self.event_type.items():\n if event == EventType.BREAKPOINT:\n hit_ref = self.event_hit_ref[tid]\n if hit_ref.condition != \"x\":\n event_str += (\n f\"Watchpoint at {hit_ref.address:#x} with condition {hit_ref.condition} on thread {tid}.\"\n )\n else:\n event_str += f\"Breakpoint at {hit_ref.address:#x} on thread {tid}.\"\n elif event == EventType.SYSCALL:\n hit_ref = self.event_hit_ref[tid]\n event_str += f\"Syscall {hit_ref.syscall_number} on thread {tid}.\"\n elif event == EventType.SIGNAL:\n hit_ref = self.event_hit_ref[tid]\n event_str += f\"Signal {hit_ref.signal} on thread {tid}.\"\n else:\n event_str += f\"{event} on thread {tid}.\"\n\n return event_str\n"},{"location":"from_pydoc/generated/state/thread_context/","title":"libdebug.state.thread_context","text":""},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext","title":"ThreadContext","text":" Bases: ABC
This object represents a thread in the context of the target process. It holds information about the thread's state, registers and stack.
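For example, the `signal` setter below accepts either a signal name or a number and normalizes it before storing. That resolution step can be approximated with the standard library's `signal` module (libdebug ships its own `resolve_signal_name`/`resolve_signal_number` helpers; this stdlib version is only a stand-in):

```python
import signal


def resolve_signal_number(name: str) -> int:
    """Map a signal name like "SIGINT" to its platform number."""
    return signal.Signals[name].value


def resolve_signal_name(number: int) -> str:
    """Map a signal number back to its canonical name."""
    return signal.Signals(number).name
```

Normalizing to a number at assignment time lets the rest of the debugger forward `_signal_number` directly, while the `signal` property converts back to a readable name on access.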
Source code inlibdebug/state/thread_context.py class ThreadContext(ABC):\n \"\"\"This object represents a thread in the context of the target process. It holds information about the thread's state, registers and stack.\"\"\"\n\n instruction_pointer: int\n \"\"\"The thread's instruction pointer.\"\"\"\n\n syscall_arg0: int\n \"\"\"The thread's syscall argument 0.\"\"\"\n\n syscall_arg1: int\n \"\"\"The thread's syscall argument 1.\"\"\"\n\n syscall_arg2: int\n \"\"\"The thread's syscall argument 2.\"\"\"\n\n syscall_arg3: int\n \"\"\"The thread's syscall argument 3.\"\"\"\n\n syscall_arg4: int\n \"\"\"The thread's syscall argument 4.\"\"\"\n\n syscall_arg5: int\n \"\"\"The thread's syscall argument 5.\"\"\"\n\n syscall_number: int\n \"\"\"The thread's syscall number.\"\"\"\n\n syscall_return: int\n \"\"\"The thread's syscall return value.\"\"\"\n\n regs: Registers\n \"\"\"The thread's registers.\"\"\"\n\n _internal_debugger: InternalDebugger | None = None\n \"\"\"The debugging context this thread belongs to.\"\"\"\n\n _register_holder: RegisterHolder | None = None\n \"\"\"The register holder object.\"\"\"\n\n _dead: bool = False\n \"\"\"Whether the thread is dead.\"\"\"\n\n _exit_code: int | None = None\n \"\"\"The thread's exit code.\"\"\"\n\n _exit_signal: int | None = None\n \"\"\"The thread's exit signal.\"\"\"\n\n _signal_number: int = 0\n \"\"\"The signal to forward to the thread.\"\"\"\n\n _thread_id: int\n \"\"\"The thread's ID.\"\"\"\n\n _snapshot_count: int = 0\n \"\"\"The number of snapshots taken.\"\"\"\n\n _zombie: bool = False\n \"\"\"Whether the thread is a zombie.\"\"\"\n\n def __init__(self: ThreadContext, thread_id: int, registers: RegisterHolder) -> None:\n \"\"\"Initializes the Thread Context.\"\"\"\n self._internal_debugger = provide_internal_debugger(self)\n self._thread_id = thread_id\n self._register_holder = registers\n regs_class = self._register_holder.provide_regs_class()\n self.regs = regs_class(thread_id, 
self._register_holder.provide_regs())\n self._register_holder.apply_on_regs(self.regs, regs_class)\n\n def set_as_dead(self: ThreadContext) -> None:\n \"\"\"Set the thread as dead.\"\"\"\n self._dead = True\n\n @property\n def debugger(self: ThreadContext) -> Debugger:\n \"\"\"The debugging context this thread belongs to.\"\"\"\n return self._internal_debugger.debugger\n\n @property\n def dead(self: ThreadContext) -> bool:\n \"\"\"Whether the thread is dead.\"\"\"\n return self._dead\n\n @property\n def memory(self: ThreadContext) -> AbstractMemoryView:\n \"\"\"The memory view of the debugged process.\"\"\"\n return self._internal_debugger.memory\n\n @property\n def mem(self: ThreadContext) -> AbstractMemoryView:\n \"\"\"Alias for the `memory` property.\n\n Get the memory view of the process.\n \"\"\"\n return self._internal_debugger.memory\n\n @property\n def process_id(self: ThreadContext) -> int:\n \"\"\"The process ID.\"\"\"\n return self._internal_debugger.process_id\n\n @property\n def pid(self: ThreadContext) -> int:\n \"\"\"Alias for `process_id` property.\n\n The process ID.\n \"\"\"\n return self._internal_debugger.process_id\n\n @property\n def thread_id(self: ThreadContext) -> int:\n \"\"\"The thread ID.\"\"\"\n return self._thread_id\n\n @property\n def tid(self: ThreadContext) -> int:\n \"\"\"The thread ID.\"\"\"\n return self._thread_id\n\n @property\n def running(self: ThreadContext) -> bool:\n \"\"\"Whether the process is running.\"\"\"\n return self._internal_debugger.running\n\n @property\n def saved_ip(self: ThreadContext) -> int:\n \"\"\"The return address of the current function.\"\"\"\n self._internal_debugger._ensure_process_stopped()\n stack_unwinder = stack_unwinding_provider(self._internal_debugger.arch)\n\n try:\n return_address = stack_unwinder.get_return_address(self, self._internal_debugger.maps)\n except (OSError, ValueError) as e:\n raise ValueError(\n \"Failed to get the return address. 
Check stack frame registers (e.g., base pointer).\",\n ) from e\n\n return return_address\n\n @property\n def exit_code(self: ThreadContext) -> int | None:\n \"\"\"The thread's exit code.\"\"\"\n self._internal_debugger._ensure_process_stopped()\n if not self.dead:\n liblog.warning(\"Thread is not dead. No exit code available.\")\n elif self._exit_code is None and self._exit_signal is not None:\n liblog.warning(\n \"Thread exited with signal %s. No exit code available.\",\n resolve_signal_name(self._exit_signal),\n )\n return self._exit_code\n\n @property\n def exit_signal(self: ThreadContext) -> str | None:\n \"\"\"The thread's exit signal.\"\"\"\n self._internal_debugger._ensure_process_stopped()\n if not self.dead:\n liblog.warning(\"Thread is not dead. No exit signal available.\")\n return None\n elif self._exit_signal is None and self._exit_code is not None:\n liblog.warning(\"Thread exited with code %d. No exit signal available.\", self._exit_code)\n return None\n return resolve_signal_name(self._exit_signal)\n\n @property\n def signal(self: ThreadContext) -> str | None:\n \"\"\"The signal will be forwarded to the thread.\"\"\"\n self._internal_debugger._ensure_process_stopped()\n return None if self._signal_number == 0 else resolve_signal_name(self._signal_number)\n\n @signal.setter\n def signal(self: ThreadContext, signal: str | int) -> None:\n \"\"\"Set the signal to forward to the thread.\"\"\"\n self._internal_debugger._ensure_process_stopped()\n if self._signal_number != 0:\n liblog.debugger(\n f\"Overwriting signal {resolve_signal_name(self._signal_number)} with {resolve_signal_name(signal) if isinstance(signal, int) else signal}.\",\n )\n if isinstance(signal, str):\n signal = resolve_signal_number(signal)\n self._signal_number = signal\n self._internal_debugger.resume_context.threads_with_signals_to_forward.append(self.thread_id)\n\n @property\n def signal_number(self: ThreadContext) -> int:\n \"\"\"The signal number to forward to the thread.\"\"\"\n 
return self._signal_number\n\n @property\n def zombie(self: ThreadContext) -> bool:\n \"\"\"Whether the thread is a zombie.\"\"\"\n return self._zombie\n\n def backtrace(self: ThreadContext, as_symbols: bool = False) -> list:\n \"\"\"Returns the current backtrace of the thread.\n\n Args:\n as_symbols (bool, optional): Whether to return the backtrace as symbols\n \"\"\"\n self._internal_debugger._ensure_process_stopped()\n stack_unwinder = stack_unwinding_provider(self._internal_debugger.arch)\n backtrace = stack_unwinder.unwind(self)\n if as_symbols:\n maps = self._internal_debugger.debugging_interface.get_maps()\n with extend_internal_debugger(self._internal_debugger):\n backtrace = [resolve_address_in_maps(x, maps) for x in backtrace]\n return backtrace\n\n def pprint_backtrace(self: ThreadContext) -> None:\n \"\"\"Pretty prints the current backtrace of the thread.\"\"\"\n self._internal_debugger._ensure_process_stopped()\n stack_unwinder = stack_unwinding_provider(self._internal_debugger.arch)\n backtrace = stack_unwinder.unwind(self)\n maps = self._internal_debugger.debugging_interface.get_maps()\n pprint_backtrace_util(backtrace, maps, self._internal_debugger.symbols)\n\n def pprint_registers(self: ThreadContext) -> None:\n \"\"\"Pretty prints the thread's registers.\"\"\"\n pprint_registers_util(\n self.regs,\n self._internal_debugger.maps,\n self._register_holder.provide_regs(),\n )\n\n def pprint_regs(self: ThreadContext) -> None:\n \"\"\"Alias for the `pprint_registers` method.\n\n Pretty prints the thread's registers.\n \"\"\"\n self.pprint_registers()\n\n def pprint_registers_all(self: ThreadContext) -> None:\n \"\"\"Pretty prints all the thread's registers.\"\"\"\n pprint_registers_all_util(\n self.regs,\n self._internal_debugger.maps,\n self._register_holder.provide_regs(),\n self._register_holder.provide_special_regs(),\n self._register_holder.provide_vector_fp_regs(),\n )\n\n def pprint_regs_all(self: ThreadContext) -> None:\n \"\"\"Alias for the 
`pprint_registers_all` method.\n\n Pretty prints all the thread's registers.\n \"\"\"\n self.pprint_registers_all()\n\n def step(self: ThreadContext) -> None:\n \"\"\"Executes a single instruction of the process.\"\"\"\n self._internal_debugger.step(self)\n\n def step_until(\n self: ThreadContext,\n position: int | str,\n max_steps: int = -1,\n file: str = \"hybrid\",\n ) -> None:\n \"\"\"Executes instructions of the process until the specified location is reached.\n\n Args:\n position (int | bytes): The location to reach.\n max_steps (int, optional): The maximum number of steps to execute. Defaults to -1.\n file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).\n \"\"\"\n self._internal_debugger.step_until(self, position, max_steps, file)\n\n def finish(self: ThreadContext, heuristic: str = \"backtrace\") -> None:\n \"\"\"Continues execution until the current function returns or the process stops.\n\n The command requires a heuristic to determine the end of the function. The available heuristics are:\n - `backtrace`: The debugger will place a breakpoint on the saved return address found on the stack and continue execution on all threads.\n - `step-mode`: The debugger will step on the specified thread until the current function returns. This will be slower.\n\n Args:\n heuristic (str, optional): The heuristic to use. Defaults to \"backtrace\".\n \"\"\"\n self._internal_debugger.finish(self, heuristic=heuristic)\n\n def next(self: ThreadContext) -> None:\n \"\"\"Executes the next instruction of the process. 
If the instruction is a call, the debugger will continue until the called function returns.\"\"\"\n self._internal_debugger.next(self)\n\n def si(self: ThreadContext) -> None:\n \"\"\"Alias for the `step` method.\n\n Executes a single instruction of the process.\n \"\"\"\n self._internal_debugger.step(self)\n\n def su(\n self: ThreadContext,\n position: int | str,\n max_steps: int = -1,\n ) -> None:\n \"\"\"Alias for the `step_until` method.\n\n Executes instructions of the process until the specified location is reached.\n\n Args:\n position (int | bytes): The location to reach.\n max_steps (int, optional): The maximum number of steps to execute. Defaults to -1.\n \"\"\"\n self._internal_debugger.step_until(self, position, max_steps)\n\n def fin(self: ThreadContext, heuristic: str = \"backtrace\") -> None:\n \"\"\"Alias for the `finish` method. Continues execution until the current function returns or the process stops.\n\n The command requires a heuristic to determine the end of the function. The available heuristics are:\n - `backtrace`: The debugger will place a breakpoint on the saved return address found on the stack and continue execution on all threads.\n - `step-mode`: The debugger will step on the specified thread until the current function returns. This will be slower.\n\n Args:\n heuristic (str, optional): The heuristic to use. Defaults to \"backtrace\".\n \"\"\"\n self._internal_debugger.finish(self, heuristic)\n\n def ni(self: ThreadContext) -> None:\n \"\"\"Alias for the `next` method. Executes the next instruction of the process. 
If the instruction is a call, the debugger will continue until the called function returns.\"\"\"\n self._internal_debugger.next(self)\n\n def __repr__(self: ThreadContext) -> str:\n \"\"\"Returns a string representation of the object.\"\"\"\n repr_str = \"ThreadContext()\\n\"\n repr_str += f\" Thread ID: {self.thread_id}\\n\"\n repr_str += f\" Process ID: {self.process_id}\\n\"\n repr_str += f\" Instruction Pointer: {self.instruction_pointer:#x}\\n\"\n repr_str += f\" Dead: {self.dead}\"\n return repr_str\n\n def create_snapshot(self: ThreadContext, level: str = \"base\", name: str | None = None) -> ThreadSnapshot:\n \"\"\"Create a snapshot of the current thread state.\n\n Snapshot levels:\n - base: Registers\n - writable: Registers, writable memory contents\n - full: Registers, all readable memory contents\n\n Args:\n level (str): The level of the snapshot.\n name (str, optional): The name of the snapshot. Defaults to None.\n\n Returns:\n ThreadSnapshot: The created snapshot.\n \"\"\"\n self._internal_debugger._ensure_process_stopped()\n return ThreadSnapshot(self, level, name)\n\n def notify_snapshot_taken(self: ThreadContext) -> None:\n \"\"\"Notify the thread that a snapshot has been taken.\"\"\"\n self._snapshot_count += 1\n"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext._dead","title":"_dead = False class-attribute instance-attribute","text":"Whether the thread is dead.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext._exit_code","title":"_exit_code = None class-attribute instance-attribute","text":"The thread's exit code.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext._exit_signal","title":"_exit_signal = None class-attribute instance-attribute","text":"The thread's exit signal.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext._internal_debugger","title":"_internal_debugger = provide_internal_debugger(self) class-attribute instance-attribute","text":"The debugging context this thread belongs to.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext._register_holder","title":"_register_holder = registers class-attribute instance-attribute","text":"The register holder object.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext._signal_number","title":"_signal_number = 0 class-attribute instance-attribute","text":"The signal to forward to the thread.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext._snapshot_count","title":"_snapshot_count = 0 class-attribute instance-attribute","text":"The number of snapshots taken.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext._thread_id","title":"_thread_id = thread_id instance-attribute","text":"The thread's ID.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext._zombie","title":"_zombie = False class-attribute instance-attribute","text":"Whether the thread is a zombie.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.dead","title":"dead property","text":"Whether the thread is dead.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.debugger","title":"debugger property","text":"The debugging context this thread belongs to.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.exit_code","title":"exit_code property","text":"The thread's exit code.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.exit_signal","title":"exit_signal property","text":"The thread's exit signal.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.instruction_pointer","title":"instruction_pointer instance-attribute","text":"The thread's instruction pointer.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.mem","title":"mem property","text":"Alias for the memory property.
Get the memory view of the process.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.memory","title":"memory property","text":"The memory view of the debugged process.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.pid","title":"pid property","text":"Alias for process_id property.
The process ID.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.process_id","title":"process_id property","text":"The process ID.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.regs","title":"regs = regs_class(thread_id, self._register_holder.provide_regs()) instance-attribute","text":"The thread's registers.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.running","title":"running property","text":"Whether the process is running.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.saved_ip","title":"saved_ip property","text":"The return address of the current function.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.signal","title":"signal property writable","text":"The signal will be forwarded to the thread.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.signal_number","title":"signal_number property","text":"The signal number to forward to the thread.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.syscall_arg0","title":"syscall_arg0 instance-attribute","text":"The thread's syscall argument 0.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.syscall_arg1","title":"syscall_arg1 instance-attribute","text":"The thread's syscall argument 1.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.syscall_arg2","title":"syscall_arg2 instance-attribute","text":"The thread's syscall argument 2.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.syscall_arg3","title":"syscall_arg3 instance-attribute","text":"The thread's syscall argument 3.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.syscall_arg4","title":"syscall_arg4 instance-attribute","text":"The thread's syscall argument 4.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.syscall_arg5","title":"syscall_arg5 instance-attribute","text":"The thread's syscall argument 5.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.syscall_number","title":"syscall_number instance-attribute","text":"The thread's syscall number.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.syscall_return","title":"syscall_return instance-attribute","text":"The thread's syscall return value.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.thread_id","title":"thread_id property","text":"The thread ID.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.tid","title":"tid property","text":"The thread ID.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.zombie","title":"zombie property","text":"Whether the thread is a zombie.
"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.__init__","title":"__init__(thread_id, registers)","text":"Initializes the Thread Context.
Source code in libdebug/state/thread_context.py def __init__(self: ThreadContext, thread_id: int, registers: RegisterHolder) -> None:\n \"\"\"Initializes the Thread Context.\"\"\"\n self._internal_debugger = provide_internal_debugger(self)\n self._thread_id = thread_id\n self._register_holder = registers\n regs_class = self._register_holder.provide_regs_class()\n self.regs = regs_class(thread_id, self._register_holder.provide_regs())\n self._register_holder.apply_on_regs(self.regs, regs_class)\n"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.__repr__","title":"__repr__()","text":"Returns a string representation of the object.
Source code in libdebug/state/thread_context.py def __repr__(self: ThreadContext) -> str:\n \"\"\"Returns a string representation of the object.\"\"\"\n repr_str = \"ThreadContext()\\n\"\n repr_str += f\" Thread ID: {self.thread_id}\\n\"\n repr_str += f\" Process ID: {self.process_id}\\n\"\n repr_str += f\" Instruction Pointer: {self.instruction_pointer:#x}\\n\"\n repr_str += f\" Dead: {self.dead}\"\n return repr_str\n"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.backtrace","title":"backtrace(as_symbols=False)","text":"Returns the current backtrace of the thread.
Parameters:
as_symbols (bool, optional): Whether to return the backtrace as symbols.
False Source code in libdebug/state/thread_context.py def backtrace(self: ThreadContext, as_symbols: bool = False) -> list:\n \"\"\"Returns the current backtrace of the thread.\n\n Args:\n as_symbols (bool, optional): Whether to return the backtrace as symbols\n \"\"\"\n self._internal_debugger._ensure_process_stopped()\n stack_unwinder = stack_unwinding_provider(self._internal_debugger.arch)\n backtrace = stack_unwinder.unwind(self)\n if as_symbols:\n maps = self._internal_debugger.debugging_interface.get_maps()\n with extend_internal_debugger(self._internal_debugger):\n backtrace = [resolve_address_in_maps(x, maps) for x in backtrace]\n return backtrace\n"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.create_snapshot","title":"create_snapshot(level='base', name=None)","text":"Create a snapshot of the current thread state.
Snapshot levels: - base: Registers - writable: Registers, writable memory contents - full: Registers, all readable memory contents
Parameters:
level (str): The level of the snapshot. Defaults to 'base'.
name (str, optional): The name of the snapshot. Defaults to None.
Returns:
ThreadSnapshot: The created snapshot.
Source code inlibdebug/state/thread_context.py def create_snapshot(self: ThreadContext, level: str = \"base\", name: str | None = None) -> ThreadSnapshot:\n \"\"\"Create a snapshot of the current thread state.\n\n Snapshot levels:\n - base: Registers\n - writable: Registers, writable memory contents\n - full: Registers, all readable memory contents\n\n Args:\n level (str): The level of the snapshot.\n name (str, optional): The name of the snapshot. Defaults to None.\n\n Returns:\n ThreadSnapshot: The created snapshot.\n \"\"\"\n self._internal_debugger._ensure_process_stopped()\n return ThreadSnapshot(self, level, name)\n"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.fin","title":"fin(heuristic='backtrace')","text":"Alias for the finish method. Continues execution until the current function returns or the process stops.
The command requires a heuristic to determine the end of the function. The available heuristics are: - backtrace: The debugger will place a breakpoint on the saved return address found on the stack and continue execution on all threads. - step-mode: The debugger will step on the specified thread until the current function returns. This will be slower.
Parameters:
heuristic (str, optional): The heuristic to use. Defaults to "backtrace".
'backtrace' Source code in libdebug/state/thread_context.py def fin(self: ThreadContext, heuristic: str = \"backtrace\") -> None:\n \"\"\"Alias for the `finish` method. Continues execution until the current function returns or the process stops.\n\n The command requires a heuristic to determine the end of the function. The available heuristics are:\n - `backtrace`: The debugger will place a breakpoint on the saved return address found on the stack and continue execution on all threads.\n - `step-mode`: The debugger will step on the specified thread until the current function returns. This will be slower.\n\n Args:\n heuristic (str, optional): The heuristic to use. Defaults to \"backtrace\".\n \"\"\"\n self._internal_debugger.finish(self, heuristic)\n"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.finish","title":"finish(heuristic='backtrace')","text":"Continues execution until the current function returns or the process stops.
The command requires a heuristic to determine the end of the function. The available heuristics are: - backtrace: The debugger will place a breakpoint on the saved return address found on the stack and continue execution on all threads. - step-mode: The debugger will step on the specified thread until the current function returns. This will be slower.
Parameters:
heuristic (str, optional): The heuristic to use. Defaults to "backtrace".
'backtrace' Source code in libdebug/state/thread_context.py def finish(self: ThreadContext, heuristic: str = \"backtrace\") -> None:\n \"\"\"Continues execution until the current function returns or the process stops.\n\n The command requires a heuristic to determine the end of the function. The available heuristics are:\n - `backtrace`: The debugger will place a breakpoint on the saved return address found on the stack and continue execution on all threads.\n - `step-mode`: The debugger will step on the specified thread until the current function returns. This will be slower.\n\n Args:\n heuristic (str, optional): The heuristic to use. Defaults to \"backtrace\".\n \"\"\"\n self._internal_debugger.finish(self, heuristic=heuristic)\n"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.next","title":"next()","text":"Executes the next instruction of the process. If the instruction is a call, the debugger will continue until the called function returns.
Source code in libdebug/state/thread_context.py def next(self: ThreadContext) -> None:\n \"\"\"Executes the next instruction of the process. If the instruction is a call, the debugger will continue until the called function returns.\"\"\"\n self._internal_debugger.next(self)\n"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.ni","title":"ni()","text":"Alias for the next method. Executes the next instruction of the process. If the instruction is a call, the debugger will continue until the called function returns.
Source code in libdebug/state/thread_context.py def ni(self: ThreadContext) -> None:\n \"\"\"Alias for the `next` method. Executes the next instruction of the process. If the instruction is a call, the debugger will continue until the called function returns.\"\"\"\n self._internal_debugger.next(self)\n"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.notify_snapshot_taken","title":"notify_snapshot_taken()","text":"Notify the thread that a snapshot has been taken.
Source code in libdebug/state/thread_context.py def notify_snapshot_taken(self: ThreadContext) -> None:\n \"\"\"Notify the thread that a snapshot has been taken.\"\"\"\n self._snapshot_count += 1\n"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.pprint_backtrace","title":"pprint_backtrace()","text":"Pretty prints the current backtrace of the thread.
Source code in libdebug/state/thread_context.py def pprint_backtrace(self: ThreadContext) -> None:\n \"\"\"Pretty prints the current backtrace of the thread.\"\"\"\n self._internal_debugger._ensure_process_stopped()\n stack_unwinder = stack_unwinding_provider(self._internal_debugger.arch)\n backtrace = stack_unwinder.unwind(self)\n maps = self._internal_debugger.debugging_interface.get_maps()\n pprint_backtrace_util(backtrace, maps, self._internal_debugger.symbols)\n"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.pprint_registers","title":"pprint_registers()","text":"Pretty prints the thread's registers.
Source code in libdebug/state/thread_context.py def pprint_registers(self: ThreadContext) -> None:\n \"\"\"Pretty prints the thread's registers.\"\"\"\n pprint_registers_util(\n self.regs,\n self._internal_debugger.maps,\n self._register_holder.provide_regs(),\n )\n"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.pprint_registers_all","title":"pprint_registers_all()","text":"Pretty prints all the thread's registers.
Source code in libdebug/state/thread_context.py def pprint_registers_all(self: ThreadContext) -> None:\n \"\"\"Pretty prints all the thread's registers.\"\"\"\n pprint_registers_all_util(\n self.regs,\n self._internal_debugger.maps,\n self._register_holder.provide_regs(),\n self._register_holder.provide_special_regs(),\n self._register_holder.provide_vector_fp_regs(),\n )\n"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.pprint_regs","title":"pprint_regs()","text":"Alias for the pprint_registers method.
Pretty prints the thread's registers.
Source code in libdebug/state/thread_context.py def pprint_regs(self: ThreadContext) -> None:\n \"\"\"Alias for the `pprint_registers` method.\n\n Pretty prints the thread's registers.\n \"\"\"\n self.pprint_registers()\n"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.pprint_regs_all","title":"pprint_regs_all()","text":"Alias for the pprint_registers_all method.
Pretty prints all the thread's registers.
Source code in libdebug/state/thread_context.py def pprint_regs_all(self: ThreadContext) -> None:\n \"\"\"Alias for the `pprint_registers_all` method.\n\n Pretty prints all the thread's registers.\n \"\"\"\n self.pprint_registers_all()\n"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.set_as_dead","title":"set_as_dead()","text":"Set the thread as dead.
Source code in libdebug/state/thread_context.py def set_as_dead(self: ThreadContext) -> None:\n \"\"\"Set the thread as dead.\"\"\"\n self._dead = True\n"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.si","title":"si()","text":"Alias for the step method.
Executes a single instruction of the process.
Source code in libdebug/state/thread_context.py def si(self: ThreadContext) -> None:\n \"\"\"Alias for the `step` method.\n\n Executes a single instruction of the process.\n \"\"\"\n self._internal_debugger.step(self)\n"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.step","title":"step()","text":"Executes a single instruction of the process.
Source code in libdebug/state/thread_context.py def step(self: ThreadContext) -> None:\n \"\"\"Executes a single instruction of the process.\"\"\"\n self._internal_debugger.step(self)\n"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.step_until","title":"step_until(position, max_steps=-1, file='hybrid')","text":"Executes instructions of the process until the specified location is reached.
Parameters:
position (int | str): The location to reach. Required.
max_steps (int, optional): The maximum number of steps to execute. Defaults to -1.
file (str, optional): The user-defined backing file to resolve the address in. Defaults to "hybrid" (libdebug will first try to resolve the address as an absolute address, then as a relative address w.r.t. the "binary" map file).
'hybrid' Source code in libdebug/state/thread_context.py def step_until(\n self: ThreadContext,\n position: int | str,\n max_steps: int = -1,\n file: str = \"hybrid\",\n) -> None:\n \"\"\"Executes instructions of the process until the specified location is reached.\n\n Args:\n position (int | bytes): The location to reach.\n max_steps (int, optional): The maximum number of steps to execute. Defaults to -1.\n file (str, optional): The user-defined backing file to resolve the address in. Defaults to \"hybrid\" (libdebug will first try to solve the address as an absolute address, then as a relative address w.r.t. the \"binary\" map file).\n \"\"\"\n self._internal_debugger.step_until(self, position, max_steps, file)\n"},{"location":"from_pydoc/generated/state/thread_context/#libdebug.state.thread_context.ThreadContext.su","title":"su(position, max_steps=-1)","text":"Alias for the step_until method.
Executes instructions of the process until the specified location is reached.
Parameters:
position (int | str): The location to reach. Required.
max_steps (int, optional): The maximum number of steps to execute. Defaults to -1.
-1 Source code in libdebug/state/thread_context.py def su(\n self: ThreadContext,\n position: int | str,\n max_steps: int = -1,\n) -> None:\n \"\"\"Alias for the `step_until` method.\n\n Executes instructions of the process until the specified location is reached.\n\n Args:\n position (int | bytes): The location to reach.\n max_steps (int, optional): The maximum number of steps to execute. Defaults to -1.\n \"\"\"\n self._internal_debugger.step_until(self, position, max_steps)\n"},{"location":"from_pydoc/generated/utils/ansi_escape_codes/","title":"libdebug.utils.ansi_escape_codes","text":""},{"location":"from_pydoc/generated/utils/ansi_escape_codes/#libdebug.utils.ansi_escape_codes.ANSIColors","title":"ANSIColors","text":"Class to define colors for the terminal.
Source code inlibdebug/utils/ansi_escape_codes.py class ANSIColors:\n \"\"\"Class to define colors for the terminal.\"\"\"\n\n RED = \"\\033[91m\"\n BLUE = \"\\033[94m\"\n GREEN = \"\\033[92m\"\n BRIGHT_YELLOW = \"\\033[93m\"\n YELLOW = \"\\033[33m\"\n PINK = \"\\033[95m\"\n CYAN = \"\\033[96m\"\n ORANGE = \"\\033[38;5;208m\"\n BOLD = \"\\033[1m\"\n UNDERLINE = \"\\033[4m\"\n STRIKE = \"\\033[9m\"\n DEFAULT_COLOR = \"\\033[39m\"\n RESET = \"\\033[0m\"\n"},{"location":"from_pydoc/generated/utils/arch_mappings/","title":"libdebug.utils.arch_mappings","text":""},{"location":"from_pydoc/generated/utils/arch_mappings/#libdebug.utils.arch_mappings.map_arch","title":"map_arch(arch)","text":"Map the architecture to the correct format.
Parameters:
arch (str): the architecture to map. Required.
Returns:
str: the mapped architecture.
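The lookup described above can be sketched standalone. Note that the real `ARCH_MAPPING` table lives in `libdebug.utils.arch_mappings` and is not reproduced in this page; the entries below are illustrative assumptions, not the library's full table.

```python
# Hypothetical subset of libdebug's alias table, for illustration only.
ARCH_MAPPING = {
    "x86_64": "amd64",
    "x64": "amd64",
    "x86": "i386",
    "arm64": "aarch64",
}

def map_arch(arch: str) -> str:
    """Map an architecture alias to its canonical name, mirroring the logic above."""
    arch = arch.lower()
    if arch in ARCH_MAPPING.values():
        # Already a canonical name: return it unchanged.
        return arch
    if arch in ARCH_MAPPING:
        # A known alias: translate it to the canonical name.
        return ARCH_MAPPING[arch]
    raise ValueError(f"Architecture {arch} not supported.")
```

Because canonical names are accepted as-is, the mapping is idempotent: `map_arch(map_arch(x)) == map_arch(x)` for any supported input.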
Source code inlibdebug/utils/arch_mappings.py def map_arch(arch: str) -> str:\n \"\"\"Map the architecture to the correct format.\n\n Args:\n arch (str): the architecture to map.\n\n Returns:\n str: the mapped architecture.\n \"\"\"\n arch = arch.lower()\n\n if arch in ARCH_MAPPING.values():\n return arch\n elif arch in ARCH_MAPPING:\n return ARCH_MAPPING[arch]\n else:\n raise ValueError(f\"Architecture {arch} not supported.\")\n"},{"location":"from_pydoc/generated/utils/debugger_wrappers/","title":"libdebug.utils.debugger_wrappers","text":""},{"location":"from_pydoc/generated/utils/debugger_wrappers/#libdebug.utils.debugger_wrappers.background_alias","title":"background_alias(alias_method)","text":"Decorator that automatically resolves the call to a different method if coming from the background thread.
Source code in libdebug/utils/debugger_wrappers.py def background_alias(alias_method: callable) -> callable:\n \"\"\"Decorator that automatically resolves the call to a different method if coming from the background thread.\"\"\"\n\n # This is the stupidest thing I've ever seen. Why Python, why?\n def _background_alias(method: callable) -> callable:\n @wraps(method)\n def inner(self: InternalDebugger, *args: ..., **kwargs: ...) -> ...:\n if self._is_in_background():\n return alias_method(self, *args, **kwargs)\n return method(self, *args, **kwargs)\n\n return inner\n\n return _background_alias\n"},{"location":"from_pydoc/generated/utils/debugger_wrappers/#libdebug.utils.debugger_wrappers.change_state_function_process","title":"change_state_function_process(method)","text":"Decorator to perform control flow checks before executing a method.
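The `background_alias` decorator above redirects a call to an alias method whenever a predicate on the instance is true. The dispatch pattern can be reproduced in isolation; the `Demo` class below is a hypothetical stand-in for `InternalDebugger`, used only to exercise the decorator.

```python
from functools import wraps

def background_alias(alias_method):
    """Redirect a method call to alias_method when the instance reports
    it is in the background, mirroring the source shown above."""
    def _background_alias(method):
        @wraps(method)
        def inner(self, *args, **kwargs):
            if self._is_in_background():
                return alias_method(self, *args, **kwargs)
            return method(self, *args, **kwargs)
        return inner
    return _background_alias

class Demo:
    """Hypothetical stand-in for the decorated debugger object."""

    def __init__(self, background: bool):
        self._background = background

    def _is_in_background(self) -> bool:
        return self._background

    def _cont_background(self):
        return "handled in background"

    # At class-body time _cont_background is a plain function, so it can be
    # passed directly as the alias target.
    @background_alias(_cont_background)
    def cont(self):
        return "handled in foreground"
```

The caller always invokes `cont()`; which implementation runs is decided per call by the predicate, so user scripts and internal background callbacks can share one public API.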
Source code in libdebug/utils/debugger_wrappers.py def change_state_function_process(method: callable) -> callable:\n \"\"\"Decorator to perform control flow checks before executing a method.\"\"\"\n\n @wraps(method)\n def wrapper(self: InternalDebugger, *args: ..., **kwargs: ...) -> ...:\n if not self.instanced:\n raise RuntimeError(\n \"Process not running. Did you call run() or attach()?\",\n )\n\n if not self.is_debugging:\n raise RuntimeError(\n \"No process is being debugged. Check your script.\",\n )\n\n # We have to ensure that the process is stopped before executing the method\n self._ensure_process_stopped()\n\n # We have to ensure that at least one thread is alive before executing the method\n if self.threads[0].dead:\n raise RuntimeError(\"All threads are dead.\")\n return method(self, *args, **kwargs)\n\n return wrapper\n"},{"location":"from_pydoc/generated/utils/debugger_wrappers/#libdebug.utils.debugger_wrappers.change_state_function_thread","title":"change_state_function_thread(method)","text":"Decorator to perform control flow checks before executing a method.
Source code in libdebug/utils/debugger_wrappers.py def change_state_function_thread(method: callable) -> callable:\n \"\"\"Decorator to perform control flow checks before executing a method.\"\"\"\n\n @wraps(method)\n def wrapper(\n self: InternalDebugger,\n thread: ThreadContext,\n *args: ...,\n **kwargs: ...,\n ) -> ...:\n if not self.instanced:\n raise RuntimeError(\n \"Process not running. Did you call run() or attach()?\",\n )\n\n if not self.is_debugging:\n raise RuntimeError(\n \"No process is being debugged. Check your script.\",\n )\n\n # We have to ensure that the process is stopped before executing the method\n self._ensure_process_stopped()\n\n # We have to ensure that at least one thread is alive before executing the method\n if thread.dead:\n raise RuntimeError(\"The thread is dead.\")\n return method(self, thread, *args, **kwargs)\n\n return wrapper\n"},{"location":"from_pydoc/generated/utils/debugging_utils/","title":"libdebug.utils.debugging_utils","text":""},{"location":"from_pydoc/generated/utils/debugging_utils/#libdebug.utils.debugging_utils.normalize_and_validate_address","title":"normalize_and_validate_address(address, maps)","text":"Normalizes and validates the specified address.
Parameters:
address (int): The address to normalize and validate. Required.
maps (MemoryMapList[MemoryMap]): The memory maps. Required.
Returns:
int: The normalized address.
Throws:
ValueError: If the specified address does not belong to any memory map.
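The normalization rule is: an address below the lowest mapped region is treated as a relative offset into a PIE binary and rebased onto the first map; the result must then fall inside some map. A standalone sketch, using a simplified `MemoryMap` stand-in rather than the real `libdebug.data` classes:

```python
from dataclasses import dataclass

@dataclass
class MemoryMap:
    """Minimal stand-in for libdebug's MemoryMap: just a [start, end) range."""
    start: int
    end: int

def normalize_and_validate_address(address: int, maps: list) -> int:
    """Mirror of the normalization logic shown above."""
    if address < maps[0].start:
        # Below the lowest map: assume a relative address for a PIE binary
        # and rebase it onto the first map.
        address += maps[0].start
    for vmap in maps:
        if vmap.start <= address < vmap.end:
            return address
    raise ValueError(f"Address {hex(address)} does not belong to any memory map.")

# A single illustrative map at a typical PIE load address.
maps = [MemoryMap(0x555555554000, 0x555555558000)]
```

So `0x1234` is rebased to `0x555555554000 + 0x1234`, while an already-absolute address inside the map is returned unchanged.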
Source code inlibdebug/utils/debugging_utils.py def normalize_and_validate_address(address: int, maps: MemoryMapList[MemoryMap]) -> int:\n \"\"\"Normalizes and validates the specified address.\n\n Args:\n address (int): The address to normalize and validate.\n maps (MemoryMapList[MemoryMap]): The memory maps.\n\n Returns:\n int: The normalized address.\n\n Throws:\n ValueError: If the specified address does not belong to any memory map.\n \"\"\"\n if address < maps[0].start:\n # The address is lower than the base address of the lowest map. Suppose it is a relative address for a PIE binary.\n address += maps[0].start\n\n for vmap in maps:\n if vmap.start <= address < vmap.end:\n return address\n\n raise ValueError(f\"Address {hex(address)} does not belong to any memory map.\")\n"},{"location":"from_pydoc/generated/utils/debugging_utils/#libdebug.utils.debugging_utils.resolve_address_in_maps","title":"resolve_address_in_maps(address, maps)","text":"Returns the symbol corresponding to the specified address in the specified memory maps.
Parameters:
- address (int, required): The address whose symbol should be returned.
- maps (MemoryMapList[MemoryMap], required): The memory maps.

Returns:
- str: The symbol corresponding to the specified address in the specified memory maps.

Throws:
- ValueError: If the specified address does not belong to any memory map.
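Before resolving, the function collapses all maps backed by the same file into a single [base, top) range, skipping anonymous and special mappings like "[heap]". A sketch of that merging step (helper name and tuple layout are illustrative):

```python
def merge_file_ranges(maps):
    # maps: list of (start, end, backing_file) tuples.
    # Special mappings ("[heap]", "[stack]", "") are skipped; for each
    # file the base is the first map's start and the top is the last
    # map's end, mirroring resolve_address_in_maps.
    merged = {}
    for start, end, file in maps:
        if not file or file.startswith("["):
            continue
        if file not in merged:
            merged[file] = (start, end)
        else:
            merged[file] = (merged[file][0], end)
    return merged

ranges = merge_file_ranges([
    (0x1000, 0x2000, "/bin/cat"),
    (0x2000, 0x3000, "/bin/cat"),
    (0x3000, 0x4000, "[heap]"),
    (0x5000, 0x6000, "/lib/libc.so.6"),
])
print(ranges)
```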
Source code inlibdebug/utils/debugging_utils.py def resolve_address_in_maps(address: int, maps: MemoryMapList[MemoryMap]) -> str:\n \"\"\"Returns the symbol corresponding to the specified address in the specified memory maps.\n\n Args:\n address (int): The address whose symbol should be returned.\n maps (MemoryMapList[MemoryMap]): The memory maps.\n\n Returns:\n str: The symbol corresponding to the specified address in the specified memory maps.\n\n Throws:\n ValueError: If the specified address does not belong to any memory map.\n \"\"\"\n mapped_files = {}\n\n for vmap in maps:\n file = vmap.backing_file\n if not file or file[0] == \"[\":\n continue\n\n if file not in mapped_files:\n mapped_files[file] = (vmap.start, vmap.end)\n else:\n mapped_files[file] = (mapped_files[file][0], vmap.end)\n\n for file, (base_address, top_address) in mapped_files.items():\n # Check if the address is in the range of the current section\n if address < base_address or address >= top_address:\n continue\n\n try:\n return resolve_address(file, address - base_address) if is_pie(file) else resolve_address(file, address)\n except OSError as e:\n liblog.debugger(f\"Error while resolving address {hex(address)} in {file}: {e}\")\n except ValueError:\n pass\n\n return hex(address)\n"},{"location":"from_pydoc/generated/utils/debugging_utils/#libdebug.utils.debugging_utils.resolve_symbol_in_maps","title":"resolve_symbol_in_maps(symbol, maps)","text":"Returns the address of the specified symbol in the specified memory maps.
Parameters:
- symbol (str, required): The symbol whose address should be returned.
- maps (MemoryMapList[MemoryMap], required): The memory maps.

Returns:
- int: The address of the specified symbol in the specified memory maps.

Throws:
- ValueError: If the specified symbol does not belong to any memory map.
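The symbol argument may carry a hexadecimal offset, as in "main+1a". A sketch of the parsing step (hypothetical helper name):

```python
def split_symbol(symbol: str):
    # "main+1a" -> ("main", 0x1a); the offset is parsed as hex,
    # exactly as resolve_symbol_in_maps does with int(offset_str, 16).
    if "+" in symbol:
        name, offset_str = symbol.split("+")
        return name, int(offset_str, 16)
    return symbol, 0

print(split_symbol("main+1a"))
```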
Source code inlibdebug/utils/debugging_utils.py def resolve_symbol_in_maps(symbol: str, maps: MemoryMapList[MemoryMap]) -> int:\n \"\"\"Returns the address of the specified symbol in the specified memory maps.\n\n Args:\n symbol (str): The symbol whose address should be returned.\n maps (MemoryMapList[MemoryMap]): The memory maps.\n\n Returns:\n int: The address of the specified symbol in the specified memory maps.\n\n Throws:\n ValueError: If the specified symbol does not belong to any memory map.\n \"\"\"\n mapped_files = {}\n\n if \"+\" in symbol:\n symbol, offset_str = symbol.split(\"+\")\n offset = int(offset_str, 16)\n else:\n offset = 0\n\n for vmap in maps:\n if vmap.backing_file and vmap.backing_file not in mapped_files and vmap.backing_file[0] != \"[\":\n mapped_files[vmap.backing_file] = vmap.start\n\n for file, base_address in mapped_files.items():\n try:\n address = resolve_symbol(file, symbol)\n\n if is_pie(file):\n address += base_address\n\n return address + offset\n except OSError as e:\n liblog.debugger(f\"Error while resolving symbol {symbol} in {file}: {e}\")\n except ValueError:\n pass\n\n raise ValueError(f\"Symbol {symbol} not found in the specified mapped file. Please specify a valid symbol.\")\n"},{"location":"from_pydoc/generated/utils/debugging_utils/#libdebug.utils.debugging_utils.resolve_symbol_name_in_maps_util","title":"resolve_symbol_name_in_maps_util(address, external_symbols)","text":"Resolves the address.
Source code inlibdebug/utils/debugging_utils.py def resolve_symbol_name_in_maps_util(\n address: int,\n external_symbols: SymbolList,\n) -> str:\n \"\"\"Resolves the address.\"\"\"\n if not external_symbols:\n return f\"{address:#x}\"\n\n matching_symbols = external_symbols._search_by_address(address)\n\n if len(matching_symbols) == 0:\n return f\"{address:#x}\"\n elif len(matching_symbols) > 1:\n liblog.warning(f\"Multiple symbols found for address {address:#x}. Taking the first one.\")\n\n return matching_symbols[0].name\n"},{"location":"from_pydoc/generated/utils/elf_utils/","title":"libdebug.utils.elf_utils","text":""},{"location":"from_pydoc/generated/utils/elf_utils/#libdebug.utils.elf_utils._collect_external_info","title":"_collect_external_info(path) cached","text":"Returns a dictionary containing the symbols taken from the external debuginfo file.
Parameters:
- path (str, required): The path to the ELF file.

Returns:
- SymbolList[Symbol]: A list containing the symbols taken from the external debuginfo file.
Source code inlibdebug/utils/elf_utils.py @functools.cache\ndef _collect_external_info(path: str) -> SymbolList[Symbol]:\n \"\"\"Returns a dictionary containing the symbols taken from the external debuginfo file.\n\n Args:\n path (str): The path to the ELF file.\n\n Returns:\n SymbolList[Symbol]: A list containing the symbols taken from the external debuginfo file.\n \"\"\"\n liblog.debugger(\"Collecting external symbols from %s\", path)\n\n if not libdebug_debug_sym_parser.HAS_SYMBOL_SUPPORT:\n return SymbolList([], get_global_internal_debugger())\n\n ext_symbols = libdebug_debug_sym_parser.collect_external_symbols(path, libcontext.sym_lvl)\n\n return SymbolList(\n [Symbol(symbol.low_pc, symbol.high_pc, symbol.name, path) for symbol in ext_symbols],\n get_global_internal_debugger(),\n )\n"},{"location":"from_pydoc/generated/utils/elf_utils/#libdebug.utils.elf_utils._debuginfod","title":"_debuginfod(buildid) cached","text":"Returns the path to the debuginfo file corresponding to the specified buildid.
Parameters:
- buildid (str, required): The buildid of the debuginfo file.

Returns:
- debuginfod_path (Path): The path to the debuginfo file corresponding to the specified buildid.
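The cache location follows the debuginfod client convention under the user's home directory. A sketch of the path construction (helper name is illustrative):

```python
from pathlib import Path

def debuginfod_cache_path(buildid: str) -> Path:
    # Same layout as _debuginfod:
    # ~/.cache/debuginfod_client/<buildid>/debuginfo
    return Path.home() / ".cache" / "debuginfod_client" / buildid / "debuginfo"

p = debuginfod_cache_path("deadbeef")
print(p)
```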
Source code inlibdebug/utils/elf_utils.py @functools.cache\ndef _debuginfod(buildid: str) -> Path:\n \"\"\"Returns the path to the debuginfo file corresponding to the specified buildid.\n\n Args:\n buildid (str): The buildid of the debuginfo file.\n\n Returns:\n debuginfod_path (Path): The path to the debuginfo file corresponding to the specified buildid.\n \"\"\"\n debuginfod_path = Path.home() / \".cache\" / \"debuginfod_client\" / buildid / \"debuginfo\"\n\n if not debuginfod_path.exists():\n liblog.info(f\"Downloading debuginfo file for buildid {buildid}\")\n _download_debuginfod(buildid, debuginfod_path)\n\n return debuginfod_path\n"},{"location":"from_pydoc/generated/utils/elf_utils/#libdebug.utils.elf_utils._download_debuginfod","title":"_download_debuginfod(buildid, debuginfod_path)","text":"Downloads the debuginfo file corresponding to the specified buildid.
Parameters:
- buildid (str, required): The buildid of the debuginfo file.
- debuginfod_path (Path, required): The output directory.
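The function caches a successful payload, caches an empty file on 404 so the server is not queried again for the same buildid, and caches nothing on other errors so the download is retried next time. A pure-function sketch of that policy (simplified: the `r.ok` check is reduced to a 2xx test here):

```python
NOT_FOUND = 404

def cache_decision(status_code: int, body: bytes):
    # Mirrors _download_debuginfod's caching policy:
    #   2xx  -> cache the payload
    #   404  -> cache an empty file to avoid repeated requests
    #   else -> cache nothing; retry on the next lookup
    if 200 <= status_code < 300:
        return body
    if status_code == NOT_FOUND:
        return b""
    return None

print(cache_decision(404, b"ignored"))  # empty payload is cached
```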
required Source code inlibdebug/utils/elf_utils.py def _download_debuginfod(buildid: str, debuginfod_path: Path) -> None:\n \"\"\"Downloads the debuginfo file corresponding to the specified buildid.\n\n Args:\n buildid (str): The buildid of the debuginfo file.\n debuginfod_path (Path): The output directory.\n \"\"\"\n try:\n url = libcontext.debuginfod_server + \"buildid/\" + buildid + \"/debuginfo\"\n r = requests.get(url, allow_redirects=True, timeout=1)\n\n if r.ok:\n # We found the debuginfo file, just use it\n content = r.content\n elif r.status_code == NOT_FOUND:\n # We need to cache the empty content to avoid multiple requests\n liblog.error(f\"Debuginfo file for buildid {buildid} not found.\")\n content = b\"\"\n else:\n # We do not cache the content in case of error. We will retry the download next time.\n liblog.error(f\"Failed to download debuginfo file. Error code: {r.status_code}\")\n return\n\n debuginfod_path.parent.mkdir(parents=True, exist_ok=True)\n with debuginfod_path.open(\"wb\") as f:\n f.write(content)\n except Exception as e:\n liblog.debugger(f\"Exception {e} occurred while downloading debuginfod symbols\")\n"},{"location":"from_pydoc/generated/utils/elf_utils/#libdebug.utils.elf_utils._parse_elf_file","title":"_parse_elf_file(path, debug_info_level) cached","text":"Returns a dictionary containing the symbols of the specified ELF file and the buildid.
Parameters:
- path (str, required): The path to the ELF file.
- debug_info_level (int, required): The debug info level.

Returns:
- symbols (SymbolList[Symbol]): A list containing the symbols of the specified ELF file.
- buildid (str): The buildid of the specified ELF file.
- debug_file_path (str): The path to the corresponding external debuginfo file.
Source code inlibdebug/utils/elf_utils.py @functools.cache\ndef _parse_elf_file(path: str, debug_info_level: int) -> tuple[SymbolList[Symbol], str | None, str | None]:\n \"\"\"Returns a dictionary containing the symbols of the specified ELF file and the buildid.\n\n Args:\n path (str): The path to the ELF file.\n debug_info_level (int): The debug info level.\n\n Returns:\n symbols (SymbolList[Symbol): A list containing the symbols of the specified ELF file.\n buildid (str): The buildid of the specified ELF file.\n debug_file_path (str): The path to the external debuginfo file corresponding.\n \"\"\"\n liblog.debugger(\"Searching for symbols in %s\", path)\n\n if not libdebug_debug_sym_parser.HAS_SYMBOL_SUPPORT:\n return SymbolList([], get_global_internal_debugger()), None, None\n\n elfinfo = libdebug_debug_sym_parser.read_elf_info(path, debug_info_level)\n\n symbols = [Symbol(symbol.low_pc, symbol.high_pc, symbol.name, path) for symbol in elfinfo.symbols]\n\n return SymbolList(symbols, get_global_internal_debugger()), elfinfo.build_id, elfinfo.debuglink\n"},{"location":"from_pydoc/generated/utils/elf_utils/#libdebug.utils.elf_utils.elf_architecture","title":"elf_architecture(path)","text":"Returns the architecture of the specified ELF file.
Parameters:
- path (str, required): The path to the ELF file.

Returns:
- str: The architecture of the specified ELF file.
Source code inlibdebug/utils/elf_utils.py def elf_architecture(path: str) -> str:\n \"\"\"Returns the architecture of the specified ELF file.\n\n Args:\n path (str): The path to the ELF file.\n\n Returns:\n str: The architecture of the specified ELF file.\n \"\"\"\n return parse_elf_characteristics(path)[2]\n"},{"location":"from_pydoc/generated/utils/elf_utils/#libdebug.utils.elf_utils.get_all_symbols","title":"get_all_symbols(backing_files)","text":"Returns a list of all the symbols in the target process.
Parameters:
- backing_files (set[str], required): The set of backing files.

Returns:
- SymbolList[Symbol]: A list of all the symbols in the target process.
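Which sources are consulted depends on `libcontext.sym_lvl`: the ELF symbol table whenever resolution is enabled, the local debuglink file above level 2, and debuginfod above level 4. A sketch of that tiering (illustrative reduction; the real function raises when sym_lvl is 0 instead of returning an empty result):

```python
def symbol_sources(sym_lvl: int, buildid, debug_file):
    # Which symbol sources get_all_symbols consults at a given sym_lvl.
    sources = []
    if sym_lvl == 0:
        return sources  # resolution disabled (the real code raises here)
    sources.append("symtab")
    if buildid and debug_file and sym_lvl > 2:
        sources.append("local-debuglink")
    if buildid and sym_lvl > 4:
        sources.append("debuginfod")
    return sources

print(symbol_sources(5, "deadbeef", "cat.debug"))
```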
Source code inlibdebug/utils/elf_utils.py def get_all_symbols(backing_files: set[str]) -> SymbolList[Symbol]:\n \"\"\"Returns a list of all the symbols in the target process.\n\n Args:\n backing_files (set[str]): The set of backing files.\n\n Returns:\n SymbolList[Symbol]: A list of all the symbols in the target process.\n \"\"\"\n symbols = SymbolList([], get_global_internal_debugger())\n\n if libcontext.sym_lvl == 0:\n raise Exception(\n \"Symbol resolution is disabled. Please enable it by setting the sym_lvl libcontext parameter to a value greater than 0.\",\n )\n\n for file in backing_files:\n # Retrieve the symbols from the SymbolTableSection\n new_symbols, buildid, debug_file = _parse_elf_file(file, libcontext.sym_lvl)\n symbols += new_symbols\n\n # Retrieve the symbols from the external debuginfo file\n if buildid and debug_file and libcontext.sym_lvl > 2:\n folder = buildid[:2]\n absolute_debug_path_str = str((LOCAL_DEBUG_PATH / folder / debug_file).resolve())\n symbols += _collect_external_info(absolute_debug_path_str)\n\n # Retrieve the symbols from debuginfod\n if buildid and libcontext.sym_lvl > 4:\n absolute_debug_path = _debuginfod(buildid)\n if absolute_debug_path.exists():\n symbols += _collect_external_info(str(absolute_debug_path))\n\n return symbols\n"},{"location":"from_pydoc/generated/utils/elf_utils/#libdebug.utils.elf_utils.get_entry_point","title":"get_entry_point(path)","text":"Returns the entry point of the specified ELF file.
Parameters:
- path (str, required): The path to the ELF file.

Returns:
- int: The entry point of the specified ELF file.
Source code inlibdebug/utils/elf_utils.py def get_entry_point(path: str) -> int:\n \"\"\"Returns the entry point of the specified ELF file.\n\n Args:\n path (str): The path to the ELF file.\n\n Returns:\n int: The entry point of the specified ELF file.\n \"\"\"\n return parse_elf_characteristics(path)[1]\n"},{"location":"from_pydoc/generated/utils/elf_utils/#libdebug.utils.elf_utils.is_pie","title":"is_pie(path)","text":"Returns True if the specified ELF file is position independent, False otherwise.
Parameters:
- path (str, required): The path to the ELF file.

Returns:
- bool: True if the specified ELF file is position independent, False otherwise.
Source code inlibdebug/utils/elf_utils.py def is_pie(path: str) -> bool:\n \"\"\"Returns True if the specified ELF file is position independent, False otherwise.\n\n Args:\n path (str): The path to the ELF file.\n\n Returns:\n bool: True if the specified ELF file is position independent, False otherwise.\n \"\"\"\n return parse_elf_characteristics(path)[0]\n"},{"location":"from_pydoc/generated/utils/elf_utils/#libdebug.utils.elf_utils.parse_elf_characteristics","title":"parse_elf_characteristics(path) cached","text":"Returns a tuple containing the PIE flag, the entry point and the architecture of the specified ELF file.
Parameters:
- path (str, required): The path to the ELF file.

Returns:
- tuple[bool, int, str]: A tuple containing the PIE flag, the entry point and the architecture of the specified ELF file.
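The PIE flag comes from the ELF header's `e_type` field: `ET_DYN` marks a position-independent binary, `ET_EXEC` a fixed-base one. The real function reads this through pyelftools; a byte-level sketch of the same check (little-endian header assumed, fabricated header bytes for illustration):

```python
import struct

def is_pie_header(header: bytes) -> bool:
    # e_ident occupies the first 16 bytes; e_type is the 2-byte field
    # right after it. ET_DYN (3) -> position independent, mirroring
    # parse_elf_characteristics' e_type == "ET_DYN" check.
    assert header[:4] == b"\x7fELF", "not an ELF file"
    (e_type,) = struct.unpack_from("<H", header, 16)
    return e_type == 3  # ET_DYN

# Fabricated 64-byte ELF64 header: magic, class/data/version/ABI, padding,
# then e_type = ET_DYN.
fake = b"\x7fELF" + b"\x02\x01\x01\x00" + b"\x00" * 8 + struct.pack("<H", 3) + b"\x00" * 46
print(is_pie_header(fake))
```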
Source code inlibdebug/utils/elf_utils.py @functools.cache\ndef parse_elf_characteristics(path: str) -> tuple[bool, int, str]:\n \"\"\"Returns a tuple containing the PIE flag, the entry point and the architecture of the specified ELF file.\n\n Args:\n path (str): The path to the ELF file.\n\n Returns:\n tuple: A tuple containing the PIE flag, the entry point and the architecture of the specified ELF file.\n \"\"\"\n with Path(path).open(\"rb\") as elf_file:\n elf = ELFFile(elf_file)\n\n pie = elf.header.e_type == \"ET_DYN\"\n entry_point = elf.header.e_entry\n arch = elf.get_machine_arch()\n\n return pie, entry_point, arch\n"},{"location":"from_pydoc/generated/utils/elf_utils/#libdebug.utils.elf_utils.resolve_address","title":"resolve_address(path, address) cached","text":"Returns the symbol corresponding to the specified address in the specified ELF file.
Parameters:
- path (str, required): The path to the ELF file.
- address (int, required): The address whose symbol should be returned.

Returns:
- str: The symbol corresponding to the specified address in the specified ELF file.
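Resolved symbols are rendered as `name+offset` with a bare hexadecimal offset, as in the source below. A sketch of the formatting (hypothetical helper name):

```python
def format_symbol(name: str, sym_start: int, address: int) -> str:
    # resolve_address returns "<name>+<hex offset>" with no 0x prefix.
    return f"{name}+{address - sym_start:x}"

print(format_symbol("main", 0x1130, 0x114A))  # main+1a
```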
Source code inlibdebug/utils/elf_utils.py @functools.cache\ndef resolve_address(path: str, address: int) -> str:\n \"\"\"Returns the symbol corresponding to the specified address in the specified ELF file.\n\n Args:\n path (str): The path to the ELF file.\n address (int): The address whose symbol should be returned.\n\n Returns:\n str: The symbol corresponding to the specified address in the specified ELF file.\n \"\"\"\n if libcontext.sym_lvl == 0:\n return hex(address)\n\n # Retrieve the symbols from the SymbolTableSection\n symbols, buildid, debug_file = _parse_elf_file(path, libcontext.sym_lvl)\n symbols = [symbol for symbol in symbols if symbol.start <= address < symbol.end]\n if symbols:\n symbol = symbols[0]\n return f\"{symbol.name}+{address - symbol.start:x}\"\n\n # Retrieve the symbols from the external debuginfo file\n if buildid and debug_file and libcontext.sym_lvl > 2:\n folder = buildid[:2]\n absolute_debug_path_str = str((LOCAL_DEBUG_PATH / folder / debug_file).resolve())\n symbols = _collect_external_info(absolute_debug_path_str)\n symbols = [symbol for symbol in symbols if symbol.start <= address < symbol.end]\n if symbols:\n symbol = symbols[0]\n return f\"{symbol.name}+{address - symbol.start:x}\"\n\n # Retrieve the symbols from debuginfod\n if buildid and libcontext.sym_lvl > 4:\n absolute_debug_path = _debuginfod(buildid)\n if absolute_debug_path.exists():\n symbols = _collect_external_info(str(absolute_debug_path))\n symbols = [symbol for symbol in symbols if symbol.start <= address < symbol.end]\n if symbols:\n symbol = symbols[0]\n return f\"{symbol.name}+{address - symbol.start:x}\"\n\n # Address not found\n raise ValueError(f\"Address {hex(address)} not found in {path}. Please specify a valid address.\")\n"},{"location":"from_pydoc/generated/utils/elf_utils/#libdebug.utils.elf_utils.resolve_argv_path","title":"resolve_argv_path(argv_path)","text":"Resolve the path of the binary to debug.
Parameters:
- argv_path (str, required): The provided path of the binary to debug.

Returns:
- str: The resolved path of the binary to debug.
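The resolution order is: expand `~`, keep absolute paths and existing files as-is, and fall back to a `PATH` lookup via `shutil.which` otherwise. A self-contained sketch of the same logic (helper name is illustrative):

```python
import shutil
from pathlib import Path

def resolve_argv_path_sketch(argv_path: str) -> str:
    # Mirrors resolve_argv_path's three-way resolution:
    # absolute path -> as-is; existing file -> as-is; else PATH lookup.
    p = Path(argv_path).expanduser()
    if p.is_absolute() or p.is_file():
        return str(p)
    return found if (found := shutil.which(str(p))) else str(p)

print(resolve_argv_path_sketch("/usr/bin/env"))
```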
Source code inlibdebug/utils/elf_utils.py def resolve_argv_path(argv_path: str) -> str:\n \"\"\"Resolve the path of the binary to debug.\n\n Args:\n argv_path (str): The provided path of the binary to debug.\n\n Returns:\n str: The resolved path of the binary to debug.\n \"\"\"\n argv_path_expanded = Path(argv_path).expanduser()\n\n # Check if the path is absolute after expansion\n if argv_path_expanded.is_absolute():\n # It's an absolute path, return it as is\n resolved_path = argv_path_expanded\n elif argv_path_expanded.is_file():\n # It already points to a file, return it\n resolved_path = argv_path_expanded\n else:\n # Try to resolve the path using shutil\n resolved_path = abs_path if (abs_path := shutil.which(argv_path_expanded)) else argv_path_expanded\n return str(resolved_path)\n"},{"location":"from_pydoc/generated/utils/elf_utils/#libdebug.utils.elf_utils.resolve_symbol","title":"resolve_symbol(path, symbol) cached","text":"Returns the address of the specified symbol in the specified ELF file.
Parameters:
- path (str, required): The path to the ELF file.
- symbol (str, required): The symbol whose address should be returned.

Returns:
- int: The address of the specified symbol in the specified ELF file.
Source code inlibdebug/utils/elf_utils.py @functools.cache\ndef resolve_symbol(path: str, symbol: str) -> int:\n \"\"\"Returns the address of the specified symbol in the specified ELF file.\n\n Args:\n path (str): The path to the ELF file.\n symbol (str): The symbol whose address should be returned.\n\n Returns:\n int: The address of the specified symbol in the specified ELF file.\n \"\"\"\n if libcontext.sym_lvl == 0:\n raise Exception(\n \"Symbol resolution is disabled. Please enable it by setting the sym_lvl libcontext parameter to a value greater than 0.\",\n )\n\n # Retrieve the symbols from the SymbolTableSection\n symbols, buildid, debug_file = _parse_elf_file(path, libcontext.sym_lvl)\n symbols = [sym for sym in symbols if sym.name == symbol]\n if symbols:\n return symbols[0].start\n\n # Retrieve the symbols from the external debuginfo file\n if buildid and debug_file and libcontext.sym_lvl > 2:\n folder = buildid[:2]\n absolute_debug_path_str = str((LOCAL_DEBUG_PATH / folder / debug_file).resolve())\n symbols = _collect_external_info(absolute_debug_path_str)\n symbols = [sym for sym in symbols if sym.name == symbol]\n if symbols:\n return symbols[0].start\n\n # Retrieve the symbols from debuginfod\n if buildid and libcontext.sym_lvl > 4:\n absolute_debug_path = _debuginfod(buildid)\n if absolute_debug_path.exists():\n symbols = _collect_external_info(str(absolute_debug_path))\n symbols = [sym for sym in symbols if sym.name == symbol]\n if symbols:\n return symbols[0].start\n\n # Symbol not found\n raise ValueError(f\"Symbol {symbol} not found in {path}. Please specify a valid symbol.\")\n"},{"location":"from_pydoc/generated/utils/file_utils/","title":"libdebug.utils.file_utils","text":""},{"location":"from_pydoc/generated/utils/file_utils/#libdebug.utils.file_utils.ensure_file_executable","title":"ensure_file_executable(path) cached","text":"Ensures that a file exists and is executable.
Parameters:
- path (str, required): The path to the file.

Throws:
- FileNotFoundError: If the file does not exist or is not a regular file.
- PermissionError: If the file is not executable.
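The three checks map to distinct exceptions: existence, regular-file, and executability. A self-contained sketch mirroring them, exercised on a temporary file (helper name is illustrative):

```python
import os
import stat
import tempfile
from pathlib import Path

def ensure_executable(path: str) -> None:
    # Same checks as ensure_file_executable: exists, is a regular file,
    # and carries an execute permission bit.
    file = Path(path)
    if not file.exists():
        raise FileNotFoundError(f"File '{path}' does not exist.")
    if not file.is_file():
        raise FileNotFoundError(f"Path '{path}' is not a file.")
    if not os.access(file, os.X_OK):
        raise PermissionError(f"File '{path}' is not executable.")

with tempfile.NamedTemporaryFile(delete=False) as f:
    tmp = f.name
os.chmod(tmp, stat.S_IRWXU)
ensure_executable(tmp)  # passes: file exists and is executable
os.remove(tmp)
```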
Source code inlibdebug/utils/file_utils.py @functools.cache\ndef ensure_file_executable(path: str) -> None:\n \"\"\"Ensures that a file exists and is executable.\n\n Args:\n path (str): The path to the file.\n\n Throws:\n FileNotFoundError: If the file does not exist.\n PermissionError: If the file is not executable.\n \"\"\"\n file = Path(path)\n\n if not file.exists():\n raise FileNotFoundError(f\"File '{path}' does not exist.\")\n\n if not file.is_file():\n raise FileNotFoundError(f\"Path '{path}' is not a file.\")\n\n if not os.access(file, os.X_OK):\n raise PermissionError(f\"File '{path}' is not executable.\")\n"},{"location":"from_pydoc/generated/utils/gdb/","title":"libdebug.utils.gdb","text":""},{"location":"from_pydoc/generated/utils/gdb/#libdebug.utils.gdb.GoBack","title":"GoBack","text":" Bases: Command
This extension adds a new command to GDB that allows you to detach from the current process and quit GDB.
Source code inlibdebug/utils/gdb.py class GoBack(gdb.Command):\n \"\"\"This extension adds a new command to GDB that allows to detach from the current process and quit GDB.\"\"\"\n\n def __init__(self: GoBack) -> None:\n \"\"\"Initializes the GoBack command.\"\"\"\n super().__init__(\"goback\", gdb.COMMAND_OBSCURE, gdb.COMPLETE_NONE, True)\n\n def invoke(self: GoBack, _: ..., __: bool) -> None:\n \"\"\"Detaches and quits from GDB on invocation.\"\"\"\n gdb.execute(\"detach\")\n gdb.execute(\"quit\")\n"},{"location":"from_pydoc/generated/utils/gdb/#libdebug.utils.gdb.GoBack.__init__","title":"__init__()","text":"Initializes the GoBack command.
Source code inlibdebug/utils/gdb.py def __init__(self: GoBack) -> None:\n \"\"\"Initializes the GoBack command.\"\"\"\n super().__init__(\"goback\", gdb.COMMAND_OBSCURE, gdb.COMPLETE_NONE, True)\n"},{"location":"from_pydoc/generated/utils/gdb/#libdebug.utils.gdb.GoBack.invoke","title":"invoke(_, __)","text":"Detaches and quits from GDB on invocation.
Source code inlibdebug/utils/gdb.py def invoke(self: GoBack, _: ..., __: bool) -> None:\n \"\"\"Detaches and quits from GDB on invocation.\"\"\"\n gdb.execute(\"detach\")\n gdb.execute(\"quit\")\n"},{"location":"from_pydoc/generated/utils/libcontext/","title":"libdebug.utils.libcontext","text":""},{"location":"from_pydoc/generated/utils/libcontext/#libdebug.utils.libcontext.LibContext","title":"LibContext","text":"A class that holds the global context of the library.
Source code inlibdebug/utils/libcontext.py class LibContext:\n \"\"\"A class that holds the global context of the library.\"\"\"\n\n _instance = None\n _pipe_logger_levels: list[str]\n _debugger_logger_levels: list[str]\n _general_logger_levels: list[str]\n _debuginfod_server: str\n\n def __new__(cls: type):\n \"\"\"Create a new instance of the class if it does not exist yet.\n\n Returns:\n LibContext: the instance of the class.\n \"\"\"\n if cls._instance is None:\n cls._instance = super().__new__(cls)\n cls._instance._initialized = False\n return cls._instance\n\n def __init__(self: LibContext) -> None:\n \"\"\"Initializes the context.\"\"\"\n if self._initialized:\n return\n\n self._pipe_logger_levels = [\"DEBUG\", \"SILENT\"]\n self._debugger_logger_levels = [\"DEBUG\", \"SILENT\"]\n self._general_logger_levels = [\"DEBUG\", \"INFO\", \"WARNING\", \"SILENT\"]\n self._sym_lvl = 5\n\n self._debugger_logger = \"SILENT\"\n self._pipe_logger = \"SILENT\"\n self._general_logger = \"INFO\"\n\n self._debuginfod_server = \"https://debuginfod.elfutils.org/\"\n\n # Adjust log levels based on command-line arguments\n if len(sys.argv) > 1:\n if \"debugger\" in sys.argv:\n liblog.debugger_logger.setLevel(\"DEBUG\")\n self._debugger_logger = \"DEBUG\"\n elif \"pipe\" in sys.argv:\n liblog.pipe_logger.setLevel(\"DEBUG\")\n self._pipe_logger = \"DEBUG\"\n elif \"dbg\" in sys.argv:\n self._set_debug_level_for_all()\n self._debugger_logger = \"DEBUG\"\n self._pipe_logger = \"DEBUG\"\n self._general_logger = \"DEBUG\"\n self._initialized = True\n\n self._terminal = []\n\n def _set_debug_level_for_all(self: LibContext) -> None:\n \"\"\"Set the debug level for all the loggers to DEBUG.\"\"\"\n for logger in [\n liblog.general_logger,\n liblog.debugger_logger,\n liblog.pipe_logger,\n ]:\n logger.setLevel(\"DEBUG\")\n\n @property\n def sym_lvl(self: LibContext) -> int:\n \"\"\"Property getter for sym_lvl.\n\n Returns:\n _sym_lvl (int): the current symbol level.\n \"\"\"\n return 
self._sym_lvl\n\n @sym_lvl.setter\n def sym_lvl(self: LibContext, value: int) -> None:\n \"\"\"Property setter for sym_lvl, ensuring it's between 0 and 5.\"\"\"\n if 0 <= value <= 5:\n self._sym_lvl = value\n else:\n raise ValueError(\"sym_lvl must be between 0 and 5\")\n\n @property\n def debugger_logger(self: LibContext) -> str:\n \"\"\"Property getter for debugger_logger.\n\n Returns:\n _debugger_logger (str): the current debugger logger level.\n \"\"\"\n return self._debugger_logger\n\n @debugger_logger.setter\n def debugger_logger(self: LibContext, value: str) -> None:\n \"\"\"Property setter for debugger_logger, ensuring it's a supported logging level.\"\"\"\n if value in self._debugger_logger_levels:\n self._debugger_logger = value\n liblog.debugger_logger.setLevel(value)\n else:\n raise ValueError(\n f\"debugger_logger must be a supported logging level. The supported levels are: {self._debugger_logger_levels}\",\n )\n\n @property\n def pipe_logger(self: LibContext) -> str:\n \"\"\"Property getter for pipe_logger.\n\n Returns:\n _pipe_logger (str): the current pipe logger level.\n \"\"\"\n return self._pipe_logger\n\n @pipe_logger.setter\n def pipe_logger(self: LibContext, value: str) -> None:\n \"\"\"Property setter for pipe_logger, ensuring it's a supported logging level.\"\"\"\n if value in self._pipe_logger_levels:\n self._pipe_logger = value\n liblog.pipe_logger.setLevel(value)\n else:\n raise ValueError(\n f\"pipe_logger must be a supported logging level. 
The supported levels are: {self._pipe_logger_levels}\",\n )\n\n @property\n def general_logger(self: LibContext) -> str:\n \"\"\"Property getter for general_logger.\n\n Returns:\n _general_logger (str): the current general logger level.\n \"\"\"\n return self._general_logger\n\n @general_logger.setter\n def general_logger(self: LibContext, value: str) -> None:\n \"\"\"Property setter for general_logger, ensuring it's a supported logging level.\"\"\"\n if value in self._general_logger_levels:\n self._general_logger = value\n liblog.general_logger.setLevel(value)\n else:\n raise ValueError(\n f\"general_logger must be a supported logging level. The supported levels are: {self._general_logger_levels}\",\n )\n\n @property\n def platform(self: LibContext) -> str:\n \"\"\"Return the current platform.\"\"\"\n return map_arch(platform.machine())\n\n @property\n def terminal(self: LibContext) -> list[str]:\n \"\"\"Property getter for terminal.\n\n Returns:\n _terminal (str): the current terminal.\n \"\"\"\n return self._terminal\n\n @terminal.setter\n def terminal(self: LibContext, value: list[str] | str) -> None:\n \"\"\"Property setter for terminal, ensuring it's a valid terminal.\"\"\"\n if isinstance(value, str):\n value = [value]\n\n self._terminal = value\n\n @property\n def debuginfod_server(self: LibContext) -> str:\n \"\"\"Property getter for debuginfod_server.\n\n Returns:\n _debuginfod_server (str): the current debuginfod server.\n \"\"\"\n return self._debuginfod_server\n\n @debuginfod_server.setter\n def debuginfod_server(self: LibContext, value: str) -> None:\n \"\"\"Property setter for debuginfod_server, ensuring it's a valid URL.\"\"\"\n if type(value) is not str or (not value.startswith(\"http://\") and not value.startswith(\"https://\")):\n raise ValueError(\n \"debuginfod_server must be a valid string URL in the format 'http://<server>' or 'https://<server>'\",\n )\n self._debuginfod_server = value\n\n def update(self: LibContext, **kwargs: ...) 
-> None:\n \"\"\"Update the context with the given values.\"\"\"\n for key, value in kwargs.items():\n if hasattr(self, key):\n setattr(self, key, value)\n\n @contextmanager\n def tmp(self: LibContext, **kwargs: ...) -> ...:\n \"\"\"Context manager that temporarily changes the library context. Use \"with\" statement.\"\"\"\n # Make a deep copy of the current state\n old_context = deepcopy(self.__dict__)\n self.update(**kwargs)\n try:\n yield\n finally:\n # Restore the original state\n self.__dict__.update(old_context)\n liblog.debugger_logger.setLevel(self.debugger_logger)\n liblog.pipe_logger.setLevel(self.pipe_logger)\n"},{"location":"from_pydoc/generated/utils/libcontext/#libdebug.utils.libcontext.LibContext.debugger_logger","title":"debugger_logger property writable","text":"Property getter for debugger_logger.
Returns:
- _debugger_logger (str): the current debugger logger level.
"},{"location":"from_pydoc/generated/utils/libcontext/#libdebug.utils.libcontext.LibContext.debuginfod_server","title":"debuginfod_server property writable","text":"Property getter for debuginfod_server.
Returns:
- _debuginfod_server (str): the current debuginfod server.
"},{"location":"from_pydoc/generated/utils/libcontext/#libdebug.utils.libcontext.LibContext.general_logger","title":"general_logger property writable","text":"Property getter for general_logger.
Returns:
- _general_logger (str): the current general logger level.
"},{"location":"from_pydoc/generated/utils/libcontext/#libdebug.utils.libcontext.LibContext.pipe_logger","title":"pipe_logger property writable","text":"Property getter for pipe_logger.
Returns:
- _pipe_logger (str): the current pipe logger level.
"},{"location":"from_pydoc/generated/utils/libcontext/#libdebug.utils.libcontext.LibContext.platform","title":"platform property","text":"Return the current platform.
"},{"location":"from_pydoc/generated/utils/libcontext/#libdebug.utils.libcontext.LibContext.sym_lvl","title":"sym_lvl property writable","text":"Property getter for sym_lvl.
Returns:
- _sym_lvl (int): the current symbol level.
"},{"location":"from_pydoc/generated/utils/libcontext/#libdebug.utils.libcontext.LibContext.terminal","title":"terminal property writable","text":"Property getter for terminal.
Returns:
- _terminal (list[str]): the current terminal.
"},{"location":"from_pydoc/generated/utils/libcontext/#libdebug.utils.libcontext.LibContext.__init__","title":"__init__()","text":"Initializes the context.
Source code inlibdebug/utils/libcontext.py def __init__(self: LibContext) -> None:\n \"\"\"Initializes the context.\"\"\"\n if self._initialized:\n return\n\n self._pipe_logger_levels = [\"DEBUG\", \"SILENT\"]\n self._debugger_logger_levels = [\"DEBUG\", \"SILENT\"]\n self._general_logger_levels = [\"DEBUG\", \"INFO\", \"WARNING\", \"SILENT\"]\n self._sym_lvl = 5\n\n self._debugger_logger = \"SILENT\"\n self._pipe_logger = \"SILENT\"\n self._general_logger = \"INFO\"\n\n self._debuginfod_server = \"https://debuginfod.elfutils.org/\"\n\n # Adjust log levels based on command-line arguments\n if len(sys.argv) > 1:\n if \"debugger\" in sys.argv:\n liblog.debugger_logger.setLevel(\"DEBUG\")\n self._debugger_logger = \"DEBUG\"\n elif \"pipe\" in sys.argv:\n liblog.pipe_logger.setLevel(\"DEBUG\")\n self._pipe_logger = \"DEBUG\"\n elif \"dbg\" in sys.argv:\n self._set_debug_level_for_all()\n self._debugger_logger = \"DEBUG\"\n self._pipe_logger = \"DEBUG\"\n self._general_logger = \"DEBUG\"\n self._initialized = True\n\n self._terminal = []\n"},{"location":"from_pydoc/generated/utils/libcontext/#libdebug.utils.libcontext.LibContext.__new__","title":"__new__()","text":"Create a new instance of the class if it does not exist yet.
Returns:
Name Type Description LibContext the instance of the class.
Source code inlibdebug/utils/libcontext.py def __new__(cls: type):\n \"\"\"Create a new instance of the class if it does not exist yet.\n\n Returns:\n LibContext: the instance of the class.\n \"\"\"\n if cls._instance is None:\n cls._instance = super().__new__(cls)\n cls._instance._initialized = False\n return cls._instance\n"},{"location":"from_pydoc/generated/utils/libcontext/#libdebug.utils.libcontext.LibContext._set_debug_level_for_all","title":"_set_debug_level_for_all()","text":"Set the debug level for all the loggers to DEBUG.
Source code inlibdebug/utils/libcontext.py def _set_debug_level_for_all(self: LibContext) -> None:\n \"\"\"Set the debug level for all the loggers to DEBUG.\"\"\"\n for logger in [\n liblog.general_logger,\n liblog.debugger_logger,\n liblog.pipe_logger,\n ]:\n logger.setLevel(\"DEBUG\")\n"},{"location":"from_pydoc/generated/utils/libcontext/#libdebug.utils.libcontext.LibContext.tmp","title":"tmp(**kwargs)","text":"Context manager that temporarily changes the library context. Use \"with\" statement.
Source code inlibdebug/utils/libcontext.py @contextmanager\ndef tmp(self: LibContext, **kwargs: ...) -> ...:\n \"\"\"Context manager that temporarily changes the library context. Use \"with\" statement.\"\"\"\n # Make a deep copy of the current state\n old_context = deepcopy(self.__dict__)\n self.update(**kwargs)\n try:\n yield\n finally:\n # Restore the original state\n self.__dict__.update(old_context)\n liblog.debugger_logger.setLevel(self.debugger_logger)\n liblog.pipe_logger.setLevel(self.pipe_logger)\n"},{"location":"from_pydoc/generated/utils/libcontext/#libdebug.utils.libcontext.LibContext.update","title":"update(**kwargs)","text":"Update the context with the given values.
Source code inlibdebug/utils/libcontext.py def update(self: LibContext, **kwargs: ...) -> None:\n \"\"\"Update the context with the given values.\"\"\"\n for key, value in kwargs.items():\n if hasattr(self, key):\n setattr(self, key, value)\n"},{"location":"from_pydoc/generated/utils/platform_utils/","title":"libdebug.utils.platform_utils","text":""},{"location":"from_pydoc/generated/utils/platform_utils/#libdebug.utils.platform_utils.get_platform_gp_register_size","title":"get_platform_gp_register_size(arch)","text":"Get the ptr size of the platform.
Parameters:
Name Type Description Default arch str The architecture of the platform.
required Returns:
Name Type Description int int The ptr size in bytes.
Source code inlibdebug/utils/platform_utils.py def get_platform_gp_register_size(arch: str) -> int:\n \"\"\"Get the ptr size of the platform.\n\n Args:\n arch (str): The architecture of the platform.\n\n Returns:\n int: The ptr size in bytes.\n \"\"\"\n match arch:\n case \"amd64\":\n return 8\n case \"aarch64\":\n return 8\n case \"i386\":\n return 4\n case _:\n raise ValueError(f\"Architecture {arch} not supported.\")\n"},{"location":"from_pydoc/generated/utils/posix_spawn/","title":"libdebug.utils.posix_spawn","text":""},{"location":"from_pydoc/generated/utils/posix_spawn/#libdebug.utils.posix_spawn.posix_spawn","title":"posix_spawn(file, argv, env, file_actions, setpgroup)","text":"Spawn a new process, emulating the POSIX spawn function.
Source code inlibdebug/utils/posix_spawn.py def posix_spawn(file: str, argv: list, env: dict, file_actions: list, setpgroup: bool) -> int:\n \"\"\"Spawn a new process, emulating the POSIX spawn function.\"\"\"\n child_pid = os.fork()\n if child_pid == 0:\n for element in file_actions:\n if element[0] == POSIX_SPAWN_CLOSE:\n os.close(element[1])\n elif element[0] == POSIX_SPAWN_DUP2:\n os.dup2(element[1], element[2])\n elif element[0] == POSIX_SPAWN_OPEN:\n fd, path, flags, mode = element[1:]\n os.dup2(os.open(path, flags, mode), fd)\n else:\n raise ValueError(\"Invalid file action\")\n if setpgroup == 0:\n os.setpgid(0, 0)\n os.execve(file, argv, env)\n\n return child_pid\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/","title":"libdebug.utils.pprint_primitives","text":""},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.get_colored_saved_address_util","title":"get_colored_saved_address_util(return_address, maps, external_symbols=None)","text":"Pretty prints a return address for backtrace pprint.
Source code inlibdebug/utils/pprint_primitives.py def get_colored_saved_address_util(\n return_address: int,\n maps: MemoryMapList | MemoryMapSnapshotList,\n external_symbols: SymbolList = None,\n) -> str:\n \"\"\"Pretty prints a return address for backtrace pprint.\"\"\"\n filtered_maps = maps.filter(return_address)\n\n return_address_symbol = resolve_symbol_name_in_maps_util(return_address, external_symbols)\n\n permissions = filtered_maps[0].permissions\n if \"rwx\" in permissions:\n style = f\"{ANSIColors.UNDERLINE}{ANSIColors.RED}\"\n elif \"x\" in permissions:\n style = f\"{ANSIColors.RED}\"\n elif \"w\" in permissions:\n # This should not happen, but it's here for completeness\n style = f\"{ANSIColors.YELLOW}\"\n elif \"r\" in permissions:\n # This should not happen, but it's here for completeness\n style = f\"{ANSIColors.GREEN}\"\n if return_address_symbol[:2] == \"0x\":\n return f\"{style}{return_address:#x} {ANSIColors.RESET}\"\n else:\n return f\"{style}{return_address:#x} <{return_address_symbol}> {ANSIColors.RESET}\"\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pad_colored_string","title":"pad_colored_string(string, length)","text":"Pads a colored string with spaces to the specified length.
Parameters:
Name Type Description Default string str The string to pad.
required length int The desired length of the string.
required Returns:
Name Type Description str str The padded string.
Source code inlibdebug/utils/pprint_primitives.py def pad_colored_string(string: str, length: int) -> str:\n \"\"\"Pads a colored string with spaces to the specified length.\n\n Args:\n string (str): The string to pad.\n length (int): The desired length of the string.\n\n Returns:\n str: The padded string.\n \"\"\"\n stripped_string = strip_ansi_codes(string)\n padding_length = length - len(stripped_string)\n if padding_length > 0:\n return string + \" \" * padding_length\n return string\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pprint_backtrace_util","title":"pprint_backtrace_util(backtrace, maps, external_symbols=None)","text":"Pretty prints the current backtrace of the thread.
Source code inlibdebug/utils/pprint_primitives.py def pprint_backtrace_util(\n backtrace: list,\n maps: MemoryMapList | MemoryMapSnapshotList,\n external_symbols: SymbolList = None,\n) -> None:\n \"\"\"Pretty prints the current backtrace of the thread.\"\"\"\n for return_address in backtrace:\n print(get_colored_saved_address_util(return_address, maps, external_symbols))\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pprint_diff_line","title":"pprint_diff_line(content, is_added)","text":"Prints a line of a diff.
Source code inlibdebug/utils/pprint_primitives.py def pprint_diff_line(content: str, is_added: bool) -> None:\n \"\"\"Prints a line of a diff.\"\"\"\n color = ANSIColors.GREEN if is_added else ANSIColors.RED\n\n prefix = \">>>\" if is_added else \"<<<\"\n\n print(f\"{prefix} {color}{content}{ANSIColors.RESET}\")\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pprint_diff_substring","title":"pprint_diff_substring(content, start, end)","text":"Prints a diff with only a substring highlighted.
Source code inlibdebug/utils/pprint_primitives.py def pprint_diff_substring(content: str, start: int, end: int) -> None:\n \"\"\"Prints a diff with only a substring highlighted.\"\"\"\n color = ANSIColors.ORANGE\n\n print(f\"{content[:start]}{color}{content[start:end]}{ANSIColors.RESET}{content[end:]}\")\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pprint_inline_diff","title":"pprint_inline_diff(content, start, end, correction)","text":"Prints a diff with inline changes.
Source code inlibdebug/utils/pprint_primitives.py def pprint_inline_diff(content: str, start: int, end: int, correction: str) -> None:\n \"\"\"Prints a diff with inline changes.\"\"\"\n print(\n f\"{content[:start]}{ANSIColors.RED}{ANSIColors.STRIKE}{content[start:end]}{ANSIColors.RESET} {ANSIColors.GREEN}{correction}{ANSIColors.RESET}{content[end:]}\"\n )\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pprint_maps_util","title":"pprint_maps_util(maps)","text":"Prints the memory maps of the process.
Source code inlibdebug/utils/pprint_primitives.py def pprint_maps_util(maps: MemoryMapList | MemoryMapSnapshotList) -> None:\n \"\"\"Prints the memory maps of the process.\"\"\"\n header = f\"{'start':>18} {'end':>18} {'perm':>6} {'size':>8} {'offset':>8} {'backing_file':<20}\"\n print(header)\n for memory_map in maps:\n info = (\n f\"{memory_map.start:#18x} \"\n f\"{memory_map.end:#18x} \"\n f\"{memory_map.permissions:>6} \"\n f\"{memory_map.size:#8x} \"\n f\"{memory_map.offset:#8x} \"\n f\"{memory_map.backing_file}\"\n )\n if \"rwx\" in memory_map.permissions:\n print(f\"{ANSIColors.RED}{ANSIColors.UNDERLINE}{info}{ANSIColors.RESET}\")\n elif \"x\" in memory_map.permissions:\n print(f\"{ANSIColors.RED}{info}{ANSIColors.RESET}\")\n elif \"w\" in memory_map.permissions:\n print(f\"{ANSIColors.YELLOW}{info}{ANSIColors.RESET}\")\n elif \"r\" in memory_map.permissions:\n print(f\"{ANSIColors.GREEN}{info}{ANSIColors.RESET}\")\n else:\n print(info)\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pprint_memory_diff_util","title":"pprint_memory_diff_util(address_start, extract_before, extract_after, word_size, maps, integer_mode=False)","text":"Pretty prints the memory diff.
Source code inlibdebug/utils/pprint_primitives.py def pprint_memory_diff_util(\n    address_start: int,\n    extract_before: bytes,\n    extract_after: bytes,\n    word_size: int,\n    maps: MemoryMapSnapshotList,\n    integer_mode: bool = False,\n) -> None:\n    \"\"\"Pretty prints the memory diff.\"\"\"\n    # Loop through each word-sized chunk\n    for i in range(0, len(extract_before), word_size):\n        # Calculate the current address\n        current_address = address_start + i\n\n        # Extract word-sized chunks from both extracts\n        word_before = extract_before[i : i + word_size]\n        word_after = extract_after[i : i + word_size]\n\n        # Convert each byte in the chunks to hex and compare\n        formatted_before = []\n        formatted_after = []\n        for byte_before, byte_after in zip(word_before, word_after, strict=False):\n            # Check for changes and apply color\n            if byte_before != byte_after:\n                formatted_before.append(f\"{ANSIColors.RED}{byte_before:02x}{ANSIColors.RESET}\")\n                formatted_after.append(f\"{ANSIColors.GREEN}{byte_after:02x}{ANSIColors.RESET}\")\n            else:\n                formatted_before.append(f\"{ANSIColors.RESET}{byte_before:02x}{ANSIColors.RESET}\")\n                formatted_after.append(f\"{ANSIColors.RESET}{byte_after:02x}{ANSIColors.RESET}\")\n\n        # Join the formatted bytes into a string for each column\n        if not integer_mode:\n            before_str = \" \".join(formatted_before)\n            after_str = \" \".join(formatted_after)\n        else:\n            # Right now libdebug only considers little-endian systems, if this changes,\n            # this code should be passed the endianness of the system and format the bytes accordingly\n            before_str = \"0x\" + \"\".join(formatted_before[::-1])\n            after_str = \"0x\" + \"\".join(formatted_after[::-1])\n\n        current_address_str = _get_colored_address_string(current_address, maps)\n\n        # Print the memory diff with the address for this word\n        print(f\"{current_address_str}: {before_str} → {after_str}\")\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pprint_memory_util","title":"pprint_memory_util(address_start, extract, word_size, maps, integer_mode=False)","text":"Pretty prints the memory.
Source code inlibdebug/utils/pprint_primitives.py def pprint_memory_util(\n address_start: int,\n extract: bytes,\n word_size: int,\n maps: MemoryMapList,\n integer_mode: bool = False,\n) -> None:\n \"\"\"Pretty prints the memory.\"\"\"\n # Loop through each word-sized chunk\n for i in range(0, len(extract), word_size):\n # Calculate the current address\n current_address = address_start + i\n\n # Extract word-sized chunks from both extracts\n word = extract[i : i + word_size]\n\n # Convert each byte in the chunks to hex and compare\n formatted_word = [f\"{byte:02x}\" for byte in word]\n\n # Join the formatted bytes into a string for each column\n out = \" \".join(formatted_word) if not integer_mode else \"0x\" + \"\".join(formatted_word[::-1])\n\n current_address_str = _get_colored_address_string(current_address, maps)\n\n # Print the memory diff with the address for this word\n print(f\"{current_address_str}: {out}\")\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pprint_reg_diff_large_util","title":"pprint_reg_diff_large_util(curr_reg_tuple, reg_tuple_before, reg_tuple_after)","text":"Pretty prints a register diff.
Source code inlibdebug/utils/pprint_primitives.py def pprint_reg_diff_large_util(\n curr_reg_tuple: (str, str),\n reg_tuple_before: (int, int),\n reg_tuple_after: (int, int),\n) -> None:\n \"\"\"Pretty prints a register diff.\"\"\"\n print(f\"{ANSIColors.BLUE}\" + \"{\" + f\"{ANSIColors.RESET}\")\n for reg_name, value_before, value_after in zip(curr_reg_tuple, reg_tuple_before, reg_tuple_after, strict=False):\n has_changed = value_before != value_after\n\n # Print the old and new values\n if has_changed:\n formatted_value_before = (\n f\"{ANSIColors.RED}{ANSIColors.STRIKE}\"\n + (f\"{value_before:#x}\" if isinstance(value_before, int) else str(value_before))\n + f\"{ANSIColors.RESET}\"\n )\n\n formatted_value_after = (\n f\"{ANSIColors.GREEN}\"\n + (f\"{value_after:#x}\" if isinstance(value_after, int) else str(value_after))\n + f\"{ANSIColors.RESET}\"\n )\n\n print(\n f\" {ANSIColors.RED}{reg_name}{ANSIColors.RESET}\\t{formatted_value_before}\\t->\\t{formatted_value_after}\"\n )\n else:\n formatted_value = f\"{value_before:#x}\" if isinstance(value_before, int) else str(value_before)\n\n print(f\" {ANSIColors.RED}{reg_name}{ANSIColors.RESET}\\t{formatted_value}\")\n\n print(f\"{ANSIColors.BLUE}\" + \"}\" + f\"{ANSIColors.RESET}\")\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pprint_reg_diff_util","title":"pprint_reg_diff_util(curr_reg, maps_before, maps_after, before, after)","text":"Pretty prints a register diff.
Source code inlibdebug/utils/pprint_primitives.py def pprint_reg_diff_util(\n curr_reg: str,\n maps_before: MemoryMapList,\n maps_after: MemoryMapList,\n before: int,\n after: int,\n) -> None:\n \"\"\"Pretty prints a register diff.\"\"\"\n before_str = _get_colored_address_string(before, maps_before)\n after_str = _get_colored_address_string(after, maps_after)\n\n print(f\"{ANSIColors.RED}{curr_reg.ljust(12)}{ANSIColors.RESET}\\t{before_str}\\t{after_str}\")\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pprint_registers_all_util","title":"pprint_registers_all_util(registers, maps, gen_regs, spec_regs, vec_fp_regs)","text":"Pretty prints all the thread's registers.
Source code inlibdebug/utils/pprint_primitives.py def pprint_registers_all_util(\n registers: Registers,\n maps: MemoryMapList,\n gen_regs: list[str],\n spec_regs: list[str],\n vec_fp_regs: list[str],\n) -> None:\n \"\"\"Pretty prints all the thread's registers.\"\"\"\n pprint_registers_util(registers, maps, gen_regs)\n\n for t in spec_regs:\n _pprint_reg(registers, maps, t)\n\n for t in vec_fp_regs:\n print(f\"{ANSIColors.BLUE}\" + \"{\" + f\"{ANSIColors.RESET}\")\n for register in t:\n value = getattr(registers, register)\n formatted_value = f\"{value:#x}\" if isinstance(value, int) else str(value)\n print(f\" {ANSIColors.RED}{register}{ANSIColors.RESET}\\t{formatted_value}\")\n\n print(f\"{ANSIColors.BLUE}\" + \"}\" + f\"{ANSIColors.RESET}\")\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.pprint_registers_util","title":"pprint_registers_util(registers, maps, gen_regs)","text":"Pretty prints the thread's registers.
Source code inlibdebug/utils/pprint_primitives.py def pprint_registers_util(registers: Registers, maps: MemoryMapList, gen_regs: list[str]) -> None:\n \"\"\"Pretty prints the thread's registers.\"\"\"\n for curr_reg in gen_regs:\n _pprint_reg(registers, maps, curr_reg)\n"},{"location":"from_pydoc/generated/utils/pprint_primitives/#libdebug.utils.pprint_primitives.strip_ansi_codes","title":"strip_ansi_codes(string)","text":"Strips ANSI escape codes from a string.
Parameters:
Name Type Description Default string str The string to strip.
required Returns:
Name Type Description str str The string without the ANSI escape codes.
Source code inlibdebug/utils/pprint_primitives.py def strip_ansi_codes(string: str) -> str:\n \"\"\"Strips ANSI escape codes from a string.\n\n Args:\n string (str): The string to strip.\n\n Returns:\n str: The string without the ANSI escape codes.\n \"\"\"\n ansi_escape = re.compile(r\"\\x1B[@-_][0-?]*[ -/]*[@-~]\")\n return ansi_escape.sub(\"\", string)\n"},{"location":"from_pydoc/generated/utils/process_utils/","title":"libdebug.utils.process_utils","text":""},{"location":"from_pydoc/generated/utils/process_utils/#libdebug.utils.process_utils.disable_self_aslr","title":"disable_self_aslr()","text":"Disables ASLR for the current process.
Source code inlibdebug/utils/process_utils.py def disable_self_aslr() -> None:\n \"\"\"Disables ASLR for the current process.\"\"\"\n retval = libdebug_linux_binding.disable_aslr()\n\n if retval == -1:\n raise RuntimeError(\"Failed to disable ASLR.\")\n"},{"location":"from_pydoc/generated/utils/process_utils/#libdebug.utils.process_utils.get_open_fds","title":"get_open_fds(process_id) cached","text":"Returns the file descriptors of the specified process.
Parameters:
Name Type Description Default process_id int The PID of the process whose file descriptors should be returned.
required Returns:
Name Type Description list list[int] A list of integers, each representing a file descriptor of the specified process.
Source code inlibdebug/utils/process_utils.py @functools.cache\ndef get_open_fds(process_id: int) -> list[int]:\n \"\"\"Returns the file descriptors of the specified process.\n\n Args:\n process_id (int): The PID of the process whose file descriptors should be returned.\n\n Returns:\n list: A list of integers, each representing a file descriptor of the specified process.\n \"\"\"\n return [int(fd) for fd in os.listdir(f\"/proc/{process_id}/fd\")]\n"},{"location":"from_pydoc/generated/utils/process_utils/#libdebug.utils.process_utils.get_process_maps","title":"get_process_maps(process_id) cached","text":"Returns the memory maps of the specified process.
Parameters:
Name Type Description Default process_id int The PID of the process whose memory maps should be returned.
required Returns:
Name Type Description list MemoryMapList[MemoryMap] A list of MemoryMap objects, each representing a memory map of the specified process.
Source code inlibdebug/utils/process_utils.py @functools.cache\ndef get_process_maps(process_id: int) -> MemoryMapList[MemoryMap]:\n    \"\"\"Returns the memory maps of the specified process.\n\n    Args:\n        process_id (int): The PID of the process whose memory maps should be returned.\n\n    Returns:\n        list: A list of `MemoryMap` objects, each representing a memory map of the specified process.\n    \"\"\"\n    with Path(f\"/proc/{process_id}/maps\").open() as maps_file:\n        maps = maps_file.readlines()\n\n    return MemoryMapList([MemoryMap.parse(vmap) for vmap in maps])\n"},{"location":"from_pydoc/generated/utils/process_utils/#libdebug.utils.process_utils.get_process_tasks","title":"get_process_tasks(process_id)","text":"Returns the tasks of the specified process.
Parameters:
Name Type Description Default process_id int The PID of the process whose tasks should be returned.
required Returns:
Name Type Description list list[int] A list of integers, each representing a task of the specified process.
Source code inlibdebug/utils/process_utils.py def get_process_tasks(process_id: int) -> list[int]:\n \"\"\"Returns the tasks of the specified process.\n\n Args:\n process_id (int): The PID of the process whose tasks should be returned.\n\n Returns:\n list: A list of integers, each representing a task of the specified process.\n \"\"\"\n tids = []\n if Path(f\"/proc/{process_id}/task\").exists():\n tids = [int(task) for task in os.listdir(f\"/proc/{process_id}/task\")]\n return tids\n"},{"location":"from_pydoc/generated/utils/process_utils/#libdebug.utils.process_utils.invalidate_process_cache","title":"invalidate_process_cache()","text":"Invalidates the cache of the functions in this module. Must be executed any time the process executes code.
Source code inlibdebug/utils/process_utils.py def invalidate_process_cache() -> None:\n \"\"\"Invalidates the cache of the functions in this module. Must be executed any time the process executes code.\"\"\"\n get_process_maps.cache_clear()\n get_open_fds.cache_clear()\n"},{"location":"from_pydoc/generated/utils/search_utils/","title":"libdebug.utils.search_utils","text":""},{"location":"from_pydoc/generated/utils/search_utils/#libdebug.utils.search_utils.find_all_overlapping_occurrences","title":"find_all_overlapping_occurrences(pattern, data, abs_address=0)","text":"Find all overlapping occurrences of a pattern in a data.
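Because the search restarts one byte past each hit, overlapping matches are all reported. The following standalone copy of the function demonstrates the behavior on a small input:

```python
def find_all_overlapping_occurrences(pattern: bytes, data: bytes, abs_address: int = 0) -> list[int]:
    """Find all overlapping occurrences of a pattern in data (standalone copy)."""
    start = 0
    occurrences = []
    while True:
        start = data.find(pattern, start)
        if start == -1:
            break  # no more occurrences
        occurrences.append(start + abs_address)
        start += 1  # advance by one byte so overlapping matches are found
    return occurrences

# b"aa" occurs at offsets 0, 1 and 2 inside b"aaaa"; abs_address rebases the results
print(find_all_overlapping_occurrences(b"aa", b"aaaa", abs_address=0x1000))  # → [4096, 4097, 4098]
```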
Source code inlibdebug/utils/search_utils.py def find_all_overlapping_occurrences(pattern: bytes, data: bytes, abs_address: int = 0) -> list[int]:\n \"\"\"Find all overlapping occurrences of a pattern in a data.\"\"\"\n start = 0\n occurrences = []\n while True:\n start = data.find(pattern, start)\n if start == -1:\n # No more occurrences\n break\n occurrences.append(start + abs_address)\n # Increment start to find overlapping matches\n start += 1\n return occurrences\n"},{"location":"from_pydoc/generated/utils/signal_utils/","title":"libdebug.utils.signal_utils","text":""},{"location":"from_pydoc/generated/utils/signal_utils/#libdebug.utils.signal_utils.create_signal_mappings","title":"create_signal_mappings() cached","text":"Create mappings between signal names and numbers.
Source code inlibdebug/utils/signal_utils.py @functools.cache\ndef create_signal_mappings() -> tuple[dict, dict]:\n \"\"\"Create mappings between signal names and numbers.\"\"\"\n signal_to_number = {}\n number_to_signal = {}\n\n for name in dir(signal):\n if name.startswith(\"SIG\") and not name.startswith(\"SIG_\"):\n number = getattr(signal, name)\n signal_to_number[name] = number\n number_to_signal[number] = name\n\n # RT signals have a different convention\n for i in range(1, signal.SIGRTMAX - signal.SIGRTMIN):\n name = f\"SIGRTMIN+{i}\"\n number = signal.SIGRTMIN + i\n signal_to_number[name] = number\n number_to_signal[number] = name\n\n return signal_to_number, number_to_signal\n"},{"location":"from_pydoc/generated/utils/signal_utils/#libdebug.utils.signal_utils.get_all_signal_numbers","title":"get_all_signal_numbers() cached","text":"Get all the signal numbers.
Returns:
Name Type Description list list the list of signal numbers.
Source code inlibdebug/utils/signal_utils.py @functools.cache\ndef get_all_signal_numbers() -> list:\n \"\"\"Get all the signal numbers.\n\n Returns:\n list: the list of signal numbers.\n \"\"\"\n _, number_to_signal = create_signal_mappings()\n\n return list(number_to_signal.keys())\n"},{"location":"from_pydoc/generated/utils/signal_utils/#libdebug.utils.signal_utils.resolve_signal_name","title":"resolve_signal_name(number) cached","text":"Resolve a signal number to its name.
Parameters:
Name Type Description Default number int the signal number.
required Returns:
Name Type Description str str the signal name.
Source code inlibdebug/utils/signal_utils.py @functools.cache\ndef resolve_signal_name(number: int) -> str:\n \"\"\"Resolve a signal number to its name.\n\n Args:\n number (int): the signal number.\n\n Returns:\n str: the signal name.\n \"\"\"\n if number == -1:\n return \"ALL\"\n\n _, number_to_signal = create_signal_mappings()\n\n try:\n return number_to_signal[number]\n except KeyError as e:\n raise ValueError(f\"Signal {number} not found.\") from e\n"},{"location":"from_pydoc/generated/utils/signal_utils/#libdebug.utils.signal_utils.resolve_signal_number","title":"resolve_signal_number(name) cached","text":"Resolve a signal name to its number.
Parameters:
Name Type Description Default name str the signal name.
required Returns:
Name Type Description int int the signal number.
Source code inlibdebug/utils/signal_utils.py @functools.cache\ndef resolve_signal_number(name: str) -> int:\n \"\"\"Resolve a signal name to its number.\n\n Args:\n name (str): the signal name.\n\n Returns:\n int: the signal number.\n \"\"\"\n if name in [\"ALL\", \"all\", \"*\", \"pkm\"]:\n return -1\n\n signal_to_number, _ = create_signal_mappings()\n\n try:\n return signal_to_number[name]\n except KeyError as e:\n raise ValueError(f\"Signal {name} not found.\") from e\n"},{"location":"from_pydoc/generated/utils/syscall_utils/","title":"libdebug.utils.syscall_utils","text":""},{"location":"from_pydoc/generated/utils/syscall_utils/#libdebug.utils.syscall_utils.fetch_remote_syscall_definition","title":"fetch_remote_syscall_definition(arch)","text":"Fetch the syscall definition file from the remote server.
Source code inlibdebug/utils/syscall_utils.py def fetch_remote_syscall_definition(arch: str) -> dict:\n \"\"\"Fetch the syscall definition file from the remote server.\"\"\"\n url = get_remote_definition_url(arch)\n\n response = requests.get(url, timeout=1)\n response.raise_for_status()\n\n # Save the response to a local file\n with Path(f\"{LOCAL_FOLDER_PATH}/{arch}.json\").open(\"w\") as f:\n f.write(response.text)\n\n return response.json()\n"},{"location":"from_pydoc/generated/utils/syscall_utils/#libdebug.utils.syscall_utils.get_all_syscall_numbers","title":"get_all_syscall_numbers(architecture) cached","text":"Retrieves all the syscall numbers.
Source code inlibdebug/utils/syscall_utils.py @functools.cache\ndef get_all_syscall_numbers(architecture: str) -> list[int]:\n \"\"\"Retrieves all the syscall numbers.\"\"\"\n definitions = get_syscall_definitions(architecture)\n\n return [syscall[\"number\"] for syscall in definitions[\"syscalls\"]]\n"},{"location":"from_pydoc/generated/utils/syscall_utils/#libdebug.utils.syscall_utils.get_remote_definition_url","title":"get_remote_definition_url(arch)","text":"Get the URL of the remote syscall definition file.
Source code inlibdebug/utils/syscall_utils.py def get_remote_definition_url(arch: str) -> str:\n \"\"\"Get the URL of the remote syscall definition file.\"\"\"\n match arch:\n case \"amd64\":\n return f\"{SYSCALLS_REMOTE}/x86/64/x64/latest/table.json\"\n case \"aarch64\":\n return f\"{SYSCALLS_REMOTE}/arm64/64/aarch64/latest/table.json\"\n case \"i386\":\n return f\"{SYSCALLS_REMOTE}/x86/32/ia32/latest/table.json\"\n case _:\n raise ValueError(f\"Architecture {arch} not supported\")\n"},{"location":"from_pydoc/generated/utils/syscall_utils/#libdebug.utils.syscall_utils.get_syscall_definitions","title":"get_syscall_definitions(arch) cached","text":"Get the syscall definitions for the specified architecture.
Source code inlibdebug/utils/syscall_utils.py @functools.cache\ndef get_syscall_definitions(arch: str) -> dict:\n \"\"\"Get the syscall definitions for the specified architecture.\"\"\"\n LOCAL_FOLDER_PATH.mkdir(parents=True, exist_ok=True)\n\n if (LOCAL_FOLDER_PATH / f\"{arch}.json\").exists():\n try:\n with (LOCAL_FOLDER_PATH / f\"{arch}.json\").open() as f:\n return json.load(f)\n except json.decoder.JSONDecodeError:\n pass\n\n return fetch_remote_syscall_definition(arch)\n"},{"location":"from_pydoc/generated/utils/syscall_utils/#libdebug.utils.syscall_utils.resolve_syscall_arguments","title":"resolve_syscall_arguments(architecture, number) cached","text":"Resolve a syscall number to its argument definition.
Source code inlibdebug/utils/syscall_utils.py @functools.cache\ndef resolve_syscall_arguments(architecture: str, number: int) -> list[str]:\n \"\"\"Resolve a syscall number to its argument definition.\"\"\"\n definitions = get_syscall_definitions(architecture)\n\n for syscall in definitions[\"syscalls\"]:\n if syscall[\"number\"] == number:\n return syscall[\"signature\"]\n\n raise ValueError(f'Syscall number \"{number}\" not found')\n"},{"location":"from_pydoc/generated/utils/syscall_utils/#libdebug.utils.syscall_utils.resolve_syscall_name","title":"resolve_syscall_name(architecture, number) cached","text":"Resolve a syscall number to its name.
Source code inlibdebug/utils/syscall_utils.py @functools.cache\ndef resolve_syscall_name(architecture: str, number: int) -> str:\n \"\"\"Resolve a syscall number to its name.\"\"\"\n definitions = get_syscall_definitions(architecture)\n\n if number == -1:\n return \"all\"\n\n for syscall in definitions[\"syscalls\"]:\n if syscall[\"number\"] == number:\n return syscall[\"name\"]\n\n raise ValueError(f'Syscall number \"{number}\" not found')\n"},{"location":"from_pydoc/generated/utils/syscall_utils/#libdebug.utils.syscall_utils.resolve_syscall_number","title":"resolve_syscall_number(architecture, name) cached","text":"Resolve a syscall name to its number.
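The resolution is a linear scan over the cached syscall definitions. As a standalone sketch, the lookup below takes the definitions dict directly instead of an architecture name, and runs against a tiny hand-written table (the numbers happen to be the usual amd64 values, used purely for illustration — the real table is fetched as shown in fetch_remote_syscall_definition):

```python
def lookup_syscall_number(definitions: dict, name: str) -> int:
    """Sketch of resolve_syscall_number: linear scan over a definitions table."""
    if name in ["all", "*", "ALL", "pkm"]:
        return -1  # sentinel meaning "every syscall"
    for syscall in definitions["syscalls"]:
        if syscall["name"] == name:
            return syscall["number"]
    raise ValueError(f'Syscall "{name}" not found')

# Hand-written stand-in for the fetched table.json
definitions = {
    "syscalls": [
        {"name": "read", "number": 0},
        {"name": "write", "number": 1},
        {"name": "openat", "number": 257},
    ]
}
print(lookup_syscall_number(definitions, "openat"))  # → 257
```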
Source code inlibdebug/utils/syscall_utils.py @functools.cache\ndef resolve_syscall_number(architecture: str, name: str) -> int:\n \"\"\"Resolve a syscall name to its number.\"\"\"\n definitions = get_syscall_definitions(architecture)\n\n if name in [\"all\", \"*\", \"ALL\", \"pkm\"]:\n return -1\n\n for syscall in definitions[\"syscalls\"]:\n if syscall[\"name\"] == name:\n return syscall[\"number\"]\n\n raise ValueError(f'Syscall \"{name}\" not found')\n"},{"location":"logging/liblog/","title":"Logging","text":"Debugging an application with the freedom of a rich API can lead to flows which are hard to unravel. To aid the user in the debugging process, libdebug provides logging. The logging system is implemented in the submodule liblog and adheres to the Python logging system.
By default, libdebug prints only important logs, such as warnings and errors. However, the user can enable more verbose logging by passing specific arguments to the script through argv.
The available logging modes for events are:
Mode Description debugger Logs related to the debugging operations performed on the process by libdebug. pipe Logs related to interactions with the process pipe: bytes received and bytes sent. dbg Combination of the pipe and debugger options. pwntools compatibility
As reported in this documentation, the argv parameters passed to libdebug are lowercase. This choice is made to avoid conflicts with pwntools, which intercepts all uppercase arguments.
The debugger option displays all logs related to the debugging operations performed on the process by libdebug.
The pipe option, on the other hand, displays all logs related to interactions with the process pipe: bytes received and bytes sent.
The dbg option combines the two: it displays all logs related to the debugging operations performed on the process by libdebug, as well as all interactions with the process pipe (bytes received and bytes sent).
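Concretely, LibContext.__init__ scans sys.argv for these keywords. The sketch below mirrors that selection logic in a simplified, standalone form; the helper name is illustrative, and the real code additionally sets the liblog logger levels:

```python
# Default levels mirror LibContext.__init__: everything quiet except the general logger
DEFAULTS = {"debugger": "SILENT", "pipe": "SILENT", "general": "INFO"}

def pick_log_levels(argv: list[str]) -> dict[str, str]:
    """Simplified sketch of the argv-based log-level selection in LibContext.__init__."""
    levels = dict(DEFAULTS)
    if len(argv) > 1:
        if "debugger" in argv:
            levels["debugger"] = "DEBUG"
        elif "pipe" in argv:
            levels["pipe"] = "DEBUG"
        elif "dbg" in argv:
            # dbg combines both, and also raises the general logger
            levels = {"debugger": "DEBUG", "pipe": "DEBUG", "general": "DEBUG"}
    return levels

# Running `python script.py dbg` yields sys.argv == ["script.py", "dbg"]
print(pick_log_levels(["script.py", "dbg"]))
```

Note that, as in the real implementation's elif chain, the options are mutually exclusive: debugger takes precedence over pipe, which takes precedence over dbg.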
libdebug defines logging levels and information types that let the user control the granularity of the information they want to see. Logger levels for each event type can be changed at runtime using the libcontext module.
Example of setting logging levels
from libdebug import libcontext\n\nlibcontext.general_logger = 'DEBUG'\nlibcontext.pipe_logger = 'DEBUG'\nlibcontext.debugger_logger = 'DEBUG'\n Logger Description Supported Levels Default Level general_logger Logger used for general libdebug logs, different from the pipe and debugger logs. DEBUG, INFO, WARNING, SILENT INFO pipe_logger Logger used for pipe logs. DEBUG, SILENT SILENT debugger_logger Logger used for debugger logs. DEBUG, SILENT SILENT Let's see what each logging level actually logs:
At the DEBUG level, debug logs, information logs, and warnings are all printed; INFO prints information logs and warnings; WARNING prints warnings only; SILENT prints nothing.","boost":4},{"location":"logging/liblog/#temporary-logging-level-changes","title":"Temporary logging level changes","text":"Logger levels can be temporarily changed at runtime using a with statement, as shown in the following example.
from libdebug import libcontext\n\nwith libcontext.tmp(pipe_logger='SILENT', debugger_logger='DEBUG'):\n r.sendline(b'gimme the flag')\n","boost":4},{"location":"multithreading/multi-stuff/","title":"The Family of the Process","text":"Debugging is all fun and games until you have to deal with a process that spawns children.
So...how are children born? In the POSIX standard, children of a process can be either threads or processes. Threads share the same virtual address space, while processes have their own. POSIX-compliant systems such as Linux supply a variety of system calls to create children of both types.
flowchart TD\n P[Parent Process] -->|\"fork()\"| CP1[Child Process]\n P -->|\"clone()\"| T((Thread))\n P -->|\"vfork()\"| CP2[Child<br>Process]\n P -->|\"clone3()\"| T2((Thread))\n\n CP1 -->|\"fork()\"| GP[Grandchild<br>Process]\n T -->|\"clone()\"| ST((Sibling<br>Thread)) Example family tree of a process in the Linux kernel.","boost":4},{"location":"multithreading/multi-stuff/#processes","title":"Processes","text":"Child processes are created by system calls such as fork, vfork, clone, and clone3. The clone and clone3 system calls are configurable, as they allow the caller to specify the resources to be shared between the parent and child.
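The fork mechanics described above can be sketched in a few lines of Python. This is a minimal illustration using the standard os module (not libdebug itself): the parent receives the child's PID from fork(), while the child receives 0.

```python
import os

def spawn_child() -> int:
    """Minimal fork demo: the parent gets the child's PID, the child gets 0."""
    pid = os.fork()
    if pid == 0:
        # Child process: exit immediately with status 0.
        os._exit(0)
    # Parent process: reap the child so it does not linger as a zombie.
    os.waitpid(pid, 0)
    return pid
```

On POSIX systems, spawn_child() returns a positive PID in the parent; reaping with waitpid is what prevents the zombie state discussed later in this documentation.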
In the Linux kernel, the ptrace system call allows a tracer to handle events like process creation and termination.
Since version 0.8 (Chutoro Nigiri), libdebug supports handling child processes. Read more about it in the dedicated Multiprocessing section.
","boost":4},{"location":"multithreading/multi-stuff/#threads","title":"Threads","text":"Threads of a running process in the POSIX Threads standard are children of the main process. They are created by the system calls clone and clone3. What distinguishes threads from processes is that threads share the same virtual address space.
libdebug offers a simple API to work with child threads. Read more about it in the dedicated Multithreading section.
","boost":4},{"location":"multithreading/multiprocessing/","title":"Debugging Multiprocess Applications","text":"Since version 0.8 Chutoro Nigiri , libdebug supports debugging multiprocess applications. This feature allows you to attach to multiple processes and debug them simultaneously. This document explains how to use this feature and provides examples to help you get started.
","boost":4},{"location":"multithreading/multiprocessing/#a-child-process-is-born","title":"A Child Process is Born","text":"By default, libdebug will monitor all new children processes created by the tracee process. Of course, it will not retrieve past forked processes that have been created before an attach.
A new process is a big deal. For this reason, libdebug will provide you with a brand new Debugger object for each new child process. This object will be available in the children attribute (a list) of the parent Debugger object.
Usage Example
from libdebug import debugger\n\nd = debugger(\"test\")\nd.run()\n\n[...]\n\nprint(f\"The process has spawned {len(d.children)} children\")\n\nfor child in d.children: # (1)!\n print(f\"Child PID: {child.pid}\")\n The children attribute is a regular list. Indexing, slicing, and iterating are all supported. When a child process is spawned, it inherits the properties of the parent debugger. This includes whether ASLR is enabled, fast memory reading, and other properties. However, from that moment on, the child debugger will act independently. As such, any property changes made to the parent debugger will not affect the child debugger, and vice versa.
In terms of registered Stopping Events, the new debugger will be a blank slate. This means the debugger will not inherit breakpoints, watchpoints, syscall handlers, or signal catchers.
","boost":4},{"location":"multithreading/multiprocessing/#focusing-on-the-main-process","title":"Focusing on the Main Process","text":"Some applications may spawn a large number of children processes, and you may only be interested in debugging the main process. In this case, you can disable the automatic monitoring of children processes by setting the follow_children parameter to False when creating the Debugger object.
Usage Example
d = debugger(\"test\", follow_children=False)\nd.run()\n In this example, libdebug will only monitor the main process and ignore any child processes spawned by the tracee. However, you can also decide to stop monitoring child processes at any time during debugging by setting the follow_children attribute to False in a certain Debugger object.
When creating a snapshot of a process from the corresponding Debugger object, the snapshot will not include child processes, but only child threads. Read more about snapshots in the Save States section.
","boost":4},{"location":"multithreading/multiprocessing/#pipe-redirection","title":"Pipe Redirection","text":"By default, libdebug will redirect the standard input, output, and error of the child processes to pipes. This is how you can interact with these file descriptors using I/O commands. If you keep this parameter enabled, you will be able to interact with the child processes's standard I/O using the same PipeManager object that is provided upon creation of the root Debugger object. This is consistent with limitations of forking in the POSIX standard, where the child process inherits the file descriptors of the parent process.
Read more about disabling pipe redirection in the dedicated section.
","boost":4},{"location":"multithreading/multithreading/","title":"Debugging Multithreaded Applications","text":"Debugging multi-threaded applications can be a daunting task, particularly in an interactive debugger that is designed to operate on one thread at a time. libdebug offers a few features that will help you debug multi-threaded applications more intuitively and efficiently.
","boost":4},{"location":"multithreading/multithreading/#child-threads","title":"Child Threads","text":"libdebug automatically registers new threads and exposes their state with the same API as the main Debugger object. While technically threads can be running or stopped independently, libdebug will enforce a coherent state. This means that if a live thread is stopped, all other live threads will be stopped as well and if a continuation command is issued, all threads will be resumed.
stateDiagram-v2\n state fork_state <<fork>>\n [*] --> fork_state: d.interrupt()\n fork_state --> MainThread: STOP\n fork_state --> Child1: STOP\n fork_state --> Child2: STOP\n\n state join_state <<join>>\n MainThread --> join_state\n Child1 --> join_state\n Child2 --> join_state\n\n state fork_state1 <<fork>>\n join_state --> fork_state1: d.cont()\n fork_state1 --> MainThread_2: CONTINUE\n fork_state1 --> Child11: CONTINUE\n fork_state1 --> Child22: CONTINUE\n\n state join_state2 <<join>>\n MainThread_2 --> join_state2\n Child11 --> join_state2\n Child22 --> join_state2\n\n state fork_state2 <<fork>>\n join_state2 --> fork_state2: Breakpoint on Child 2\n fork_state2 --> MainThread_3: STOP\n fork_state2 --> Child111: STOP\n fork_state2 --> Child222: STOP\n\n state join_state3 <<join>>\n MainThread_3 --> join_state3\n Child111 --> join_state3\n Child222 --> join_state3\n\n %% State definitions with labels\n state \"Main Thread\" as MainThread\n state \"Child 1\" as Child1\n state \"Child 2\" as Child2\n state \"Main Thread\" as MainThread_2\n state \"Child 1\" as Child11\n state \"Child 2\" as Child22\n state \"Main Thread\" as MainThread_3\n state \"Child 1\" as Child111\n state \"Child 2\" as Child222 All live threads are synchronized in their execution state.","boost":4},{"location":"multithreading/multithreading/#libdebug-api-for-multithreading","title":"libdebug API for Multithreading","text":"To access the threads of a process, you can use the threads attribute of the Debugger object. This attribute will return a list of ThreadContext objects, each representing a thread of the process.
If you're already familiar with the Debugger object, you'll find the ThreadContext straightforward to use. The Debugger has always acted as a facade for the main thread, so you can access registers, memory, and other state fields of any thread exactly as you would through the Debugger. The difference you will notice is that the ThreadContext object is missing a couple of fields that just don't make sense in the context of a single thread (e.g. symbols, which belong to the binary, and memory maps, which are shared by the whole process).
from libdebug import debugger\n\nd = debugger(\"./so_many_threads\")\nd.run()\n\n# Reach the point of interest\nd.breakpoint(\"loom\", file=\"binary\")\nd.cont()\nd.wait()\n\nfor thread in d.threads:\n print(f\"Thread {thread.tid} stopped at {hex(thread.regs.rip)}\")\n print(\"Function frame:\")\n\n # Retrieve frame boundaries\n frame_start = thread.regs.rbp\n frame_end = thread.regs.rsp\n\n # Print function frame\n for addr in range(frame_end, frame_start, 8):\n print(f\" {addr:#16x}: {thread.memory[addr:addr+8].hex()}\")\n\n[...]\n","boost":4},{"location":"multithreading/multithreading/#properties-of-the-threadcontext","title":"Properties of the ThreadContext","text":"Property Type Description regs Registers The thread's registers. debugger Debugger The debugging context this thread belongs to. memory AbstractMemoryView The memory view of the debugged process (mem is an alias). instruction_pointer int The thread's instruction pointer. process_id int The process ID (pid is an alias). thread_id int The thread ID (tid is an alias). running bool Whether the process is running. saved_ip int The return address of the current function. dead bool Whether the thread is dead. exit_code int The thread's exit code (if dead). exit_signal str The thread's exit signal (if dead). syscall_arg0 int The thread's syscall argument 0. syscall_arg1 int The thread's syscall argument 1. syscall_arg2 int The thread's syscall argument 2. syscall_arg3 int The thread's syscall argument 3. syscall_arg4 int The thread's syscall argument 4. syscall_arg5 int The thread's syscall argument 5. syscall_number int The thread's syscall number. syscall_return int The thread's syscall return value. signal str The signal to be forwarded to the thread. signal_number int The signal number to forward to the thread. 
zombie bool Whether the thread is in a zombie state.","boost":4},{"location":"multithreading/multithreading/#methods-of-the-threadcontext","title":"Methods of the ThreadContext","text":"Method Description Return Type set_as_dead() Set the thread as dead. None step() Executes a single instruction of the process (si is an alias). None step_until(position: int, max_steps: int = -1, file: str = \"hybrid\") Executes instructions of the process until the specified location is reached (su is an alias). None finish(heuristic: str = \"backtrace\") Continues execution until the current function returns or the process stops (fin is an alias). None next() Executes the next instruction of the process. If the instruction is a call, the debugger will continue until the called function returns (ni is an alias). None backtrace(as_symbols: bool = False) Returns the current backtrace of the thread (see Stack Frame Utils). list pprint_backtrace() Pretty prints the current backtrace of the thread (see Pretty Printing). None pprint_registers() Pretty prints the thread's registers (see Pretty Printing). None pprint_regs() Alias for the pprint_registers method (see Pretty Printing). None pprint_registers_all() Pretty prints all the thread's registers (see Pretty Printing). None pprint_regs_all() Alias for the pprint_registers_all method (see Pretty Printing). None Meaning of the debugger object
When accessing state fields of the Debugger object (e.g. registers, memory), the debugger will act as an alias for the main thread. For example, doing d.regs.rax will be equivalent to doing d.threads[0].regs.rax.
","boost":4},{"location":"multithreading/multithreading/#shared-and-unshared-state","title":"Shared and Unshared State","text":"Each thread has its own register set, stack, and instruction pointer. However, the virtual address space is shared among all threads. This means that threads can access the same memory and share the same code.
How to access TLS?
While the virtual address space is shared between threads, each thread has its own Thread Local Storage (TLS) area. As it stands, libdebug does not provide a direct interface to the TLS area.
Let's see a couple of things to keep in mind when debugging multi-threaded applications with libdebug.
","boost":4},{"location":"multithreading/multithreading/#software-breakpoints","title":"Software Breakpoints","text":"Software breakpoints are implemented through code patching in the process memory. This means that a breakpoint set in one thread will be replicated across all threads.
When using synchronous breakpoints, you will need to \"diagnose\" the stopping event to determine which thread triggered the breakpoint. You can do this by checking the return value of the hit_on() method of the Breakpoint object. Passing the ThreadContext as an argument will return True if the breakpoint was hit by that thread.
Diagnosing a Synchronous Breakpoint
thread = d.threads[2]\n\nfor addr, bp in d.breakpoints.items():\n if bp.hit_on(thread):\n print(f\"Thread {thread.tid} hit breakpoint {addr:#x}\")\n When using asynchronous breakpoints, the breakpoint will be more intuitive to handle, as the signature of the callback function includes the ThreadContext object that triggered the breakpoint.
Handling an Asynchronous Breakpoint
def on_breakpoint_hit(t, bp):\n print(f\"Thread {t.tid} hit breakpoint {bp.address:#x}\")\n\nd.breakpoint(0x10ab, callback=on_breakpoint_hit, file=\"binary\")\n","boost":4},{"location":"multithreading/multithreading/#hardware-breakpoints-and-watchpoints","title":"Hardware Breakpoints and Watchpoints","text":"While hardware breakpoints are thread-specific, libdebug mirrors them across all threads. This is done to avoid asymmetries with software breakpoints. Watchpoints are hardware breakpoints, so this applies to them as well.
For consistency, syscall handlers are also enabled across all threads. The same considerations for synchronous and asynchronous breakpoints apply here as well.
Concurrency in Syscall Handling
When debugging entering and exiting events in syscalls, be mindful of the scheduling. The kernel may schedule a different thread to handle the syscall exit event right after the enter event of another thread.
","boost":4},{"location":"multithreading/multithreading/#signal-catching","title":"Signal Catching","text":"Who will receive the signal?Signal Catching is also shared among threads. Apart from consistency, this is a necessity. In fact, the kernel does not guarantee that a signal sent to a process will be dispatched to a specific thread. By contrast, when sending arbitrary signals through the ThreadContext object, the signal will be sent to the requested thread.
","boost":4},{"location":"multithreading/multithreading/#snapshot-behavior","title":"Snapshot Behavior","text":"When creating a snapshot of a process from the corresponding Debugger object, the snapshot will also save the state of all threads. You can also create a snapshot of a single thread by calling the create_snapshot() method from the ThreadContext object instead. Read more about snapshots in the Save States section.
When a thread or process terminates, it enters a zombie state. This is a temporary condition where the process is effectively dead but awaiting reaping by the parent or debugger, which involves reading its status. Reaping traced zombie threads can become complicated due to certain edge cases.
While libdebug automatically handles the reaping of zombie threads, it provides a property named zombie within the ThreadContext object, indicating whether the thread is in a zombie state. The same property is also available in the Debugger object, indicating whether the main thread is in a zombie state.
Example Code
if d.threads[1].zombie:\n print(\"The thread is a zombie\")\n sequenceDiagram\n participant Parent as Parent Process\n participant Child as Child Thread\n participant Kernel as Linux Kernel\n\n Note over Parent,Kernel: Normal Execution Phase\n Parent->>Child: clone()\n activate Child\n Child->>Kernel: Task added to the Process Table\n Kernel-->>Child: Thread ID\n\n Note over Parent,Kernel: Zombie Creation Phase\n Child->>Kernel: exit(statusCode)\n deactivate Child\n Note right of Kernel: Parent will be<br/>notified of exit\n Kernel->>Parent: SIGCHLD\n Note right of Parent: Parent Busy<br/>Cannot Process Signal\n\n Note over Parent,Kernel: Zombie State\n Note right of Child: Thread becomes<br/>zombie (defunct)<br/>- Maintains TID<br/>- Keeps exit status<br/>- Consumes minimal resources\n\n Note over Parent,Kernel: Reaping Phase\n Parent->>Kernel: waitpid()\n Kernel-->>Parent: Return Exit Status\n Kernel->>Kernel: Remove Zombie Entry<br/>from Process Table\n Note right of Kernel: Resources Released","boost":4},{"location":"quality_of_life/anti_debugging/","title":"Evasion of Anti-Debugging","text":"","boost":4},{"location":"quality_of_life/anti_debugging/#automatic-evasion-of-anti-debugging-techniques","title":"Automatic Evasion of Anti-Debugging Techniques","text":"A common anti-debugging technique for Linux ELF binaries is to invoke the ptrace syscall with the PTRACE_TRACEME argument. The syscall will fail if the binary is currently being traced by a debugger, as the kernel forbids a process from being traced by multiple debuggers.
Bypassing this technique involves intercepting such syscalls and altering the return value to make the binary believe that it is not being traced. While this can absolutely be performed manually, libdebug comes with a pre-made implementation that can save you precious time.
To enable this feature, set the escape_antidebug property to True when creating the debugger object. The debugger will take care of the rest.
Example
> C source code
#include <stdio.h>\n#include <stdlib.h>\n#include <sys/ptrace.h>\n\nint main()\n{\n\n if (ptrace(PTRACE_TRACEME, 0, NULL, 0) == -1) // (1)!\n {\n puts(\"No cheating! Debugger detected.\"); // (2)!\n exit(1);\n }\n\n puts(\"Congrats! Here's your flag:\"); // (3)!\n puts(\"flag{y0u_sn3aky_guy_y0u_tr1ck3d_m3}\");\n\n return 0;\n}\n PTRACE_TRACEME to detect if we are being debugged> libdebug script
from libdebug import debugger\n\nd = debugger(\"evasive_binary\",\n escape_antidebug=True)\n\npipe = d.run()\n\nd.cont()\nout = pipe.recvline(numlines=2)\nd.wait()\n\nprint(out.decode())\n Execution of the script will print the flag, even if the binary is being debugged.
","boost":4},{"location":"quality_of_life/memory_maps/","title":"Memory Maps","text":"Virtual memory is a fundamental concept in operating systems. It allows the operating system to provide each process with its own address space, which is isolated from other processes. This isolation is crucial for security and stability reasons. The memory of a process is divided into regions called memory maps. Each memory map has a starting address, an ending address, and a set of permissions (read, write, execute).
In libdebug, you can access the memory maps of a process using the maps attribute of the Debugger object.
The maps attribute returns a list of MemoryMap objects, which contain the following attributes:
Attribute Type Description start int The start address of the memory map. There is also an equivalent alias called base. end int The end address of the memory map. permissions str The permissions of the memory map. size int The size of the memory map. offset int The offset of the memory map relative to the backing file. backing_file str The backing file of the memory map, or the symbolic name of the memory map.","boost":4},{"location":"quality_of_life/memory_maps/#filtering-memory-maps","title":"Filtering Memory Maps","text":"You can filter memory maps based on their attributes using the filter() method of the maps attribute. The filter() method accepts a value that can be either a memory address (int) or a symbolic name (str) and returns a list of MemoryMap objects that match the criteria.
Function Signature
d.maps.filter(value: int | str) -> MemoryMapList[MemoryMap]:\n The behavior of the memory map filtering depends on the type of the value parameter: if value is an int, the filter returns the maps whose address range contains it; if value is a str, it returns the maps whose backing file corresponds to the given name.
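A hedged sketch of working with the map attributes and the filter() method follows. It assumes `d` is a Debugger with a running process; the helper names are illustrative, not part of libdebug:

```python
def format_map(m):
    """Render one MemoryMap-like object as a single line of text."""
    return (f"{m.start:#x}-{m.end:#x} {m.permissions} "
            f"size={m.size:#x} {m.backing_file}")

def show_maps(d, where=None):
    """Print all maps, or only those matching an address (int) or name (str)."""
    maps = d.maps if where is None else d.maps.filter(where)
    for m in maps:
        print(format_map(m))
```

For example, show_maps(d, "[stack]") would print only the stack mapping, while show_maps(d, d.regs.rip) would print the map containing the current instruction pointer.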
libdebug offers utilities to visualize the process's state in a human-readable format and with color highlighting. This can be especially useful when debugging complex binaries or when you need to quickly understand the behavior of a program.
","boost":4},{"location":"quality_of_life/pretty_printing/#registers-pretty-printing","title":"Registers Pretty Printing","text":"There are two functions available to print the registers of a thread: pprint_registers() and print_registers_all(). The former will print the current values of the most commonly-interesting registers, while the latter will print all available registers.
Aliases
If you don't like long function names, you can use aliases for the two register pretty print functions. The shorter aliases are pprint_regs() and pprint_regs_all().
When debugging a binary, it is often much faster to guess what the intended functionality is by looking at the syscalls that are being invoked. libdebug offers a function that will intercept any syscall and print its arguments and return value. This can be done by setting the property pprint_syscalls = True in the Debugger object or ThreadContext object and resuming the process.
Syscall Trace PPrint Syntax
d.pprint_syscalls = True\nd.cont()\n The output will be printed to the console in color according to the following coding:
Format Description blue Syscall name red Syscall was intercepted and handled by a callback (either a basic handler or a hijack) yellow Value given to a syscall argument in hexadecimal strikethrough Syscall was hijacked or a value was changed; the new syscall or value follows the stricken text. Handled syscalls with a callback associated with them will be listed as such. Additionally, syscalls hijacked through the libdebug API will be highlighted as struck through, allowing you to monitor both the original behavior and your own changes to the flow. The id of the thread that made the syscall will be printed at the beginning of the line in white bold.
","boost":4},{"location":"quality_of_life/pretty_printing/#memory-maps-pretty-printing","title":"Memory Maps Pretty Printing","text":"To pretty print the memory maps of a process, you can simply use the pprint_maps() function. This will print the memory maps of the process in a human-readable format, with color highlighting to distinguish between different memory regions.
To pretty print the stack trace (backtrace) of a process, you can use the pprint_backtrace() function. This will print the stack trace of the process in a human-readable format.
The pprint_memory() function will print the contents of the process memory within a certain range of addresses.
Function signature
d.pprint_memory(\n start: int,\n end: int,\n file: str = \"hybrid\",\n override_word_size: int = None,\n integer_mode: bool = False,\n) -> None:\n Parameter Data Type Description start int The start address of the memory range to print. end int The end address of the memory range to print. file str (optional) The file to use for the memory content. Defaults to hybrid mode (see memory access). override_word_size int (optional) The word size to use to align memory contents. By default, it uses the ISA register size. integer_mode bool (optional) Whether to print the memory content in integer mode. Defaults to False Start after End
For your convenience, if the start address is greater than the end address, the function will swap the values.
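A hedged usage sketch follows (the helper name is illustrative; it assumes `d` is a Debugger stopped on an amd64 target, so the rsp register is available):

```python
def dump_stack(d, nbytes=128):
    """Pretty print `nbytes` of memory starting at the stack pointer."""
    sp = d.regs.rsp
    d.pprint_memory(sp, sp + nbytes, file="hybrid")
```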
Here is a visual example of the memory content pretty printing (with and without integer mode):
Integer mode disabled / Integer mode enabled ","boost":4},{"location":"quality_of_life/quality_of_life/","title":"Quality of Life Features","text":"For your convenience, libdebug offers a few functions that will speed up your debugging process.
","boost":4},{"location":"quality_of_life/quality_of_life/#pretty-printing","title":"Pretty Printing","text":"Visualizing the state of the process you are debugging can be a daunting task. libdebug offers utilities to print registers, memory maps, syscalls, and more in a human-readable format and with color highlighting.
","boost":4},{"location":"quality_of_life/quality_of_life/#symbol-resolution","title":"Symbol Resolution","text":"libdebug can resolve symbols in the binary and shared libraries. With big binaries, this can be a computationally intensive, especially if your script needs to be run multiple types. You can set symbol resolution levels and specify where to look for symbols according to your needs.
","boost":4},{"location":"quality_of_life/quality_of_life/#memory-maps","title":"Memory Maps","text":"libdebug offers utilities to retrieve the memory maps of a process. This can be useful to understand the memory layout of the process you are debugging.
","boost":4},{"location":"quality_of_life/quality_of_life/#stack-frame-utils","title":"Stack Frame Utils","text":"libdebug offers utilities to resolve the return addresses of a process.
","boost":4},{"location":"quality_of_life/quality_of_life/#evasion-of-anti-debugging","title":"Evasion of Anti-Debugging","text":"libdebug offers a few functions that will help you evade simple anti-debugging techniques. These functions can be used to bypass checks for the presence of a debugger.
","boost":4},{"location":"quality_of_life/stack_frame_utils/","title":"Stack Frame Utils","text":"Function calls in a binary executable are made according to a system calling convention. One constant in these conventions is the use of a stack frame to store the return addresses to resume at the end of the function.
Different architectures have slightly different ways to retrieve the return address (for example, in AArch64, the latest return address is stored in x30, the Link Register). To abstract these differences, libdebug provides common utilities to resolve the stack trace (backtrace) of the running process (or thread).
libdebug's backtrace is structured like a LIFO stack, with the top-most value being the current instruction pointer. Subsequent values are the return addresses of the functions that were called to reach the current instruction pointer.
Backtrace usage example
from libdebug import debugger\n\nd = debugger(\"test_backtrace\")\nd.run()\n\n# A few calls later...\n[...]\n\ncurrent_ip = d.backtrace()[0]\nreturn_address = d.backtrace()[1]\nother_return_addresses = d.backtrace()[2:]\n Additionally, the field saved_ip of the Debugger or ThreadContext objects will contain the return address of the current function.
As described in the memory access section, many functions in libdebug accept symbols as an alternative to actual addresses or offsets.
You can list all resolved symbols in the binary and shared libraries using the symbols attribute of the Debugger object. This attribute returns a SymbolList object.
This object grants the user hybrid access to the symbols: as a dict or as a list. For example, the following lines of code all have a valid syntax:
d.symbols['printf'] #(1)!\nd.symbols[0] #(2)!\nd.symbols['printf'][0] #(3)!\n Please note that the dict-like access returns exact matches with the symbol name. If you want to filter for symbols that contain a specific string, read the dedicated section.
C++ Demangling
Reverse-engineering of C++ binaries can be a struggle. To help out, libdebug automatically demangles C++ symbols.
","boost":4},{"location":"quality_of_life/symbols/#symbol-resolution-levels","title":"Symbol Resolution Levels","text":"With large binaries and libraries, parsing symbols can become an expensive operation. Because of this, libdebug offers the possibility of choosing among 5 levels of symbol resolution. To set the symbol resolution level, you can use the sym_lvl property of the libcontext module. The default value is level 5.
At the highest level, symbols are additionally fetched via debuginfod; the downloaded debug file is cached in the default folder for debuginfod. Upon searching for symbols, libdebug will proceed from the lowest level up to the set maximum.
Example of setting the symbol resolution level
from libdebug import libcontext\n\nlibcontext.sym_lvl = 3\nd.breakpoint('main')\n If you want to change the symbol resolution level temporarily, you can use a with statement along with the tmp method of the libcontext module.
Example of temporary resolution level change
from libdebug import libcontext\n\nwith libcontext.tmp(sym_lvl = 5):\n d.breakpoint('main')\n","boost":4},{"location":"quality_of_life/symbols/#symbol-filtering","title":"Symbol Filtering","text":"The symbols attribute of the Debugger object allows you to filter symbols in the binary and shared libraries.
Function Signature
d.symbols.filter(value: int | str) -> SymbolList[Symbol]\n Given a symbol name or address, this function returns a SymbolList. The list will contain all symbols that match the given value.
Symbol objects contain the following attributes:
Attribute Type Description start int The start offset of the symbol. end int The end offset of the symbol. name str The name of the symbol. backing_file str The file where the symbol is defined (e.g., binary, libc, ld). Slow Symbol Resolution
Please keep in mind that symbol resolution can be an expensive operation on large binaries and shared libraries. If you are experiencing performance issues, you can set the symbol resolution level to a lower value.
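An illustrative sketch of filtering and reading the Symbol attributes listed above (the helper name is an assumption, not part of libdebug's API):

```python
def describe_symbols(d, needle):
    """Return one formatted line per symbol matching `needle`."""
    return [f"{s.name} @ {s.start:#x} ({s.backing_file})"
            for s in d.symbols.filter(needle)]
```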
","boost":4},{"location":"save_states/save_states/","title":"Save States","text":"Save states are a powerful feature in libdebug to save the current state of the process.
There is no single way to define a save state. The state of a process in an operating system is not just its memory and register contents. The process interacts with shared external resources, such as files, sockets, and other processes. These resources cannot be restored in a reliable way. Still, there are many interesting use cases for saving and restoring all that can be saved.
So...what is a save state in libdebug? Although we plan on supporting multiple types of save states for different use cases in the near future, libdebug currently supports only snapshots.
","boost":4},{"location":"save_states/snapshot_diffs/","title":"Snapshot Diffs","text":"Snapshot diffs are objects that represent what changed between two snapshots. They are created through the diff() method of a snapshot.
The level of a diff is resolved as the lowest level of the two snapshots being compared. For example, if a diff is created between a full snapshot and a base snapshot, their diff will be of base level. For more information on the different levels of snapshots, see the Snapshots page.
ASLR Mess
If Address Space Layout Randomization (ASLR) is enabled, the memory addresses in the diffs may appear inconsistent or messy. libdebug will remind you of this when you diff snapshots with ASLR enabled. See here for more information.
","boost":4},{"location":"save_states/snapshot_diffs/#api","title":"API","text":"Just like snapshots themselves, diffs try to mimic the API of the Debugger and ThreadContext objects. The main difference is that returned objects represent a change in state, rather than the state itself.
","boost":4},{"location":"save_states/snapshot_diffs/#register-diffs","title":"Register Diffs","text":"The regs attribute of a diff object (aliased as registers) is a RegisterDiffAccessor object that allows you to access the register values of the snapshot. The accessor will return a RegisterDiff object that represents the difference between the two snapshots.
You can access each diff with any of the architecture-specific register names. For a full list, refer to the Register Access page.
Example usage
print(ts_diff.regs.rip)\n Output: RegisterDiff(old_value=0x56148d577130, new_value=0x56148d577148, has_changed=True)\n Each register diff is an object with the following attributes:
Attribute Data Type Descriptionold_value int | float The value of the register in the first snapshot. new_value int | float The value of the register in the second snapshot. has_changed bool Whether the register value has changed.","boost":4},{"location":"save_states/snapshot_diffs/#memory-map-diffs","title":"Memory Map Diffs","text":"The maps attribute of a diff object is a MemoryMapDiffList object that contains the memory maps of the process in each of the snapshots.
Here is what a MemoryMapDiff object looks like:
Example usage
print(ts_diff.maps[-2])\n Output (indented for readability): MemoryMapDiff(\n old_map_state=MemoryMap(\n start=0x7fff145ea000,\n end=0x7fff1460c000,\n permissions=rw-p,\n size=0x22000,\n offset=0x0,\n backing_file=[stack]\n ) [snapshot with content],\n new_map_state=MemoryMap(\n start=0x7fff145ea000,\n end=0x7fff1460c000,\n permissions=rw-p,\n size=0x22000,\n offset=0x0,\n backing_file=[stack]\n ) [snapshot with content],\n has_changed=True,\n _cached_diffs=None\n)\n The map diff contains the following attributes:
Attribute Data Type Descriptionold_map_state MemoryMap The memory map in the first snapshot. new_map_state MemoryMap The memory map in the second snapshot. has_changed bool Whether the memory map has changed. Memory Map Diff Levels
If the diff is of base level, the has_changed attribute will only consider superficial changes in the memory map (e.g., permissions, end address). Under the writable and full levels, the diff will also consider the contents of the memory map.
If the diff is of full or writable level, the MemoryMapDiff object exposes a useful utility to track blocks of differing memory contents in a certain memory map: the content_diff attribute.
Example usage
stack_page_diff = ts_diff.maps.filter(\"stack\")[0]\n\nfor current_slice in stack_page_diff.content_diff:\n print(f\"Memory diff slice: {hex(current_slice.start)}:{hex(current_slice.stop)}\")\n Output: Memory diff slice: 0x20260:0x20266\nMemory diff slice: 0x20268:0x2026e\n The attribute will return a list of slice objects that represent the blocks of differing memory contents in the memory map. Each slice will contain the start and end addresses of the differing memory block relative to the memory map.
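Conceptually, such slices are maximal runs of differing bytes between the two versions of a map's contents. A minimal sketch of how they could be computed between two same-sized byte buffers (hypothetical, not libdebug's actual implementation):

```python
def content_diff_slices(old: bytes, new: bytes) -> list[slice]:
    """Return slices covering maximal runs of differing bytes."""
    slices = []
    start = None
    for i, (a, b) in enumerate(zip(old, new)):
        if a != b and start is None:
            start = i               # a differing run begins
        elif a == b and start is not None:
            slices.append(slice(start, i))  # the run ends here
            start = None
    if start is not None:
        slices.append(slice(start, len(old)))  # run extends to the end
    return slices
```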
","boost":4},{"location":"save_states/snapshot_diffs/#attributes","title":"Attributes","text":"Attribute Data Type Level Description Aliases Commonsnapshot1 Snapshot All The earliest snapshot being compared (recency is determined by id ordering). snapshot2 Snapshot All The latest snapshot being compared (recency is determined by id ordering). level str All The diff level. maps MemoryMapDiffList All The memory maps of the process. Each map will also have the contents of the memory map under the appropriate snapshot level. Thread Snapshot Diff regs RegisterDiffAccessor All The register values of the thread. registers Process Snapshot Diff born_threads list[LightweightThreadSnapshot] All Snapshots of all threads of the process. dead_threads list[LightweightThreadSnapshot] All Snapshots of all threads of the process. threads list[LightweightThreadSnapshotDiff] All Snapshots of all threads of the process. regs RegsterDiffAccessor All The register values of the main thread of the process. registers","boost":4},{"location":"save_states/snapshot_diffs/#pretty-printing","title":"Pretty Printing","text":"Pretty Printing is a feature of some libdebug objects that allows you to print the contents of a snapshot in a colorful and eye-catching format. This is useful when you want to inspect the state of the process at a glance.
Diff objects have the following pretty printing functions:
Function Descriptionpprint_registers() Prints changed general-purpose register values pprint_registers_all() Prints all changed register values (including special and vector registers) pprint_maps() Prints memory maps which have changed between snapshots (highlights if only the content or the end address have changed). pprint_memory() Prints the memory content diffs of the snapshot. See next section for more information pprint_backtrace() Prints the diff of the backtrace between the two snapshots. Here are some visual examples of the pretty printing functions:
","boost":4},{"location":"save_states/snapshot_diffs/#register-diff-pretty-printing","title":"Register Diff Pretty Printing","text":"The pprint_registers() function of a diff object will print the changed general-purpose register values.
Here is a visual example of the register diff pretty printing:
","boost":4},{"location":"save_states/snapshot_diffs/#memory-map-diff-pretty-printing","title":"Memory Map Diff Pretty Printing","text":"The pprint_maps() function of a diff object will print the memory maps which have changed between snapshots. It also hi
Here is a visual example of the memory map diff pretty printing:
","boost":4},{"location":"save_states/snapshot_diffs/#memory-content-diff-pretty-printing","title":"Memory Content Diff Pretty Printing","text":"The pprint_memory() function of a diff object will print the content diffs within a certain range of memory addresses.
Function signature
ts_diff.pprint_memory(\n start: int,\n end: int,\n file: str = \"hybrid\",\n override_word_size: int = None,\n integer_mode: bool = False,\n) -> None:\n Parameter Data Type Description start int The start address of the memory range to print. end int The end address of the memory range to print. file str (optional) The file to use for the memory content. Defaults to hybrid mode (see memory access). override_word_size int (optional) The word size to use to align memory contents. By default, it uses the ISA register size. integer_mode bool (optional) Whether to print the memory content in integer mode. Defaults to False Start after End
For your convenience, if the start address is greater than the end address, the function will swap the values.
Here is a visual example of the memory content diff pretty printing (with and without integer mode):
Integer mode disabledInteger mode enabled ","boost":4},{"location":"save_states/snapshot_diffs/#stack-trace-diff-pretty-printing","title":"Stack Trace Diff Pretty Printing","text":"To pretty print the stack trace diff (backtrace) of a process, you can use the pprint_backtrace() function. Return addresses are printed from the most to the least recent. They are placed in three columns. The center one is the common part of the backtrace, while the left and right columns are the differing parts. The following image shows an example of a backtrace diff:
Snapshots are a static type of save state in libdebug. They allow you to save the current state of the process in terms of registers, memory, and other process properties. Snapshots can be saved to disk as a file and loaded for future use. Finally, snapshots can be diffed to compare the differences between the state of the process at two different moments or executions.
Snapshots are static
Snapshots are static in the sense that they capture the state of the process at a single moment in time. They can be loaded and inspected at any time and across different architectures. They do not, however, allow you to restore their state to the process.
There are three available levels of snapshots in libdebug, which differ in the amount of information they store:
Level Registers Memory Pages Memory Contentsbase writable writable pages only full Since memory content snapshots can be large, the default level is base.
You can create snapshots of single threads or the entire process.
","boost":4},{"location":"save_states/snapshots/#api","title":"API","text":"Register Access
You can access a snapshot's registers using the regs attribute, just like you would when debugging the process.
API Reference
Memory Access
When the snapshot level is appropriate, you can access the memory of the process using the memory attribute.
API Reference
Memory Maps
Memory maps are always available. When the snapshot level is appropriate, you can access the contents as a bytes-like object.
API Reference
Stack Trace
When the snapshot level is appropriate, you can access the backtrace of the process or thread.
API Reference
The function used to create a snapshot is create_snapshot(). It behaves differently depending on the object it is called from.
The following is the signature of the function:
Function Signature
d.create_snapshot(level: str = \"base\", name: str = None) -> ProcessSnapshot\n or t.create_snapshot(level: str = \"base\", name: str = None) -> ThreadSnapshot\n Where d is a Debugger object and t is a ThreadContext object. The following is an example usage of the function in both cases:
d = debugger(\"program\")\n\nmy_thread = d.threads[1]\n\n# Thread Snapshot\nts = my_thread.create_snapshot(level=\"full\", name=\"cool snapshot\") #(1)!\n\n# Process Snapshot\nps = d.create_snapshot(level=\"writable\", name=\"very cool snapshot\") #(2)!\n my_thread and name it \"cool snapshot\".Naming Snapshots
When creating a snapshot, you can optionally specify a name for it. The name will be useful when comparing snapshots in diffs or when saving them to disk.
","boost":4},{"location":"save_states/snapshots/#saving-and-loading-snapshots","title":"Saving and Loading Snapshots","text":"You can save a snapshot to disk using the save() method of the Snapshot object. The method will create a serializable version of the snapshot and export a json file to the specified path.
Example usage
ts = d.threads[1].create_snapshot(level=\"full\")\nts.save(\"path/to/save/snapshot.json\")\n You can load a snapshot from disk using the load_snapshot() method of the Debugger object. The method will read the json file from the specified path and create a Snapshot object from it.
Example usage
ts = d.load_snapshot(\"path/to/load/snapshot.json\")\n The snapshot type will be inferred from the json file, so you can easily load both thread and process snapshots from the same method.
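The save/load pattern described above can be sketched with plain JSON. This is a hypothetical illustration of the idea (the real libdebug serialization format is richer): the snapshot kind is stored in the file so that a single load entry point can rebuild the right object.

```python
import json

def save_snapshot(snap: dict, path: str) -> None:
    # Export a serializable version of the snapshot as a JSON file.
    with open(path, "w") as f:
        json.dump(snap, f)

def load_snapshot(path: str) -> dict:
    # Read the JSON file back and infer the snapshot kind from its data.
    with open(path) as f:
        snap = json.load(f)
    if snap.get("type") not in ("thread", "process"):
        raise ValueError("unknown snapshot type")
    return snap
```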
","boost":4},{"location":"save_states/snapshots/#resolving-diffs","title":"Resolving Diffs","text":"Thanks to their static nature, snapshots can be easily compared to find differences in saved properties.
You can diff a snapshot against another using the diff() method. The method will return a Diff object that represents the differences between the two snapshots. The diff will be of the lowest level of the two snapshots being compared.
Example usage
ts1 = d.threads[1].create_snapshot(level=\"full\")\n\n[...] # (1)!\n\nts2 = d.threads[1].create_snapshot(level=\"full\")\n\nts_diff = ts1.diff(ts2) # (2)!\n Diffs have a rich and detailed API that allows you to inspect the differences in registers, memory, and other properties. Read more in the dedicated section.
","boost":4},{"location":"save_states/snapshots/#pretty-printing","title":"Pretty Printing","text":"Pretty Printing is a feature of some libdebug objects that allows you to print the contents of a snapshot in a colorful and eye-catching format. This is useful when you want to inspect the state of the process at a glance.
Pretty printing utilities of snapshots are \"mirrors\" of pretty printing functions available for the Debugger and ThreadContext. Here is a list of available pretty printing functions and their equivalent for the running process:
Function Description Referencepprint_registers() Prints the general-purpose registers of the snapshot. API Reference pprint_registers_all() Prints all registers of the snapshot. API Reference pprint_maps() Prints the memory of the snapshot. API Reference pprint_backtrace() Prints the backtrace of the snapshot. API Reference","boost":4},{"location":"save_states/snapshots/#attributes","title":"Attributes","text":"Attribute Data Type Level Description Aliases Common name str (optional) All The name of the snapshot. arch str All The ISA under which the snapshot process was running. snapshot_id int All Progressive id counted from 0. Process and Thread snapshots have separate counters. level str All The snapshot level. maps MemoryMapSnapshotList All The memory maps of the process. Each map will also have the contents of the memory map under the appropriate snapshot level. memory SnapshotMemoryView writable / full Interface to the memory of the process. mem aslr_enabled bool All Whether ASLR was enabled at the time of the snapshot. Thread Snapshot thread_id int All The ID of the thread the snapshot was taken from. tid regs SnapshotRegisters All The register values of the thread. registers Process Snapshot process_id int All The ID of the process the snapshot was taken from. pid threads list[LightweightThreadSnapshot] All Snapshots of all threads of the process. regs SnapshotRegisters All The register values of the main thread of the process. registers","boost":4},{"location":"stopping_events/breakpoints/","title":"Breakpoints","text":"Breakpoints are the killer feature of any debugger, the fundamental stopping event. They allow you to stop the execution of your code at a specific point and inspect the state of your program to find bugs or understand its design.
Multithreading and Breakpoints
libdebug breakpoints are shared across all threads. This means that any thread can hit the breakpoint and cause the process to stop. You can use the hit_on() method of a breakpoint object to determine which thread hit the breakpoint (provided that the stop was indeed caused by the breakpoint).
A breakpoint can be inserted at any of two levels: software or hardware.
","boost":4},{"location":"stopping_events/breakpoints/#software-breakpoints","title":"Software Breakpoints","text":"Software breakpoints in the Linux kernel are implemented by patching the code in memory at runtime. The instruction at the chosen address is replaced with an interrupt instruction that is conventionally used for debugging. For example, in the i386 and AMD64 instruction sets, int3 (0xCC) is reserved for this purpose.
When the int3 instruction is executed, the CPU raises a SIGTRAP signal, which is caught by the debugger. The debugger then stops the process and restores the original instruction to its rightful place.
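The patching mechanism can be sketched in a few lines, simulating process memory with a bytearray. This is a conceptual illustration of the technique, not libdebug's internals:

```python
INT3 = 0xCC  # the x86 int3 debug trap opcode

def set_software_breakpoint(memory: bytearray, address: int) -> int:
    original = memory[address]
    memory[address] = INT3   # patch in the trap instruction
    return original          # saved byte, so the instruction can be restored

def clear_software_breakpoint(memory: bytearray, address: int, original: int) -> None:
    memory[address] = original  # restore the original instruction byte
```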
Pros and Cons of Software Breakpoints
Software breakpoints are unlimited, but they can break when the program uses self-modifying code. This is because the patched code could be overwritten by the program. Additionally, software breakpoints are slower than their hardware counterparts on most modern CPUs.
","boost":4},{"location":"stopping_events/breakpoints/#hardware-breakpoints","title":"Hardware Breakpoints","text":"Hardware breakpoints are a more reliable way to set breakpoints. They are made possible by the existence of special registers in the CPU that can be used to monitor memory accesses. Differently from software breakpoints, their hardware counterparts allows the debugger to monitor read and write accesses on top of code execution. This kind of hardware breakpoint is also called a watchpoint. More information on watchpoints can be found in the dedicated documentation.
Pros and Cons of Hardware Breakpoints
Hardware breakpoints are not affected by self-modifying code. They are also usually faster and more flexible. However, hardware breakpoints are limited in number and are hardware-dependent, so their support may vary across different systems.
Hardware Breakpoint Alignment in AArch64
Hardware breakpoints have to be aligned to 4 bytes (which is the size of an ARM instruction).
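A hypothetical validation helper for the constraint above (illustrative only, not part of the libdebug API):

```python
def is_valid_aarch64_hw_bp(address: int) -> bool:
    # AArch64 instructions are 4 bytes wide, so a hardware breakpoint
    # address must be 4-byte aligned.
    return address % 4 == 0
```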
","boost":4},{"location":"stopping_events/breakpoints/#libdebug-api-for-breakpoints","title":"libdebug API for Breakpoints","text":"The breakpoint() function in the Debugger object sets a breakpoint at a specific address.
Function Signature
d.breakpoint(address, hardware=False, condition='x', length=1, callback=None, file='hybrid')\n Parameters:
Argument Type Descriptionaddress int | str The address or symbol where the breakpoint will be set. hardware bool Set to True to set a hardware breakpoint. condition str The type of access in case of a hardware breakpoint. length int The size of the word being watched in case of a hardware breakpoint. callback Callable | bool (see callback signature here) Used to create asynchronous breakpoints (read more on the debugging flow of stopping events). file str The backing file for relative addressing. Refer to the memory access section for more information on addressing modes. Returns:
Return Type DescriptionBreakpoint Breakpoint The breakpoint object created. Limited Hardware Breakpoints
Hardware breakpoints are limited in number. If you exceed the number of hardware breakpoints available on your system, a RuntimeError will be raised.
Usage Example
from libdebug import debugger\n\nd = debugger(\"./test_program\")\n\nd.run()\n\nbp = d.breakpoint(0x10ab, file=\"binary\") # (1)!\nbp1 = d.breakpoint(\"main\", file=\"binary\") # (3)!\nbp2 = d.breakpoint(\"printf\", file=\"libc\") # (4)!\n\nd.cont()\n\nprint(f\"RAX: {d.regs.rax:#x} at the breakpoint\") # (2)!\nif bp.hit_on(d):\n print(\"Breakpoint at 0x10ab was hit\")\nelif bp1.hit_on(d):\n print(\"Breakpoint at main was hit\")\nelif bp2.hit_on(d):\n print(\"Breakpoint at printf was hit\")\n main symbolprintf symbol in the libc libraryIf you wish to create an asynchronous breakpoint, you will have to provide a callback function. If you want to leave the callback empty, you can set callback to True.
Callback Signature
def callback(t: ThreadContext, bp: Breakpoint):\n Parameters:
Argument Type Descriptiont ThreadContext The thread that hit the breakpoint. bp Breakpoint The breakpoint object that triggered the callback. Example usage of asynchronous breakpoints
def on_breakpoint_hit(t, bp):\n print(f\"RAX: {t.regs.rax:#x}\")\n\n if bp.hit_count == 100:\n print(\"Hit count reached 100\")\n bp.disable()\n\nd.breakpoint(0x11f0, callback=on_breakpoint_hit, file=\"binary\")\n","boost":4},{"location":"stopping_events/breakpoints/#the-breakpoints-dict","title":"The Breakpoints Dict","text":"The breakpoints attribute of the Debugger object is a dictionary that contains all the breakpoints set by the user. The keys are the addresses of the breakpoints, and the values are the corresponding Breakpoint objects. This is useful to retrieve breakpoints in \\(O(1)\\) time complexity.
Usage Example - Massive Breakpoint Insertion
from libdebug import debugger\n\ndef hook_callback(t, bp):\n [...]\n\nd = debugger(\"example_binary\")\nd.run()\n\n# Massive breakpoint insertion\nwith open(\"example_binary\", \"rb\") as f:\n binary_data = f.read()\n\ncursor = 0\nwhile cursor < len(binary_data):\n if binary_data[cursor:cursor+2] == b\"\\xD9\\xC9\":\n d.breakpoint(cursor, callback=hook_callback, file=\"binary\") # (1)!\n cursor += 1\n\nd.cont()\n\n[...]\n\nip = d.regs.rip\n\nif d.memory[0x10, 4, \"binary\"] == b\"\\x00\\xff\\x00\\xab\":\n d.breakpoints[ip].disable() # (2)!\n[...]\n FXCH instruction in the binary (at least ones found through static analysis)Before diving into each libdebug stopping event, it's crucial to understand the debugging flow that these events introduce, based on the mode selected by the user.
The flow of all stopping events is similar and adheres to a mostly uniform API structure. Upon placing a stopping event, the user is allowed to specify a callback function for the stopping event. If a callback is passed, the event will trigger asynchronously. Otherwise, if the callback is not passed, the event will be synchronous. The following flowchart shows the difference between the two flows.
Flowchart of different handling modes for stopping eventsWhen a synchronous event is hit, the process will stop, awaiting further commands. When an asynchronous event is hit, libdebug temporarily stops the process and invokes the user callback. Process execution is automatically resumed right after.
Tip: Use cases of asynchronous stopping events
The asynchronous mode for stopping events is particularly useful for events being repeated as a result of a loop in the executed code.
When attempting side-channel reverse engineering, this mode can save a lot of your time.
","boost":4},{"location":"stopping_events/debugging_flow/#types-of-stopping-events","title":"Types of Stopping Events","text":"libdebug supports the following types of stopping events:
Event Type Description Notes Breakpoint Stops the process when a certain address is executed Can be a software or a hardware breakpoint Watchpoint Stops the process when a memory area is read or written Alias for a hardware breakpoint Syscall Stops the process when a syscall is made Two events are supported: syscall start and end Signal Stops the process when a signal is receivedMultiple callbacks or hijacks
Please note that there can be at most one user-defined callback or hijack for each instance of a stopping event (the same syscall, signal or breakpoint address). If a new stopping event is defined for the same event, the new one will replace the old one, and a warning will be printed.
Internally, hijacks are considered callbacks, so you cannot have a callback and hijack registered for the same event.
","boost":4},{"location":"stopping_events/debugging_flow/#common-apis-of-stopping-events","title":"Common APIs of Stopping Events","text":"All libdebug stopping events share some common attributes that can be employed in debugging scripts.
","boost":4},{"location":"stopping_events/debugging_flow/#enabledisable","title":"Enable/Disable","text":"All stopping events can be enabled or disabled at any time. You can read the enabled attribute to check the current state of the event. To enable or disable the event, you can call the enable() or disable() methods respectively.
The callback function of the event can be set, changed or removed (set to None) at any time. Please be mindful of the event mode resulting from the change on the callback parameter. Additionally, you can set the callback to True to register an empty callback.
Stopping events have attributes that can help you keep track of hits. For example, the hit_count attribute stores the number of times the event has been triggered.
The hit_on() function is used to check if the stopping event was the cause of the process stopping. It is particularly useful when debugging multithreaded applications, as it takes a ThreadContext as a parameter. Refer to multithreading for more information.
Hijacking is a powerful feature that allows you to change the flow of the process when a stopping event is hit. It is available for both syscalls and signals, but currently not for other stopping events. When registering a hijack for a compatible stopping event, that execution flow will be replaced with another.
Example hijacking of a SIGALRM to a SIGUSR1For example, in the case of a signal, you can specify that a received SIGALRM signal should be replaced with a SIGUSR1 signal. This can be useful when you want to prevent a process from executing a certain code path. In fact, you can even use the hijack feature to \"NOP\" the syscall or signal altogether, preventing it from being executed or forwarded to the process. More information on how to use this feature in each stopping event can be found in their respective documentation.
Mixing asynchronous callbacks and hijacking can become messy. Because of this, libdebug provides users with the choice of whether to execute the callback for an event that was triggered by a callback or hijack.
This behavior is enabled by the parameter recursive, available when instantiating a syscall handler, a signal catcher, or their respective hijackers. By default, recursion is disabled.
Recursion Loop Detection
When recursive callbacks and hijacks are combined carelessly, infinite loops can be created. libdebug automatically performs checks to avoid these situations and raises an exception if an infinite loop is detected.
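One way to picture this check: treat recursive hijacks as edges of a directed graph and reject a new edge that would close a cycle. This is a hypothetical sketch, not libdebug's actual implementation:

```python
def would_loop(hijacks: dict, source: str, target: str) -> bool:
    """Return True if adding the hijack source -> target to the existing
    hijack map would create an infinite loop."""
    seen = {source}
    current = target
    # Follow the existing chain of hijacks starting from the new target.
    while current in hijacks:
        if current in seen:
            return True
        seen.add(current)
        current = hijacks[current]
    # A loop also exists if the chain ends back at the new source.
    return current in seen
```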
For example, the following code raises a RuntimeError:
handler = d.hijack_syscall(\"read\", \"write\", recursive=True)\nhandler = d.hijack_syscall(\"write\", \"read\", recursive=True)\n","boost":4},{"location":"stopping_events/signals/","title":"Signals","text":"Signals are a feature of POSIX systems (e.g., the Linux kernel) that provide a mechanism for asynchronous communication between processes and the operating system. When certain events occur (e.g., hardware interrupts, illegal operations, or termination requests) the kernel can send a signal to a process to notify it of the event. Each signal is identified by a unique integer and corresponds to a specific type of event. For example, SIGINT (usually triggered by pressing Ctrl+C) is used to interrupt a process, while SIGKILL forcefully terminates a process without cleanup.
Processes can handle these signals in different ways: they may catch and define custom behavior for certain signals, ignore them, or allow the default action to occur.
Restrictions on Signal Catching
libdebug does not support catching SIGSTOP and SIGKILL, since kernel-level restrictions prevent these signals from being caught or ignored. While SIGTRAP can be caught, it is used internally by libdebug to implement stopping events and should be used with caution.
libdebug allows you to intercept signals sent to the tracee. Specifically, you can choose to catch or hijack a specific signal (read more on hijacking).
","boost":4},{"location":"stopping_events/signals/#signal-catchers","title":"Signal Catchers","text":"Signal catchers can be created to register stopping events for when a signal is received.
Multiple catchers for the same signal
Please note that there can be at most one user-defined catcher or hijack for each signal. If a new catcher is defined for a signal that is already caught or hijacked, the new catcher will replace the old one, and a warning will be printed.
","boost":4},{"location":"stopping_events/signals/#libdebug-api-for-signal-catching","title":"libdebug API for Signal Catching","text":"The catch_signal() function in the Debugger object registers a catcher for the specified signal.
Function Signature
d.catch_signal(signal, callback=None, recursive=False) \n Parameters:
Argument Type Descriptionsignal int | str The signal number or name to catch. If set to \"*\" or \"all\", all signals will be caught. callback Callable | bool (see callback signature here) The callback function to be executed when the signal is received. recursive bool If set to True, the catcher's callback will be executed even if the signal was triggered by a hijack. Returns:
Return Type DescriptionSignalCatcher SignalCatcher The catcher object created. Inside a callback or when the process stops on hitting your catcher, you can retrieve the signal number that triggered the catcher by accessing the signal_number attribute of the ThreadContext object. Alternatively, if one exists, the signal attribute of the ThreadContext will contain the signal mnemonic corresponding to the signal number. This is particularly useful when your catcher is registered for multiple signals (e.g., with the all option), since the catcher object itself cannot tell you which signal triggered it.
Callback Signature
def callback(t: ThreadContext, catcher: SignalCatcher):\n Parameters:
Argument Type Descriptiont ThreadContext The thread that received the signal. catcher SignalCatcher The SignalCatcher object that triggered the callback. Signals in multi-threaded applications
In the Linux kernel, an incoming signal could be delivered to any thread in the process. Please do not assume that the signal will be delivered to a specific thread in your scripts.
Example usage of asynchronous signal catchers
from libdebug import debugger\n\nd = debugger(\"./test_program\")\nd.run()\n\n# Define the callback function\ndef catcher_SIGUSR1(t, catcher):\n t.signal = 0x0 # (1)!\n print(\"Look mum, I'm catching a signal\")\n\ndef catcher_SIGINT(t, catcher):\n print(\"Look mum, I'm catching another signal\")\n\n# Register the signal catchers\ncatcher1 = d.catch_signal(10, callback=catcher_SIGUSR1)\ncatcher2 = d.catch_signal('SIGINT', callback=catcher_SIGINT)\n\nd.cont()\nd.wait()\n 0x0 to prevent the signal from being delivered to the process. (Equivalent to filtering the signal).Example of synchronous signal catching
from libdebug import debugger\n\nd = debugger(\"./test_program\")\nd.run()\n\ncatcher = d.catch_signal(10)\nd.cont()\n\nif catcher.hit_on(d):\n print(\"Signal 10 was caught\")\n The script above will print \"Signal 10 was caught\".
Example of all signal catching
from libdebug import debugger\n\ndef catcher(t, catcher):\n print(f\"Signal {t.signal_number} ({t.signal}) was caught\")\n\nd = debugger(\"./test_program\")\nd.run()\n\ncatcher = d.catch_signal(\"all\")\nd.cont()\nd.wait()\n The script above will print the number and mnemonic of the signal that was caught.
","boost":4},{"location":"stopping_events/signals/#hijacking","title":"Hijacking","text":"When hijacking a signal, the user can provide an alternative signal to be executed in place of the original one. Internally, the hijack is implemented by registering a catcher for the signal and replacing the signal number with the new one.
Function Signature
d.hijack_signal(original_signal, new_signal, recursive=False) \n Parameters:
Argument Type Descriptionoriginal_signal int | str The signal number or name to be hijacked. If set to \"*\" or \"all\", all signals except the restricted ones will be hijacked. new_signal int | str The signal number or name to be delivered instead. recursive bool If set to True, the catcher's callback will be executed even if the signal was dispatched by a hijack. Returns:
Return Type DescriptionSignalCatcher SignalCatcher The catcher object created. Example of hijacking a signal
#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <signal.h>\n\n// Handler for SIGALRM\nvoid handle_sigalrm(int sig) {\n printf(\"You failed. Better luck next time\\n\");\n exit(1);\n}\n\n// Handler for SIGUSR1\nvoid handle_sigusr1(int sig) {\n printf(\"Congrats: flag{pr1nt_pr0vol4_1s_th3_w4y}\\n\");\n exit(0);\n}\n\nint main() {\n // Set up the SIGALRM handler\n struct sigaction sa_alrm;\n sa_alrm.sa_handler = handle_sigalrm;\n sigemptyset(&sa_alrm.sa_mask);\n sa_alrm.sa_flags = 0;\n sigaction(SIGALRM, &sa_alrm, NULL);\n\n // Set up the SIGUSR1 handler\n struct sigaction sa_usr1;\n sa_usr1.sa_handler = handle_sigusr1;\n sigemptyset(&sa_usr1.sa_mask);\n sa_usr1.sa_flags = 0;\n sigaction(SIGUSR1, &sa_usr1, NULL);\n\n // Set an alarm to go off after 10 seconds\n alarm(10);\n\n printf(\"Waiting for a signal...\\n\");\n\n // Infinite loop, waiting for signals\n while (1) {\n pause(); // Suspend the program until a signal is caught\n }\n\n return 0;\n}\n from libdebug import debugger\n\nd = debugger(\"./test_program\")\nd.run()\n\nhandler = d.hijack_signal(\"SIGALRM\", \"SIGUSR1\")\n\nd.cont()\n\n# Will print \"Waiting for a signal...\"\nout = pipe.recvline()\nprint(out.decode())\n\nd.wait()\n\n# Will print the flag\nout = pipe.recvline()\nprint(out.decode())\n","boost":4},{"location":"stopping_events/signals/#signal-filtering","title":"Signal Filtering","text":"Instead of setting a catcher on signals, you might want to filter which signals are not to be forwarded to the debugged process during execution.
Example of signal filtering
d.signals_to_block = [10, 15, 'SIGINT', 3, 13]\n","boost":4},{"location":"stopping_events/signals/#arbitrary-signals","title":"Arbitrary Signals","text":"You can also send an arbitrary signal to the process. The signal will be forwarded upon resuming execution. As always, you can specify the signal number or name.
Example of sending an arbitrary signal
d.signal = 10\nd.cont()\n In multithreaded applications, the same syntax applies when using a ThreadContext object instead of the Debugger object.
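Since both signal numbers and names are accepted throughout this API, a normalization step like the following can be useful in your own scripts. This is a hypothetical helper built on Python's standard signal module, not part of libdebug:

```python
import signal

def normalize_signal(sig):
    """Convert a signal given by name (e.g. 'SIGINT') or number to a number."""
    if isinstance(sig, str):
        return signal.Signals[sig].value
    return sig
```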
","boost":4},{"location":"stopping_events/stopping_events/","title":"Stopping Events","text":"Debugging a process involves stopping the execution at specific points to inspect the state of the program. libdebug provides several ways to stop the execution of a program, such as breakpoints, syscall handling and signal catching. This section covers the different stopping events available in libdebug.
","boost":4},{"location":"stopping_events/stopping_events/#is-the-process-running","title":"Is the process running?","text":"Before we dive into the different stopping events, it is important to understand how to check if the process is running. The running attribute of the Debugger object returns True if the process is running and False otherwise.
Example
from libdebug import debugger\n\nd = debugger(\"program\")\n\nd.run()\n\nif d.running:\n print(\"The process is running\")\nelse:\n print(\"The process is not running\")\n In this example, the script should print The process is not running, since the run() command gives you control over a stopped process, ready to be debugged.
To learn more about how to wait for the process to stop, or how to forcibly stop it, please read about control flow commands.
","boost":4},{"location":"stopping_events/syscalls/","title":"Syscalls","text":"System calls (a.k.a. syscalls or software interrupts) are the interface between user space and kernel space. They are used to request services from the kernel, such as reading from a file or creating a new process. libdebug allows you to trace syscalls invoked by the debugged program. Specifically, you can choose to handle or hijack a specific syscall (read more on hijacking).
For extra convenience, the Debugger and the ThreadContext objects provide an architecture-agnostic interface to the arguments and return values of syscalls. Interacting directly with these parameters lets you write scripts that are independent of the syscall calling convention of the target architecture.
Field Descriptionsyscall_number The number of the syscall. syscall_arg0 The first argument of the syscall. syscall_arg1 The second argument of the syscall. syscall_arg2 The third argument of the syscall. syscall_arg3 The fourth argument of the syscall. syscall_arg4 The fifth argument of the syscall. syscall_arg5 The sixth argument of the syscall. syscall_return The return value of the syscall. Example of Syscall Parameters
[...] # (1)!\n\nbinsh_str = d.memory.find(b\"/bin/sh\\x00\", file=\"libc\")[0]\n\nd.syscall_arg0 = binsh_str\nd.syscall_arg1 = 0x0\nd.syscall_arg2 = 0x0\nd.syscall_number = 0x3b\n\nd.step() # (2)!\n execve('/bin/sh', 0, 0) will be executed in place of the previous syscall.Syscall handlers can be created to register stopping events for when a syscall is entered and exited.
Do I have to handle both on enter and on exit?
When using asynchronous syscall handlers, you can choose to handle both or only one of the two events. However, when using synchronous handlers, both events will stop the process.
","boost":4},{"location":"stopping_events/syscalls/#libdebug-api-for-syscall-handlers","title":"libdebug API for Syscall Handlers","text":"The handle_syscall() function in the Debugger object registers a handler for the specified syscall.
Function Signature
d.handle_syscall(syscall, on_enter=None, on_exit=None, recursive=False) \n Parameters:
Argument Type Descriptionsyscall int | str The syscall number or name to be handled. If set to \"*\" or \"all\" or \"ALL\", all syscalls will be handled. on_enter Callable | bool (see callback signature here) The callback function to be executed when the syscall is entered. on_exit Callable | bool (see callback signature here) The callback function to be executed when the syscall is exited. recursive bool If set to True, the handler's callback will be executed even if the syscall was triggered by a hijack or caused by a callback. Returns:
Return Type DescriptionSyscallHandler SyscallHandler The handler object created.","boost":4},{"location":"stopping_events/syscalls/#callback-signature","title":"Callback Signature","text":"Callback Signature
def callback(t: ThreadContext, handler: SyscallHandler) -> None:\n Parameters:
Argument Type Descriptiont ThreadContext The thread that hit the syscall. handler SyscallHandler The SyscallHandler object that triggered the callback. Nuances of Syscall Handling
The syscall handler is the only stopping event that can be triggered by the same syscall twice in a row. This is because the handler is triggered both when the syscall is entered and when it is exited. As a result, the hit_on() method of the SyscallHandler object will return True in both instances.
You can also use the hit_on_enter() and hit_on_exit() functions to check if the cause of the process stop was the syscall entering or exiting, respectively.
As for the hit_count attribute, it only stores the number of times the syscall was exited.
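The enter/exit semantics just described can be modeled in a few lines of plain Python. This toy class is only an illustration of the documented behavior, not libdebug's actual SyscallHandler implementation:

```python
class ToySyscallHandler:
    """Toy model: hit_on() is True on both enter and exit,
    while hit_count only counts exits."""

    def __init__(self):
        self.hit_count = 0
        self._phase = None  # None, "enter", or "exit"

    def notify_enter(self):
        self._phase = "enter"

    def notify_exit(self):
        self._phase = "exit"
        self.hit_count += 1  # only exits are counted

    def hit_on(self):
        return self._phase in ("enter", "exit")

    def hit_on_enter(self):
        return self._phase == "enter"

    def hit_on_exit(self):
        return self._phase == "exit"


h = ToySyscallHandler()
h.notify_enter()
print(h.hit_on(), h.hit_on_enter(), h.hit_count)  # True True 0
h.notify_exit()
print(h.hit_on(), h.hit_on_exit(), h.hit_count)   # True True 1
```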
Example usage of asynchronous syscall handlers
def on_enter_open(t, handler):\n print(\"entering open\")\n t.syscall_arg0 = 0x1\n\ndef on_exit_open(t, handler):\n print(\"exiting open\")\n t.syscall_return = 0x0\n\nhandler = d.handle_syscall(syscall=\"open\", on_enter=on_enter_open, on_exit=on_exit_open)\n Example of synchronous syscall handling
from libdebug import debugger\n\nd = debugger(\"./test_program\")\nd.run()\n\nhandler = d.handle_syscall(syscall=\"open\")\nd.cont()\n\nif handler.hit_on_enter(d):\n print(\"open syscall was entered\")\nelif handler.hit_on_exit(d):\n print(\"open syscall was exited\")\n The script above will print \"open syscall was entered\".
","boost":4},{"location":"stopping_events/syscalls/#resolution-of-syscall-numbers","title":"Resolution of Syscall Numbers","text":"Syscall handlers can be created with the identifier number of the syscall or by the syscall's common name. In the second case, syscall names are resolved from a definition list for Linux syscalls on the target architecture. The list is fetched from mebeim's syscall table. We thank him for hosting such a precious resource. Once downloaded, the list is cached internally.
","boost":4},{"location":"stopping_events/syscalls/#hijacking","title":"Hijacking","text":"When hijacking a syscall, the user can provide an alternative syscall to be executed in place of the original one. Internally, the hijack is implemented by registering a handler for the syscall and replacing the syscall number with the new one.
Function Signature
d.hijack_syscall(original_syscall, new_syscall, recursive=False, **kwargs) \n Parameters:
Argument Type Descriptionoriginal_syscall int | str The syscall number or name to be hijacked. If set to \"*\" or \"all\" or \"ALL\", all syscalls will be hijacked. new_syscall int | str The syscall number or name to be executed instead. recursive bool If set to True, the handler's callback will be executed even if the syscall was triggered by a hijack or caused by a callback. **kwargs (int, optional) Additional arguments to be passed to the new syscall. Returns:
Return Type DescriptionSyscallHandler SyscallHandler The handler object created. Example of hijacking a syscall
#include <unistd.h>\n\nchar secretBuffer[32] = \"The password is 12345678\";\n\nint main(int argc, char** argv)\n{\n [...]\n\n read(0, secretBuffer, 31);\n\n [...]\n return 0;\n}\n from libdebug import debugger\n\nd = debugger(\"./test_program\")\npipe = d.run()\n\nhandler = d.hijack_syscall(\"read\", \"write\")\n\nd.cont()\nd.wait()\n\nout = pipe.recvline()\nprint(out.decode())\n In this case, the secret will be leaked to the standard output instead of being overwritten with content from the standard input.
For your convenience, you can also easily provide the syscall parameters to be used when the hijacked syscall is executed:
Example of hijacking a syscall with parameters
#include <unistd.h>\n\nchar manufacturerName[32] = \"libdebug\";\nchar secretKey[32] = \"provola\";\n\nint main(int argc, char** argv)\n{\n [...]\n\n read(0, manufacturerName, 31);\n\n [...]\n return 0;\n}\n from libdebug import debugger\n\nd = debugger(\"./test_program\")\npipe = d.run()\n\nmanufacturerBuffer = ...\n\nhandler = d.hijack_syscall(\"read\", \"write\",\n syscall_arg0=0x1,\n syscall_arg1=manufacturerBuffer,\n syscall_arg2=0x100\n)\n\nd.cont()\nd.wait()\n\nout = pipe.recvline()\nprint(out.decode())\n Again, the secret will be leaked to the standard output.
","boost":4},{"location":"stopping_events/watchpoints/","title":"Watchpoints","text":"Watchpoints are a special type of hardware breakpoint that triggers when a specific memory location is accessed. You can set a watchpoint to trigger on certain memory access conditions, or upon execution (equivalent to a hardware breakpoint).
Features of watchpoints are shared with breakpoints, so you can set asynchronous watchpoints and use properties in the same way.
","boost":4},{"location":"stopping_events/watchpoints/#libdebug-api-for-watchpoints","title":"libdebug API for Watchpoints","text":"The watchpoint() function in the Debugger object sets a watchpoint at a specific address. While you can also use the breakpoint API to set up a watchpoint, a specific API is provided for your convenience:
Function Signature
d.watchpoint(position, condition='w', length=1, callback=None, file='hybrid') \n Parameters:
Argument Type Descriptionposition int | str The address or symbol where the watchpoint will be set. condition str The type of access (see later section). length int The size of the word being watched (see later section). callback Callable | bool (see callback signature here) Used to create asynchronous watchpoints (read more on the debugging flow of stopping events). file str The backing file for relative addressing. Refer to the memory access section for more information on addressing modes. Returns:
Return Type DescriptionBreakpoint Breakpoint The breakpoint object created.","boost":4},{"location":"stopping_events/watchpoints/#valid-access-conditions","title":"Valid Access Conditions","text":"The condition parameter specifies the type of access that triggers the watchpoint. Default is write access.
\"r\" Read access AArch64 \"w\" Write access AMD64, AArch64 \"rw\" Read/write access AMD64, AArch64 \"x\" Execute access AMD64","boost":4},{"location":"stopping_events/watchpoints/#valid-word-lengths","title":"Valid Word Lengths","text":"The length parameter specifies the size of the word being watched. By default, the watchpoint is set to watch a single byte.
Watchpoint alignment in AArch64
The address of a watchpoint on AArch64-based CPUs must be aligned to 8 bytes. Hardware breakpoints, by contrast, must be aligned to 4 bytes (the size of an ARM instruction).
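Given these alignment requirements, rounding an address down to a power-of-two boundary is a common first step when picking a watchable address. This is a generic sketch, not a libdebug function:

```python
def align_down(address: int, alignment: int) -> int:
    """Round address down to the nearest multiple of a power-of-two alignment."""
    return address & ~(alignment - 1)


# AArch64: watchpoints need 8-byte alignment, hardware breakpoints 4-byte
print(hex(align_down(0x11F7, 8)))  # 0x11f0
print(hex(align_down(0x11F7, 4)))  # 0x11f4
```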
","boost":4},{"location":"stopping_events/watchpoints/#callback-signature","title":"Callback Signature","text":"If you wish to create an asynchronous watchpoint, you will have to provide a callback function. Since internally watchpoints are implemented as hardware breakpoints, the callback signature is the same as for breakpoints. As for breakpoints, if you want to leave the callback empty, you can set callback to True.
Callback Signature
def callback(t: ThreadContext, bp: Breakpoint):\n Parameters:
Argument Type Descriptiont ThreadContext The thread that hit the breakpoint. bp Breakpoint The breakpoint object that triggered the callback. Example usage of asynchronous watchpoints
def on_watchpoint_hit(t, bp):\n print(f\"RAX: {t.regs.rax:#x}\")\n\n if bp.hit_count == 100:\n print(\"Hit count reached 100\")\n bp.disable()\n\nd.watchpoint(0x11f0, condition=\"rw\", length=8, callback=on_watchpoint_hit, file=\"binary\")\n","boost":4},{"location":"blog/archive/2025/","title":"2025","text":""},{"location":"blog/archive/2024/","title":"2024","text":""}]}
\ No newline at end of file
diff --git a/0.8.1/sitemap.xml b/0.8.1/sitemap.xml
index 0a49e15..0ca5692 100644
--- a/0.8.1/sitemap.xml
+++ b/0.8.1/sitemap.xml
@@ -2,270 +2,566 @@