
pyinstaller hook script#709

Open
earonesty wants to merge 3 commits into abetlen:main from earonesty:main

Conversation

@earonesty (Contributor) commented Sep 13, 2023

Copies the DLL around so PyInstaller works, if anyone needs it.

Works on Windows/Linux; OSX seems to work too.

@903124 commented Oct 6, 2023

Very helpful thanks!

@bishwenduk029

@earonesty, until this PR gets merged, can we do this manually by modifying the existing .spec file generated by PyInstaller?

@earonesty (Contributor, Author)

You can just specify an additional hooks directory on the command line when you build.
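For readers unfamiliar with the mechanism: a rough sketch of that layout, assuming the hook file is named `hook-llama_cpp.py` (the file name here and the hook body, borrowed from the `collect_dynamic_libs` variant mentioned later in this thread, are illustrative):

```shell
# Hypothetical layout: put the hook script in ./hooks and point PyInstaller at it.
mkdir -p hooks
cat > hooks/hook-llama_cpp.py <<'EOF'
from PyInstaller.utils.hooks import collect_dynamic_libs
binaries = collect_dynamic_libs('llama_cpp')
EOF
# Then build with (assumes pyinstaller and llama-cpp-python are installed):
# pyinstaller --additional-hooks-dir=./hooks main.py
echo "hook in place"
```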

@inferense

this is great, thank you

antoine-lizee pushed a commit to antoine-lizee/llama-cpp-python that referenced this pull request Oct 30, 2023
@robertritz

FYI on Mac I'm also seeing libllama.dylib. I edited the hook file like so and it's working great.

```python
elif sys.platform == 'darwin':  # Mac
    so_path = os.path.join(package_path, 'llama_cpp', 'libllama.dylib')
    datas.append((so_path, 'llama_cpp'))
```
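Putting the platform branches together, a hedged sketch of the whole dispatch; the library names (`libllama.so`, `libllama.dylib`, `llama.dll`) are what commenters in this thread observed and have changed across llama-cpp-python releases, so verify against your installed package:

```python
import os
import sys

# Library names as reported in this thread; check your own site-packages.
_LIB_NAMES = {
    'linux': 'libllama.so',
    'darwin': 'libllama.dylib',
    'win32': 'llama.dll',
}

def hook_datas(package_path, platform=None):
    """Return PyInstaller (source, dest) pairs for the llama shared library."""
    platform = platform or sys.platform
    for prefix, name in _LIB_NAMES.items():
        if platform.startswith(prefix):
            so_path = os.path.join(package_path, 'llama_cpp', name)
            return [(so_path, 'llama_cpp')]
    raise ValueError(f'unsupported platform: {platform}')

print(hook_datas('/tmp/site-packages', 'darwin'))
```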

@abetlen abetlen force-pushed the main branch 2 times, most recently from 8c93cf8 to cc0fe43 Compare November 14, 2023 20:24
@demattosanthony

On Mac I'm having issues when setting n_gpu_layers to 1. Any ideas on how to fix it? I added the ggml-metal.metal file to the datas array, but still no luck:

```
llama_new_context_with_model: kv self size  = 1000.00 MiB
llama_build_graph: non-view tensors processed: 740/740
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1 Pro
ggml_metal_init: picking default device: Apple M1 Pro
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: error: could not use bundle path to find ggml-metal.metal, falling back to trying cwd
ggml_metal_init: loading 'ggml-metal.metal'
ggml_metal_init: error: Error Domain=NSCocoaErrorDomain Code=260 "The file “ggml-metal.metal” couldn’t be opened because there is no such file." UserInfo={NSFilePath=ggml-metal.metal, NSUnderlyingError=0x13fe76d20 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
llama_new_context_with_model: ggml_metal_init() failed
AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 |
```

@earonesty (Contributor, Author) commented Dec 4, 2023 via email

@demattosanthony commented Dec 4, 2023

@earonesty I added it to the datas array and rebuilt, but it's still failing.

@eric-prog commented Jan 6, 2024

Hi @earonesty! I get an error when running `pyinstaller --additional-hooks-dir=./hooks main.py` with the hooks folder created and your script file in the folder:

```
Unable to find '/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_cpp/llama.so' when adding binary and data files.
```

Any idea how to solve it?

I am trying to package a tkinter file using pyinstaller and my tkinter file has llama-cpp-python installed and imported.

@averypfeiffer

@eric-prog Just a guess, but I believe the name of the artifact was changed from `llama.so` to `libllama.so`. The same goes for the dylib and dll artifacts.

Making that small change in the script worked for me! You can verify in your own environment by checking `.venv/lib/python3.11/site-packages/llama_cpp` in your project (note: you may need to replace python3.11 with the version you're using in your venv) to see the names of the build artifacts.
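That check can be scripted; a small hypothetical helper (the function name and glob patterns are mine, not from the hook) that lists the shared-library artifacts actually shipped in an installed `llama_cpp` directory:

```python
import glob
import os

def list_artifacts(package_dir):
    """List shared-library file names anywhere under package_dir, so the hook
    can reference the right names (llama.* vs libllama.*)."""
    paths = []
    for pattern in ('*.so', '*.dylib', '*.dll'):
        paths += glob.glob(os.path.join(package_dir, '**', pattern), recursive=True)
    return sorted(os.path.basename(p) for p in paths)
```

Running it against your venv's `llama_cpp` directory shows exactly which names your hook should copy.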

@alexeygridnev commented Aug 3, 2024

Unfortunately, this pull request doesn't fix the issue for me. Adding the above-mentioned ./hooks folder with the hook-llama_cpp.py file (as per commit 3a9227c, with libllama.so) doesn't fix the problem for me on Linux. PyInstaller produces the executable, but when you try to run it, it fails with the same error

```
FileNotFoundError: Shared library with base name 'llama' not found
```

as in issue #1475.

@gudarzi commented Sep 20, 2024

Cool, but I had to change this:

```python
dll_path = os.path.join(package_path, 'llama_cpp', 'llama.dll')
```

to this:

```python
dll_path = os.path.join(package_path, 'llama_cpp', 'lib', 'llama.dll')
```

for my project to work!
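Since both layouts exist in the wild, one way to cope is to probe each location. A sketch (the function name is mine; the two subdirectories and three file names are the ones reported in this thread):

```python
import os

def find_shared_lib(package_path,
                    names=('llama.dll', 'libllama.so', 'libllama.dylib')):
    """Probe both historical locations: older wheels put the library directly
    under llama_cpp/, newer ones under llama_cpp/lib/."""
    for subdir in (('llama_cpp',), ('llama_cpp', 'lib')):
        for name in names:
            candidate = os.path.join(package_path, *subdir, name)
            if os.path.isfile(candidate):
                return candidate
    return None
```

A hook built on this keeps working when the wheel layout changes between the two known arrangements.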

@JulienElkaim commented Jan 29, 2025

Kudos for this solution! I was importing the binary manually, like:

```python
binaries=[('/path-to-my-python-env-folder/site-packages/llama_cpp/lib/libllama.dylib', 'llama_cpp/lib/')],
```

which is a pain... and not shareable with a team.

For future developers reading this:

  • The name of the file matters for PyInstaller to pick up the hook! It should be hook-<package_name>.py; here, do as the commit does.
  • If you keep the SAME implementation as this commit, at least add 'lib' to the path.join and use the Linux-style file name, i.e. "libllama.dylib". For example: `dll_path = os.path.join(package_path, 'llama_cpp', 'lib', 'libllama.dylib')`
  • As of today, using the latest pyinstaller and llama_cpp, your hook can be drastically reduced to:

```python
from PyInstaller.utils.hooks import collect_dynamic_libs

# Automatically collect all shared libraries
binaries = collect_dynamic_libs('llama_cpp')
print(f"🚀 Hook executed at compile time: {binaries}")
```

After running the pyinstaller command, you will see this print and the binaries it includes (dylib etc. are successfully added!)

@movingJin

It works for me, thanks!

@Eros483 commented May 23, 2025

When I am running the pyinstaller command
`pyinstaller --name binary-name --additional-hooks-dir=./hooks frontend.py`
I am not even getting frontend.exe. Am I using the command wrong?
@JulienElkaim, with your file placed inside the hooks folder and named hook_llama_cpp.py, how would I go about running the pyinstaller command?

