Loading model file models/open_llama_7b_preview_200bt/open_llama_7b_preview_200bt_transformers_weights/pytorch_model-00001-of-00002.bin
Traceback (most recent call last):
  File "convert-pth-to-ggml.py", line 11, in <module>
    convert.main(['--outtype', 'f16' if args.ftype == 1 else 'f32', '--', args.dir_model])
  File "/Volumes/mac/Dev/llama.cpp/convert.py", line 1145, in main
    model_plus = load_some_model(args.model)
  File "/Volumes/mac/Dev/llama.cpp/convert.py", line 1071, in load_some_model
    models_plus.append(lazy_load_file(path))
  File "/Volumes/mac/Dev/llama.cpp/convert.py", line 865, in lazy_load_file
    return lazy_load_torch_file(fp, path)
  File "/Volumes/mac/Dev/llama.cpp/convert.py", line 737, in lazy_load_torch_file
    model = unpickler.load()
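The traceback is cut off before the exception message, so the actual failure inside unpickler.load() is not visible here. As a first diagnostic step, independent of llama.cpp, one could check whether the shard is readable by plain torch.load at all; a minimal sketch, assuming PyTorch is installed and using the path from the log line above:

import torch

# Diagnostic only (not part of llama.cpp's converter): try to load the
# failing shard directly. If this also fails, the checkpoint file itself
# is likely truncated or corrupted; if it succeeds, the problem is more
# likely on convert.py's side.
path = ("models/open_llama_7b_preview_200bt/"
        "open_llama_7b_preview_200bt_transformers_weights/"
        "pytorch_model-00001-of-00002.bin")
state_dict = torch.load(path, map_location="cpu")
print(f"loaded {len(state_dict)} tensors")
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape), tensor.dtype)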
Totally agree! That's why I want to help open source maintainers monetize the ecosystem they have built. The first step: Sell add-on products/plugins/tools with a super easy Checkout paywall: https://basetools.io
I've been using VS Code for most of my software development lately. However, there have always been a few places where the Vim integration did not allow keyboard navigation. This is a list of settings I collected to allow deeper integration.
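To give a flavor of what such settings look like (an illustrative sketch, not the author's actual list): VS Code can extend Vim-style navigation beyond the editor via keybindings.json, using standard workbench commands such as workbench.action.navigateLeft/navigateRight, which move focus between editor groups, the sidebar, and panels:

// keybindings.json (illustrative): hjkl-style focus movement when the
// editor does not have keyboard focus, e.g. in the sidebar or terminal.
[
  {
    "key": "ctrl+h",
    "command": "workbench.action.navigateLeft",
    "when": "!editorTextFocus"
  },
  {
    "key": "ctrl+l",
    "command": "workbench.action.navigateRight",
    "when": "!editorTextFocus"
  }
]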
Thanks for your feedback! These are some very good points you make. We will definitely add a diagram like that to our page.
Regarding your questions: we take away the pain of managing all these different channels. All incoming messages are stored in Suplify, and you and your colleagues can reply right there and see what the others wrote.
Is that immediately clear from your landing page? Is that your main message? Shouldn't your landing page shout:
Overwhelmed by input channels?
Manage them simply with Suplify!
Combine - correlate - instant access.
Control and manage your information flow from email, Twitter, Facebook, and more.