matt/ollama37
Mirror of https://github.com/dogkeeper886/ollama37.git
ollama37/llm at commit 18d9a7e1f19fd2467b98e59a348ee1fa102f632b
Latest commit: Jeffrey Morgan, 18d9a7e1f1, "update llama.cpp submodule to f364eb6 (#4060)", 2024-04-30 17:25:39 -04:00
Name                    Last commit                                                              Date
ext_server              update llama.cpp submodule to f364eb6 (#4060)                            2024-04-30 17:25:39 -04:00
generate                Do not build AVX runners on ARM64                                        2024-04-26 23:55:32 -06:00
llama.cpp @ f364eb6fb5  update llama.cpp submodule to f364eb6 (#4060)                            2024-04-30 17:25:39 -04:00
patches                 Fix clip log import                                                      2024-04-26 09:43:46 -07:00
ggla.go                 refactor tensor query                                                    2024-04-10 11:37:20 -07:00
ggml.go                 fix: mixtral graph                                                       2024-04-22 17:19:44 -07:00
gguf.go                 fixes for gguf (#3863)                                                   2024-04-23 20:57:20 -07:00
llm_darwin_amd64.go     Switch back to subprocessing for llama.cpp                               2024-04-01 16:48:18 -07:00
llm_darwin_arm64.go     Switch back to subprocessing for llama.cpp                               2024-04-01 16:48:18 -07:00
llm_linux.go            Switch back to subprocessing for llama.cpp                               2024-04-01 16:48:18 -07:00
llm_windows.go          Move nested payloads to installer and zip file on windows                2024-04-23 16:14:47 -07:00
llm.go                  Add import declaration for windows,arm64 to llm.go                       2024-04-26 23:23:53 -06:00
memory.go               fix gemma, command-r layer weights                                       2024-04-26 15:00:55 -07:00
payload.go              Move nested payloads to installer and zip file on windows                2024-04-23 16:14:47 -07:00
server.go               llm: dont cap context window limit to training context window (#3988)   2024-04-29 10:07:30 -04:00
status.go               Switch back to subprocessing for llama.cpp                               2024-04-01 16:48:18 -07:00