subprocess llama.cpp server (#401)

* remove c code
* pack llama.cpp
* use request context for llama_cpp
* let llama_cpp decide the number of threads to use
* stop llama runner when app stops
* remove sample count and duration metrics
* use go generate to get libraries
* tmp dir for running llm (see the Go sketch after this list)
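Taken together, these bullets describe a pattern roughly like the following Go sketch. This is not the commit's actual code: `startRunner`, `stopRunner`, `runnerBytes`, the binary name, the port, and the `generate.sh` script are all illustrative assumptions.

```go
package llm

import (
	"context"
	"os"
	"os/exec"
	"path/filepath"
	"syscall"
)

// Hypothetical generate script that fetches and builds the llama.cpp
// libraries, per the "use go generate to get libraries" bullet:
//go:generate sh ./llama.cpp/generate.sh

// startRunner writes a packed llama.cpp server binary into a fresh temp
// dir and starts it as a subprocess. Cancelling ctx (e.g. when the
// request that loaded the model goes away) kills the subprocess.
func startRunner(ctx context.Context, runnerBytes []byte) (*exec.Cmd, error) {
	dir, err := os.MkdirTemp("", "llama")
	if err != nil {
		return nil, err
	}
	bin := filepath.Join(dir, "llama-server")
	if err := os.WriteFile(bin, runnerBytes, 0o755); err != nil {
		return nil, err
	}
	// No thread-count flag is passed: llama.cpp decides how many
	// threads to use on its own.
	cmd := exec.CommandContext(ctx, bin, "--port", "11434")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd, cmd.Start()
}

// stopRunner asks the runner to exit cleanly; SIGINT lets llama.cpp
// unload any loaded models before the process dies.
func stopRunner(cmd *exec.Cmd) error {
	return cmd.Process.Signal(syscall.SIGINT)
}
```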
Bruce MacDonald committed on 2023-08-30 16:35:03 -04:00 (committed by GitHub)
parent f4432e1dba · commit 42998d797d
37 changed files with 958 additions and 43928 deletions


@@ -158,7 +158,7 @@ function restart() {
 app.on('before-quit', () => {
   if (proc) {
     proc.off('exit', restart)
-    proc.kill()
+    proc.kill('SIGINT') // send SIGINT signal to the server, which also stops any loaded llms
   }
 })
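On the Go side, the server has to honor that SIGINT by unloading any loaded llms before exiting. A minimal sketch of that contract, assuming the server traps the signal with `signal.NotifyContext`; the `unloadModels` call is hypothetical:

```go
package main

import (
	"context"
	"os/signal"
	"syscall"
)

func main() {
	// Exit cleanly when Electron's before-quit handler sends SIGINT.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT)
	defer stop()

	// ... start the HTTP API and serve requests here ...

	<-ctx.Done()
	// unloadModels() // hypothetical: stop any llama.cpp runner subprocess
}
```

Sending SIGINT instead of the default SIGTERM gives the Go process a chance to run this shutdown path rather than being killed mid-request.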