Streaming chat
Hopfield provides a simple way to interact with streaming chat models, with support for multiple API providers and type guarantees powered by Zod.
Usage
Use streaming chat models from OpenAI with a few lines of code:
```ts
import hop from "hopfield";
import openai from "hopfield/openai";
import OpenAI from "openai";

const hopfield = hop.client(openai).provider(new OpenAI());

const chat = hopfield.chat().streaming();

const messages: hop.inferMessageInput<typeof chat>[] = [
  {
    role: "user",
    content: "What's the coolest way to count to ten?",
  },
];

const response = await chat.get(
  {
    messages,
  },
  {
    onChunk: async (value) => {
      console.log(`Received chunk type: ${value.choices[0].__type}`);
      // do something on the server with each individual chunk as it is
      // streamed in
    },
    onDone: async (chunks) => {
      console.log(`Total chunks received: ${chunks.length}`);
      // do something on the server when the chat completion is done
      // this can be caching the response, storing in a database, etc.
      //
      // `chunks` is an array of all the streamed responses, so you
      // can access the raw content and combine how you'd like
    },
  }
);

// store all of the streaming chat chunks
const parts: hop.inferResult<typeof chat>[] = [];

for await (const part of response) {
  parts.push(part);

  // if the streaming delta contains new text content
  if (part.choices[0].__type === "content") {
    // action based on the delta for the streaming message content
    await takeAction(part.choices[0].delta.content);
  }
}
```
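In the example above, `takeAction` is a placeholder for your own per-delta handler (for instance, forwarding text to a websocket). Once the stream is exhausted, you can also reassemble the full reply from the chunks you collected. The sketch below is not a Hopfield API, just plain TypeScript over the `parts` array and the `__type`/`delta` chunk shape shown above:

```ts
// a minimal sketch: join the streamed text deltas into the complete reply
// (assumes the `parts` array and chunk shape from the example above)
const fullReply = parts
  .filter((part) => part.choices[0].__type === "content")
  .map((part) => part.choices[0].delta.content)
  .join("");

console.log(fullReply);
```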
Learn more
See how to use streaming results combined with type-driven prompt templates in the next section.