Streaming chat response
Middlebop also supports streaming chat responses for that "word for word" live-writing look. When streaming, you receive each part of the response as soon as the AI model has produced it. Every event in the stream is called a chunk. To keep the stream data structure simple, it follows the same message syntax as a non-streaming chat response.
Create a streaming chat
The input is the same as when creating a non-streaming chat response. When calling the streaming chat function you pass a callback handler that receives every chunk. You can also pass in an error handler.
import {
  MiddlebopChatCompletionStreamChunk,
  MiddlebopChatMessage,
  startChatStream,
} from "@middlebop/client";

const middlebopApiKey = "mb-yourSuperSecretApiKey";

const messages: MiddlebopChatMessage[] = [
  {
    role: "system",
    content: {
      type: "text",
      text: "You are a helpful assistant called beep-boop",
    },
  },
  {
    role: "user",
    content: {
      type: "text",
      text: "Hello, who are you?",
    },
  },
];

// Called once for every chunk as it arrives
const onStreamResponse = (chunk: MiddlebopChatCompletionStreamChunk) => {
  console.log(chunk);
};

// Called if the stream fails
const onStreamError = (error: unknown) => {
  console.error(error);
};

startChatStream(
  {
    messages,
    model: "gpt-4", // pass in any supported model
    middlebopApiKey,
  },
  onStreamResponse,
  onStreamError
);
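To display the reply as it is being written, you will typically accumulate the chunks yourself. Below is a minimal sketch of such a handler; it assumes the chunk shape shown in the Response section further down, and you would pass it to startChatStream in place of the logging handler above.

import { MiddlebopChatCompletionStreamChunk } from "@middlebop/client";

let fullText = "";

const onAccumulatingStreamResponse = (
  chunk: MiddlebopChatCompletionStreamChunk
) => {
  // Append the text of every message in this chunk to the running reply
  for (const message of chunk.messages) {
    if (message.content.type === "text") {
      fullText += message.content.text;
    }
  }
  console.log(fullText); // replace with your own UI update
};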
Response
The response is a series of MiddlebopChatCompletionStreamChunk objects, passed to your response handler one at a time. Each chunk looks like this:
// first chunk
{
  "type": "chunk",
  "model": "gpt-4",
  "messages": [
    {
      "role": "assistant",
      "content": {
        "type": "text",
        "text": "Hello"
      }
    }
  ]
}

// second chunk
{
  "type": "chunk",
  "model": "gpt-4",
  "messages": [
    {
      "role": "assistant",
      "content": {
        "type": "text",
        "text": "!"
      }
    }
  ]
}

// third chunk
{
  "type": "chunk",
  "model": "gpt-4",
  "messages": [
    {
      "role": "assistant",
      "content": {
        "type": "text",
        "text": " I"
      }
    }
  ]
}

// etc...
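Appending the text field of each chunk, as in the sketch above, rebuilds the reply word for word: the three example chunks combine to "Hello! I".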