
📎 Extending #SemanticKernel using OllamaSharp for chat and text completion

El Bruno for Microsoft Azure

Posted on Apr 1

• Originally published at elbruno.com on Apr 1


In previous posts I shared how to host a Llama 2 model locally with Ollama and chat with it (view post).

And then I also found OllamaSharp (NuGet package and repo).

*OllamaSharp is a .NET binding for the Ollama API, making it easy to interact with Ollama using your favorite .NET languages.*

So I decided to try it and build Chat Completion and Text Generation implementations for Semantic Kernel on top of this library.

The full test is a console app using both services with Semantic Kernel.

[Demo: console showing the output from the test]

Text Generation Service

The Text Generation service is the easy one: just implement the Microsoft.SemanticKernel.TextGeneration.ITextGenerationService interface. The resulting code looks roughly like this:
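Below is a minimal sketch of such a service, assuming the Semantic Kernel 1.x interface shape and OllamaSharp's `OllamaApiClient` with its `StreamCompletion` helper. The class name is illustrative; the exact implementation lives in the repo linked at the end.

```csharp
using System.Runtime.CompilerServices;
using System.Text;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.TextGeneration;
using OllamaSharp;

// Illustrative name; the real class is in the repo linked below.
public class OllamaTextGenerationService : ITextGenerationService
{
    private readonly OllamaApiClient _client;

    public IReadOnlyDictionary<string, object?> Attributes { get; } =
        new Dictionary<string, object?>();

    public OllamaTextGenerationService(string modelName, Uri endpoint)
    {
        // OllamaApiClient talks to the local Ollama REST API
        // (default endpoint: http://localhost:11434).
        _client = new OllamaApiClient(endpoint) { SelectedModel = modelName };
    }

    public async Task<IReadOnlyList<TextContent>> GetTextContentsAsync(
        string prompt,
        PromptExecutionSettings? executionSettings = null,
        Kernel? kernel = null,
        CancellationToken cancellationToken = default)
    {
        // StreamCompletion invokes the callback once per generated chunk;
        // accumulate the chunks into a single response string.
        var response = new StringBuilder();
        await _client.StreamCompletion(prompt, null,
            stream => response.Append(stream.Response));
        return new List<TextContent> { new(response.ToString()) };
    }

    public async IAsyncEnumerable<StreamingTextContent> GetStreamingTextContentsAsync(
        string prompt,
        PromptExecutionSettings? executionSettings = null,
        Kernel? kernel = null,
        [EnumeratorCancellation] CancellationToken cancellationToken = default)
    {
        // Simplest possible streaming: wait for the full completion,
        // then yield it as a single chunk.
        foreach (var content in await GetTextContentsAsync(
            prompt, executionSettings, kernel, cancellationToken))
        {
            yield return new StreamingTextContent(content.Text);
        }
    }
}
```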

Chat Completion Service

Chat Completion requires implementing the IChatCompletionService interface. The code looks roughly like this:
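Again a minimal sketch under the same assumptions. To keep it short, this version flattens the chat history into a single prompt and reuses the completion endpoint; a fuller implementation would use OllamaSharp's chat API instead.

```csharp
using System.Runtime.CompilerServices;
using System.Text;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using OllamaSharp;

// Illustrative name; the real class is in the repo linked below.
public class OllamaChatCompletionService : IChatCompletionService
{
    private readonly OllamaApiClient _client;

    public IReadOnlyDictionary<string, object?> Attributes { get; } =
        new Dictionary<string, object?>();

    public OllamaChatCompletionService(string modelName, Uri endpoint)
    {
        _client = new OllamaApiClient(endpoint) { SelectedModel = modelName };
    }

    public async Task<IReadOnlyList<ChatMessageContent>> GetChatMessageContentsAsync(
        ChatHistory chatHistory,
        PromptExecutionSettings? executionSettings = null,
        Kernel? kernel = null,
        CancellationToken cancellationToken = default)
    {
        // Simplification: flatten the history (system, user and assistant
        // turns) into one prompt and reuse the completion endpoint.
        var prompt = new StringBuilder();
        foreach (var message in chatHistory)
        {
            prompt.AppendLine($"{message.Role}: {message.Content}");
        }
        prompt.Append("assistant: ");

        var response = new StringBuilder();
        await _client.StreamCompletion(prompt.ToString(), null,
            stream => response.Append(stream.Response));

        return new List<ChatMessageContent>
        {
            new(AuthorRole.Assistant, response.ToString())
        };
    }

    public async IAsyncEnumerable<StreamingChatMessageContent> GetStreamingChatMessageContentsAsync(
        ChatHistory chatHistory,
        PromptExecutionSettings? executionSettings = null,
        Kernel? kernel = null,
        [EnumeratorCancellation] CancellationToken cancellationToken = default)
    {
        // As above: return the full answer as a single streamed chunk.
        foreach (var message in await GetChatMessageContentsAsync(
            chatHistory, executionSettings, kernel, cancellationToken))
        {
            yield return new StreamingChatMessageContent(message.Role, message.Content);
        }
    }
}
```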

Test Chat Completion and Text Generation Services

With both services implemented, we can now code with Semantic Kernel to access these services.

The following code:

  • Creates two services, text and chat, both backed by the OllamaSharp implementations.
  • Creates a Semantic Kernel builder, registers both services, and builds a kernel.
  • Uses the kernel to run a text generation sample, and then a chat history sample.
  • In the chat sample, it also uses a system message to define the chat behavior for the conversation.
  • This is a test; there are a lot of improvements that could be made here.
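A condensed sketch of that console app, wiring the two illustrative service classes from above into a Semantic Kernel 1.x kernel (the service keys, model name, and prompts here are placeholders):

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.TextGeneration;

var endpoint = new Uri("http://localhost:11434"); // Ollama's default port
var model = "llama2";

// 1. Create the two OllamaSharp-backed services.
var textService = new OllamaTextGenerationService(model, endpoint);
var chatService = new OllamaChatCompletionService(model, endpoint);

// 2. Register both services on a Semantic Kernel builder and build a kernel.
var builder = Kernel.CreateBuilder();
builder.Services.AddKeyedSingleton<ITextGenerationService>("ollamaText", textService);
builder.Services.AddKeyedSingleton<IChatCompletionService>("ollamaChat", chatService);
var kernel = builder.Build();

// 3. Text generation sample.
var text = kernel.GetRequiredService<ITextGenerationService>("ollamaText");
var generated = await text.GetTextContentsAsync("Write a haiku about Semantic Kernel.");
Console.WriteLine(generated[0].Text);

// 4. Chat history sample, with a system message defining the chat behavior.
var chat = kernel.GetRequiredService<IChatCompletionService>("ollamaChat");
var history = new ChatHistory();
history.AddSystemMessage("You are a helpful assistant that answers briefly.");
history.AddUserMessage("What is Semantic Kernel?");
var reply = await chat.GetChatMessageContentAsync(history);
Console.WriteLine(reply.Content);
```

Registering the services as keyed singletons means other text or chat providers can sit side by side in the same kernel, with each call picking one by key.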

The full code is available here: https://github.com/elbruno/semantickernel-localLLMs. Note that the repo's main README still needs to be updated.

Happy coding!

Greetings

El Bruno

More posts in my blog ElBruno.com.

More info in https://beacons.ai/elbruno


