writing.exchange is one of the many independent Mastodon servers you can use to participate in the fediverse.
A small, intentional community for poets, authors, and every kind of writer.


#GoogleColab


I can already announce that the next IC_Null stream, this Tuesday at 3 PM EST, is about the #accessibility of notebooks like #Jupyter, #GoogleColab, and the #VSCode implementation of this concept. If you do #dataScience, #machineLearning, or even the polarizing #AI stuff, you are going to run into these at some point. Can a #screenReader user use them properly? Where are the #accessibility hurdles? Do any of them work better than the others? These are all things I'll try to go into.
See you Tuesday, 3 PM EST over at https://twitch.tv/ic_null or youtube.com/@blindlyCoding #selfPromo


The library for simple creation of chatbots that I am developing for Python has a strange problem when running on #GoogleColab: on my computers and laptops the model is loaded directly into RAM when the prompt() function is executed, but on Google Colab it looks as if the model does not load for about 10 minutes (suspiciously low RAM usage). After that, the normal amount of RAM is used (it looks like the entire model is loaded), and shortly afterwards the text is generated.

The library is written in Rust and uses llama.cpp and rustformers' llm.

If anyone knows how to solve this, I would be very grateful.
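
For anyone who wants to compare local and Colab behavior, a minimal timing sketch along these lines may help; the module name and the exact prompt() call here are assumptions on my part, so check the repository for the real API:

import time
import resource  # stdlib; Colab runs Linux, where ru_maxrss is reported in KiB

import ai_companion_py as ai  # hypothetical import name for this library

start = time.time()
reply = ai.prompt("Hello!")  # the call that reportedly stalls on Colab
elapsed = time.time() - start

peak_mib = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024
print(f"prompt() took {elapsed:.1f} s, peak RSS ~ {peak_mib:.0f} MiB")

Running this on a local machine and in a Colab notebook should show whether the 10-minute gap happens before or during memory growth, which would help narrow down where the loading stalls.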

Link to issue:
https://github.com/Hukasx0/ai-companion-py/issues/6

Link to the repository:
https://github.com/Hukasx0/ai-companion-py

#OpenSource #GitHub #llama2 #llama #programming #coding #rust #rustlang #typescript #llm #artificialintelligence #python #google #chatbot #chatbots
