Alright, friends, I’m back with another post based on my learnings and exploration of AI and how it’ll fit into our work as network engineers. In today’s post, I want to share the first (of what will likely be many) “nerd knobs” that I think we all should be aware of and how they’ll impact our use of AI and AI tools. I can already sense the excitement in the room. After all, there’s not much a network engineer likes more than tweaking a nerd knob in the network to fine-tune performance. And that’s exactly what we’ll be doing here. Fine-tuning our AI tools to help us be more effective.
First up, the requisite disclaimer or two.
- There are SO MANY nerd knobs in AI. (Shocker, I know.) So, if you all like this kind of blog post, I’d be happy to come back in other posts where we look at other “knobs” and settings in AI and how they work. Well, I’d be happy to come back once I understand them, at least. 🙂
- Changing any of the settings in your AI tools can have dramatic effects on results. This includes increasing the resource consumption of the AI model, as well as increasing hallucinations and decreasing the accuracy of the information that comes back from your prompts. Consider yourselves warned. As with all things AI, go forth and explore and experiment. But do so in a safe, lab environment.
For today’s experiment, I’m once again using LMStudio running locally on my laptop rather than a public or cloud-hosted AI model. For more details on why I like LMStudio, check out my last blog, Creating a NetAI Playground for Agentic AI Experimentation.
Enough of the setup, let’s get into it!
The impact of working memory size, a.k.a. “context”
Let me set the scene for you.
You’re in the middle of troubleshooting a network issue. Someone reported, or noticed, instability at a point in your network, and you’ve been assigned the joyful task of getting to the bottom of it. You captured some logs and relevant debug information, and the time has come to go through it all to figure out what it means. But you’ve also been using AI tools to be more productive, 10x your work, impress your boss—you know, all the things that are happening right now.
So, you decide to see if AI can help you work through the data faster and get to the root of the issue.
You fire up your local AI assistant. (Yes, local—because who knows what’s in the debug messages? Best to keep it all safe on your laptop.)
You tell it what you’re up to, and paste in the log messages.


After pasting 120 or so lines of logs into the chat, you hit enter, kick up your feet, reach for your Arnold Palmer for a refreshing drink, and wait for the AI magic to happen. But before you can take a sip of that iced tea and lemonade goodness, you see this has suddenly popped up on the screen:


Oh my.
“The AI has nothing to say.”!?! How could that be?
Did you find a question so difficult that AI can’t handle it?
No, that’s not the problem. Look at the helpful error message that LMStudio has kicked back:
“Trying to keep the first 4994 tokens when the context overflows. However, the model is loaded with context length of only 4096 tokens, which is not enough. Try to load the model with a larger context length, or provide shorter input.”
And we’ve gotten to the root of this perfectly scripted storyline and demonstration. Every AI tool out there has a limit to how much “working memory” it has. The technical term for this working memory is “context length.” If you try to send more data to an AI tool than can fit into the context length, you’ll hit this error, or something like it.
The error message indicates that the model was “loaded with context length of only 4096 tokens.” What’s a “token,” you wonder? Answering that would be a topic for an entirely different blog post, but for now, just know that “tokens” are the unit of measurement for the context length. And the first thing that’s done when you send a prompt to an AI tool is that the prompt is converted into “tokens”.
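Since tokens, not characters, are what count against the window, it helps to estimate before you paste. Here’s a minimal Python sketch using the common (but approximate) four-characters-per-token rule of thumb. The helper names and the 512-token reply headroom are my own illustrative choices, not anything LMStudio provides:

```python
# Rough token estimate for a prompt before sending it to a model.
# NOTE: ~4 characters per token is a rule of thumb for English text,
# not an exact tokenizer; real counts vary by model.

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Return a rough token count for `text`."""
    return max(1, round(len(text) / chars_per_token))

def fits_in_context(prompt: str, context_length: int,
                    reply_headroom: int = 512) -> bool:
    """Check whether a prompt (plus room for the reply) fits a context window."""
    return estimate_tokens(prompt) + reply_headroom <= context_length

# A stand-in for 120 lines of captured log output
log_blob = "\n".join(
    f"%ASA-6-302013: Built outbound TCP connection {i}" for i in range(120)
)
print(estimate_tokens(log_blob), fits_in_context(log_blob, 4096))
```

A real tokenizer for your specific model will give different numbers, but for a quick “will this even fit?” sanity check, napkin math like this is usually close enough.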
So what do we do? Well, the message gives us two possible options: we can increase the context length of the model, or we can provide shorter input. Sometimes it isn’t a big deal to provide shorter input. But other times, like when we are dealing with large log files, that option isn’t practical—all of the data is important.
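If you do go the “shorter input” route, one practical approach is to split the logs into chunks that each fit the window and have the model summarize them separately. A rough sketch, where the token budget and chars-per-token ratio are assumptions to tune for your model:

```python
# A minimal sketch of splitting a large log capture into chunks that each
# fit a model's context window. `max_tokens` and the 4-chars-per-token
# estimate are assumptions; tune both for your model.

def chunk_log_lines(lines: list[str], max_tokens: int = 3500,
                    chars_per_token: float = 4.0) -> list[str]:
    """Group whole log lines into chunks under a rough token budget."""
    budget_chars = int(max_tokens * chars_per_token)
    chunks, current, size = [], [], 0
    for line in lines:
        # Flush the current chunk if adding this line would overflow it
        if current and size + len(line) + 1 > budget_chars:
            chunks.append("\n".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line) + 1  # +1 for the newline
    if current:
        chunks.append("\n".join(current))
    return chunks
```

You’d then send each chunk as its own prompt and ask for a final pass over the per-chunk summaries. The downside, as the post notes, is that the model never sees all the data at once, which is why turning the context-length knob is often the better fix.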
Time to turn the knob!
It’s that first option, to load the model with a larger context length, that’s our nerd knob. Let’s turn it.
From within LMStudio, head over to “My Models” and click to open the configuration settings interface for the model.


You’ll get a chance to view all the knobs that AI models have. And as I mentioned, there are a lot of them.


But the one we care about right now is the Context Length. We can see that the default length for this model is 4096 tokens. But it supports up to 8192 tokens. Let’s max it out!


LMStudio provides a helpful warning and a likely reason why the model doesn’t default to the max. The context length takes memory and resources, and raising it to “a high value” can impact performance and usage. So if this model had a max length of 40,960 tokens (the Qwen3 model I use often has that high of a max), you might not want to just max it out right away. Instead, increase it a little at a time to find the sweet spot: a context length big enough for the job, but not oversized.
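That “increase it a little at a time” advice can be expressed as a tiny helper: given a rough prompt token count and the sizes a model supports, pick the smallest one that still leaves room for a reply. The candidate sizes and headroom below are illustrative napkin math, not an LMStudio API; you still set the actual knob in the UI:

```python
# Sketch of picking the smallest sufficient context length rather than
# maxing the knob out. The 1024-token reply headroom is an assumption.

def smallest_sufficient_context(prompt_tokens: int,
                                supported_sizes: list[int],
                                reply_headroom: int = 1024) -> int:
    """Return the smallest supported context length covering prompt + reply."""
    needed = prompt_tokens + reply_headroom
    for size in sorted(supported_sizes):
        if size >= needed:
            return size
    raise ValueError(f"Need ~{needed} tokens; max supported is {max(supported_sizes)}")

# The scenario from the post: ~4994 prompt tokens, model supports up to 8192
print(smallest_sufficient_context(4994, [4096, 8192]))
```

For the model in this post there are only two realistic settings, but for a model with a 40,960-token max you’d have many intermediate stops, and this kind of estimate tells you where to start turning.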
As network engineers, we’re used to fine-tuning knobs for timers, frame sizes, and so many other things. This is right up our alley!
Once you’ve updated your context length, you’ll need to “Eject” and “Reload” the model for the setting to take effect. But once that’s done, it’s time to take advantage of the change we’ve made!
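Once the model is reloaded, you also don’t have to stay in the chat window: LMStudio can serve an OpenAI-compatible API locally, by default on port 1234. Here’s a sketch of sending logs to it from Python. The model name, the prompts, and the temperature choice are all placeholders of mine, not values from the post:

```python
import json
import urllib.request

# Default address of LMStudio's local OpenAI-compatible server; adjust the
# port if you changed it. "local-model" is a placeholder: use whatever
# identifier your local server reports for the loaded model.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(logs: str, model: str = "local-model") -> dict:
    """Build a chat-completion payload asking the model to analyze logs."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You are a network engineer's assistant. Analyze the logs you are given."},
            {"role": "user",
             "content": f"Review these logs and summarize likely issues:\n{logs}"},
        ],
        "temperature": 0.2,  # keep the analysis focused rather than creative
    }

def analyze_logs(logs: str) -> str:
    """POST the request to the local server and return the model's reply."""
    req = urllib.request.Request(
        LMSTUDIO_URL,
        data=json.dumps(build_chat_request(logs)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Nothing here runs automatically; wire `analyze_logs` into your own tooling with the server started. If the prompt still overflows the window, the server returns an error much like the one we saw in the chat UI.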


And look at that: with the larger context window, the AI assistant was able to go through the logs and give us a nice write-up about what they show.
I particularly like the shade it threw my way: “…consider seeking assistance from … a qualified network engineer.” Well played, AI. Well played.
But bruised ego aside, we can continue the AI-assisted troubleshooting with something like this.


And we’re off to the races. We’ve been able to leverage our AI assistant to:
- Process a large amount of log and debug data to identify possible issues
- Develop a timeline of the problem (which would be super useful in the help desk ticket and root cause analysis documents)
- Identify some next steps we can take in our troubleshooting efforts.
All stories must end…
And so there you have it, our first AI Nerd Knob—Context Length. Let’s review what we learned:
- AI models have a “working memory” that’s called “context length.”
- Context length is measured in “tokens.”
- Oftentimes an AI model will support a higher context length than the default setting.
- Increasing the context length will require more resources, so make changes slowly; don’t just max it out completely.
Now, depending on what AI tool you’re using, you may NOT be able to adjust the context length. If you’re using a public AI like ChatGPT, Gemini, or Claude, the context length will depend on the subscription and models you have access to. However, there most definitely IS a context length that will factor into how much “working memory” the AI tool has. And being aware of that fact, and its impact on how you can use AI, is important. Even if the knob in question is behind lock and key. 🙂
If you enjoyed this look under the hood of AI and want to learn about more options, please let me know in the comments: Do you have a favorite “knob” you like to turn? Share it with all of us. Until next time!
PS… If you’d like to learn more about using LMStudio, my friend Jason Belk put together a free tutorial called Run Your Own LLM Locally For Free and with Ease that can get you started very quickly. Check it out!

