• Re: AI LLM Artificial intelligence infrastructure

    From Agent@VERT/DMINE to BBSING on Mon Mar 9 19:16:45 2026
    On 19/03/2026 22:51, bbsing wrote:

    I've just spoken with a friend who is a software engineer, and he says it's eat or be eaten in the software development world. The CEO of the company he works for is running with only a third of the previous dev staff, and still making significant progress using AI LLM tools.

    It's mostly hype, and a lot of business failures (among those keen on letting developers go) are going to be realised sooner rather than later.

    AI is a toolset in a developer's toolbox. LLMs are one of those tools. In that way, they are no more or less than a compiler, debugger, or IDE. Should we shun those things and go back to punchcards?

    People need carpenters to build a wooden chair from scratch with quality craftsmanship; they'll still need software developers in the future to ensure software is built with the same concept of quality craftsmanship.

    The other thing to bear in mind: no one wants or needs yet another Photoshop; there are plenty of alternatives. The idea that "anyone can make their own" is not only false, but acting on it would be a productivity failure for a company, not a productivity boost. They should focus on their own business, not reinvent someone else's (unless they genuinely intend to compete).

    Then there's the liability issue. Say a company relies on Excel, but they decide to vibe-code their own - now if it gets math wrong, who is liable? The Post Office Horizon scandal in the UK is a pre-AI example of this kind of responsibility-meets-liability issue.
    ---
    þ Synchronet þ Diamond Mine Online BBS - bbs.dmine.net:24 - Fredericksburg, VA USA
  • From phigan@VERT/TACOPRON to bbsing on Tue Mar 10 13:57:53 2026
    Re: AI LLM Artificial intelligence infrastructure
    By: bbsing to All on Sun Mar 08 2026 09:47 pm

    Is anyone out there doing self-hosted LLM artificial intelligence for infrastructure use?

    Haha, a friend of mine tried it, even with some model that was supposed to be good at doing stuff for you, and it totally messed up his OS install.

    He tried it several times, even.

    I would definitely not trust LLMs to do any live systems managing. You might be able to utilize one to create network install images or a one-time setup that you actively work on with it, but you certainly wouldn't want to let it just do its own thing. For someone who knows what they are doing, it would probably take much less time to do it themselves than to have an LLM do it for them. It might save time for someone who is clueless and has never been a sysadmin at all, but for a pro it would be a huge waste of time to constantly correct the LLM every time it does something wrong.

    Speaking of which, I'm an unemployed infrastructure sysadmin, if you should happen to see anything open ;).

    ---
    þ Synchronet þ TIRED of waiting 2 hours for a taco? GO TO TACOPRONTO.bbs.io
  • From phigan@VERT/TACOPRON to bbsing on Tue Mar 10 14:01:15 2026
    Re: AI LLM Artificial intelligence infrastructure
    By: bbsing to MRO on Mon Mar 09 2026 09:23 am

    Trying to get a system capable of running LLM models equals lots of cash, and I'm on the fence if I should get into it, so I'm wondering if anyone out her

    I've got a system set up with ollama and 24 GB of VRAM. I can show you what a self-hosted model can do via chat or something. It's really not very good.
    A cloud-hosted one might do better, but the risk is still extremely high of it doing things you don't want.
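    For anyone who wants to try this themselves, a minimal sketch of driving a local ollama install from the shell (model names are examples taken from this thread; substitute whatever you have pulled):

```shell
# Download a model (~24b-parameter models fit in 24 GB of VRAM at 4-bit quantization)
ollama pull qwen3-coder:30b

# Interactive chat in the terminal
ollama run qwen3-coder:30b

# Or hit the local HTTP API (ollama listens on port 11434 by default)
curl -s http://localhost:11434/api/generate \
  -d '{"model": "qwen3-coder:30b", "prompt": "Write a bash one-liner to list listening ports.", "stream": false}'
```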

    ---
    þ Synchronet þ TIRED of waiting 2 hours for a taco? GO TO TACOPRONTO.bbs.io
  • From bbsing@VERT/LUNAROUT to phigan on Tue Mar 10 22:31:04 2026
    Re: AI LLM Artificial intelligence infrastructure
    By: phigan to bbsing on Tue Mar 10 2026 14:01:15

    Hi phigan,

    Re: AI LLM Artificial intelligence infrastructure
    By: bbsing to MRO on Mon Mar 09 2026 09:23 am

    Trying to get a system capable of running LLM models equals lots of cash, and I'm on the fence if I should get into it, so I'm wondering if anyone out her

    I've got a system set up with ollama and 24gb vram. I can show you what a

    What models are you liking, and what sizes are you getting good TPS?
    What do you think is good TPS?
    Have you tried Deepseek-v3?
    vllm?
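    For reference, a rough way to get TPS numbers out of ollama itself (assuming an ollama install and an already-pulled model; devstral:24b here is just an example):

```shell
# --verbose makes ollama print timing stats after each response,
# including "eval rate" (generation tokens/s) and "prompt eval rate".
ollama run devstral:24b --verbose <<'EOF'
Explain TCP slow start in two sentences.
EOF
```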

    hosted model can do via chat or something. It's really not very good.
    A cloud-hosted one might do better, but the risk is still extremely high of it doing things you don't want.


    Thirteen years ago, messing things up was my concern with scripting aspects of sysadmin work. I worked with a junior admin who was really taking to PowerShell, but at the time he was new to it. My worry was that scripts have a great way of doing things quickly, but also of screwing things up quickly. He was a great team member and his scripting use inspired me to adopt his ideas. He left for a different job closer to home; I miss working with him. Now most of my work is via scripting because I have too much to do and without scripts it takes too long. I really get to know individual systems better with full hands-on work than with scripts, and I like knowing each system and its behavior, but efficiency demands quickness.

    I've been contemplating building a system for personal LLM lab work on infrastructure, with the idea of descriptive text driving total orchestration: builds, configuration, and management on Windows/Linux/BSD-type systems. I'm having a hell of a time getting costs down. 24 GB of VRAM is expensive, about as expensive as a 32 GB RTX 5090.
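    As a back-of-envelope check on why 24 GB of VRAM keeps coming up for the ~24b-parameter models mentioned in this thread (the numbers below are illustrative; KV cache and runtime overhead come on top, often another 10-30%):

```shell
# Rough VRAM needed just for the weights of a quantized model:
#   bytes ~= parameter_count * bits_per_weight / 8
params=24000000000      # a 24b-parameter model
bits_per_weight=4       # Q4 quantization
weight_bytes=$(( params * bits_per_weight / 8 ))
weight_gib=$(( weight_bytes / 1024 / 1024 / 1024 ))
echo "weights alone: ~${weight_gib} GiB"
```

So a Q4-quantized 24b model leaves only modest headroom on a 24 GB card once context and overhead are added, which is why the 32 GB RTX 5090 class keeps coming up despite the price.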

    I've looked at so many builds with the RTX 3060, RTX 4090, and RTX 5090. 3060s are gone, 4090s are way overpriced, and if one burns up it's likely not going to be the card you replace it with. RTX 5080s are around, and the older cards are just as good, but they're way over MSRP. It's been pretty dang frustrating.

    Back in 2000, I was facing a similar issue. I was priced out of hardware and OS due to cost, so I couldn't build a lab to learn on. Now I'm in the same place. I don't really want to spend $8K on a build that is obsolete in two years. I also don't want to miss out on all the hype/fun/understanding.

    I was hoping distributed computing would have been the thing instead of giant-LLM-type shtuff.

    ---
    þ Synchronet þ Lunar Outpost BBS
  • From Arelor@VERT/PALANTIR to bbsing on Wed Mar 11 14:23:49 2026
    Re: AI LLM Artificial intelligence infrastructure
    By: bbsing to All on Sun Mar 08 2026 09:47 pm


    I've just spoken with a friend who is a software engineer, and he says it's eat or be eaten in the software development world. The CEO of the company he works for is running with only a third of the previous dev staff, and still making significant progress using AI LLM tools.


    I get contradictory signals in that regard.

    I think the people asking to get AI integrated with workflows and selecting employees according to their suitability for AI work are typically management types rather than people doing the actual work. In the end you get AI pushed into both things that benefit from it and things that don't.

    You can get some LLMs to write good boilerplate for your Terraform/OpenTofu/Ansible/whatever deployments. However, at this point you would be crazy to let an AI agent act as an orchestrator for you.
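    A sketch of that boilerplate-only split, where the model drafts and a human keeps the apply step (model name and prompt are illustrative; the ollama endpoint is its default on port 11434):

```shell
# Ask a local model for an IaC draft; never let it touch real infrastructure.
curl -s http://localhost:11434/api/generate -d '{
  "model": "qwen3-coder:30b",
  "prompt": "Write a minimal OpenTofu config for an nginx container using the Docker provider.",
  "stream": false
}' > draft.json

# Review the draft by hand before it touches anything real:
# extract the "response" field into main.tf, read it, then
#   tofu plan      # dry run only
#   tofu apply     # only after a human has reviewed the plan
```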


    --
    gopher://gopher.richardfalken.com/1/richardfalken

    ---
    þ Synchronet þ Palantir BBS * palantirbbs.ddns.net * Pensacola, FL
  • From phigan@VERT/TACOPRON to bbsing on Wed Mar 11 17:13:54 2026
    Re: AI LLM Artificial intelligence infrastructure
    By: bbsing to phigan on Tue Mar 10 2026 10:31 pm

    What models are you liking, and what sizes are you getting good TPS?
    What do you think is good TPS?
    Have you tried Deepseek-v3?
    vllm?

    I'm not liking any of them. I haven't tried qwen2.5 yet; I just downloaded it. The rest of them aren't any good, at least for what I've been trying, which is code.

    devstral:24b
    codellama:34b
    qwen3-coder:30b

    I had the most success using 'Zed' as a front end to the qwen3-coder model. Codellama actually refused to write code at all. Devstral says it's going to write code but never does and gets stuck in a loop. I also tried installing Claude and launching it with those same models. Codellama wouldn't open at all, saying it didn't support tools or whatever. Devstral again got stuck in a loop telling me it was going to do things but not actually doing them. qwen3-coder kept going back and forth: fixing one thing, breaking another, then breaking the first thing when it fixed the second, etc.

    My GPU was second hand off Craigslist after scouring that and eBay quite a bit.

    Still haven't found a good front end to use, but I have not tried Open WebUI just yet.

    ---
    þ Synchronet þ TIRED of waiting 2 hours for a taco? GO TO TACOPRONTO.bbs.io
  • From bbsing@VERT/LUNAROUT to Arelor on Wed Mar 11 21:41:55 2026
    Re: AI LLM Artificial intelligence infrastructure
    By: Arelor to bbsing on Wed Mar 11 2026 14:23:49

    I think the people asking to get AI integrated with workflows and selecting employees according to their suitability for AI work are typically management types rather than people doing the actual work. In the end you get AI pushed into both things that benefit from it and things that don't.

    Management's dream is to have no employees.

    In the minds of those who don't do the work, it really seems these tools can do whatever anyone asks. They don't think about quality and accuracy as significant differentiators for success against competitors in the market. A lot of the time, management is focused on speed and efficiency for productivity/cost reduction: make widgets faster = more money. When management receives negative feedback from the market, it's usually too late for the employees they've let go.


    You can get some LLMs to write good boilerplate for your Terraform/OpenTofu/Ansible/whatever deployments. However, at this point you would be crazy to let an AI agent act as an orchestrator for you.

    I used to hear a lot about infrastructure as code, not as much over the past 3 years.

    Probabilistic, non-deterministic output is a problem that doesn't seem to be going away.


    But the media and hype machine is saying all jobs done on a screen will be targets for AI/LLMs to take over.

    What's a person to do?
    Learn these new tools?
    Stick with traditional methods?
    A bit of both?

    Cost is such an issue these days in the LLM space.

    ---
    þ Synchronet þ Lunar Outpost BBS
  • From Lonewolf@VERT/BINARYDR to phigan on Sat Mar 14 04:37:48 2026
    Re: AI LLM Artificial intelligence infrastructure
    By: phigan to bbsing on Wed Mar 11 2026 05:13 pm

    Re: AI LLM Artificial intelligence infrastructure
    What models are you liking, and what sizes are you getting good TPS?
    What do you think is good TPS?
    Have you tried Deepseek-v3?
    vllm?

    I'm not liking any of them. I haven't tried qwen2.5 out yet, I just downloaded it, but the rest of them aren't any good. At least for what I've been trying, which is code.

    devstral:24b
    codellama:34b
    qwen3-coder:30b

    My GPU was second hand off Craigslist after scouring that and eBay quite a bit.
    Still haven't found a good front end to use, but I have not tried Open WebUI just yet.

    Have you tried llama.cpp? I've got Ollama, LM Studio, and llama.cpp on my 24 GB VRAM system. llama.cpp is definitely better in all areas, and it has a nice web UI too.
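    For anyone following along, a minimal llama-server invocation looks roughly like this (model path and flags are examples; adjust to your GGUF file and GPU):

```shell
# llama.cpp's bundled server includes a built-in web UI.
llama-server \
  -m ./models/qwen3-coder-30b-q4_k_m.gguf \
  --port 8080 \
  -ngl 99   # offload as many layers as fit onto the GPU

# Then browse to http://localhost:8080 for the chat UI, or use the
# OpenAI-compatible endpoint:
curl -s http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"messages":[{"role":"user","content":"hello"}]}'
```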

    Lonewolf
    ---
    þ Synchronet þ Fireside BBS - AI-WX - firesidebbs.com:23231