Ho-ly shit… I had forgotten this particular bit.
But yeah, me too. Undead Mummy Ernie…
Just saying.
… Saying what, exactly?
I said that we should
And you argued… The same thing? Just in the reverse order?
USA: Real barbeque. I don’t mean braised meat slathered in a sticky sauce, either. I mean tough cuts of meat, cooked slow and low over woodsmoke until it is fall-off-the-bone tender. No sauce required.
Much easier to find this in the southern US, with Texas, Missouri, and the Carolinas all being particularly famous BBQ regions. In the northern states, your best bet is gonna be to find someone local with a smoker - not just a grill.
Bro - no mention of Texas BBQ? Beef brisket with Texas-style BBQ beans (savory, not sweet for those who haven’t had them) is amazing.
Have you ever been in an old house? Not old, like, on the Historic Register, well-preserved, rich bastard “old house”. Just a house that has been around a while. A place that has seen a lot of living.
You’ll find light switches that don’t connect to anything; artwork hiding holes in the walls; sometimes walls have been added or removed and the floors no longer match.
Any construction that gets used must change as needs change. Be it a house or a city or a program, these evolutions of need inevitably introduce complexity and flaws that are large enough to annoy, but small enough to ignore. Over time those issues accumulate until they reach a crisis point. Houses get remodeled or torn down, cities build or remove highways, and programs get refactored or replaced.
You can and should design for change, within reason, because all successful programs will need to change in ways you cannot predict. But the fact that a system eventually becomes complex and flawed is not due to engineering failures - it is inherent in the nature of changing systems.
… And can you fix it?
Yup. And that’s a great example of not relying on Deus Ex Machina - we watch Ender go through all his brutal training, learning to be the best and becoming a truly terrifying weapon of war. By the time Ender is, well, ending things, we’ve seen his growth and understand why he can do the things he does.
In the early days of Superman comics, dude couldn’t, e.g. fly. He could just jump really high. He didn’t have laser vision. Over time, the writers kept adding new powers until the only story they could tell was about Supes vs his own conscience. Nothing else (okay, besides Mr Mxyzptlk) can actually stand in his way.
All things Deus Ex Machina. I get it, endings are hard. Climaxes are hard to write. But the payoff feels cheap as hell when your protagonist just “digs a little deeper” and suddenly finds just enough power to save the day. When it comes out of nowhere, it feels unearned by the hero and is not only unsatisfying, it’s also a good way to give your hero power creep until there’s nothing on earth that can believably challenge them. See: Superman.
While I agree with you on the whole, there are some real world places with names that go hard.
Like Dead Man’s Pass, Oregon. Or Devil’s Gate, Utah.
…Maybe it’s just a Western US thing.
a quick web search uses much less power/resources compared to AI inference
Do you have a source for that? Not that I’m doubting you, just curious. I read once that the internet infrastructure required to support a cellphone uses about the same amount of electricity as an average US home.
Thinking about it, I know that LeGoog has yuge data centers to support its search engine. A simple web search is going to hit their massive distributed DB to return answers in subsecond time. Whereas running an LLM (NOT training one, which is admittedly cuckoo bananas energy intensive) would be executed on a single GPU, albeit a hefty one.
So on one hand you’ll have a query hitting multiple (comparatively) lightweight machines to look up results - and all the networking gear between. On the other, a beefy single-GPU machine.
(All of this is from the perspective of handling a single request, of course. I’m not suggesting that Wikipedia would run this service on only one machine.)
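To make the comparison concrete, here’s a back-of-envelope sketch in Python. Every number in it is an illustrative guess, not a measurement - swap in real figures if you have them.

```python
# Back-of-envelope energy comparison for a SINGLE request.
# ALL numbers below are illustrative assumptions, not measurements.

SEARCH_MACHINES = 50          # assumed machines touched by one distributed search query
SEARCH_WH_PER_MACHINE = 0.01  # assumed watt-hours of work each machine does
NETWORK_WH = 0.1              # assumed watt-hours for the networking gear in between

LLM_GPU_WATTS = 400           # assumed draw of one hefty GPU under load
LLM_SECONDS = 10              # assumed wall-clock time to answer one prompt

search_wh = SEARCH_MACHINES * SEARCH_WH_PER_MACHINE + NETWORK_WH
llm_wh = LLM_GPU_WATTS * LLM_SECONDS / 3600  # watts * seconds -> watt-hours

print(f"search: ~{search_wh:.2f} Wh, LLM inference: ~{llm_wh:.2f} Wh")
```

The point isn’t the specific ratio - it’s that with plausible inputs the two land within an order of magnitude of each other, which is why the “search is obviously cheaper” claim deserves a source.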
This looks less like the LLM making a claim and more like using an LLM to generate a search query and then reading through the results to find anything that might relate to the section being searched.
It leans into the things LLMs are pretty good at (summarizing natural language; constructing queries according to a given pattern; checking through text for content that matches semantically instead of literally) and links directly to a source instead of leaning on the thing that LLMs only pretend to be good at (synthesizing answers).
Thank you for responding! I really liked this bit
with a (decently designed) UI, you merely have to remember the path you took to get to wherever you want to go, what buttons to press, what mouse movements to execute.
I think that’s very insightful. I certainly have developed muscle-memory for many of my most-frequent commands in the CLI or editor of choice.
I agree about Visual Studio as a preference. I’ve used (or at least tried) dozens of IDE setups down the years from vi/emacs to JetBrains/VS to more esoteric things like Code Bubbles. I’ve found my personal happy place but I’d never tell someone else their way of working was wrong.
(Except for emacs devs. (Excepting again evil-mode emacs devs - who are merely confused and are approaching the light.)) ;)
I hope you take this in good humor and at least consider a TUI for your next project.
Absolutely. I see what you did there… 😉
But seriously, thank you for your response!
I think your comment about GUIs being better at displaying the current state and context was very insightful. Most CLI work I do is generally about composing a pipeline and shoving some sort of data through it. As a class of work, that’s a common task, but certainly not the only thing I do with my PC.
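That compose-a-pipeline-and-shove-data-through-it pattern translates directly into, say, Python generators - purely an illustrative analogy, not a claim about any particular tool:

```python
# The CLI habit of `grep | cut | sort | uniq -c`, restated as composed
# Python generators: each stage lazily consumes the previous one.
from collections import Counter

lines = ["ERROR disk full", "INFO ok", "ERROR disk full", "WARN slow"]

errors = (line for line in lines if line.startswith("ERROR"))   # grep ERROR
words  = (line.split()[1] for line in errors)                   # cut -d' ' -f2
counts = Counter(words)                                         # sort | uniq -c

print(counts.most_common(1))  # -> [('disk', 2)]
```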
Multistage operations - like, say, Bluetooth pairing - I definitely prefer to use the GUI for. I think it is partially because of the state tracking inherent in the process.
Thanks again!
As someone who genuinely loves the command line - I’d like to know more about your perspective. (Genuinely. I solemnly swear not to try to convince you of my perspective.)
What about GUIs appeals to you over a command line?
I like the CLI because it feels like a conversation with the computer. I explain what I want, combining commands as necessary, and the machine responds.
With GUIs I feel like I’m always relearning tools. Even something as straightforward as ‘find and replace’ has different keyboard shortcuts in most of the text-editing apps I use - and regex support is spotty.
Not to say that I think the terminal is best for all things. I do use an IDE and windowing environments. Just that - when there are CLI tools I tend to prefer them over an equivalent GUI tool.
Anyway, I’m interested to hear your perspective - what about GUIs works better for you? What about the CLI is failing you?
Thank you!
Net removal of 1500 LoC…
I’m gonna make you break this up into multiple PRs before reviewing, but honestly, if your refactoring reduced the surface area by 20% I’m a happy man.
Experience.
One of the best programmers I worked with was a hunt-and-peck typist.
His code was meticulous. I frequently learned things reading his PRs.
Pair programming with him otoh…
I used to work summers as an apprentice electrician. The amount of crazy wiring I saw in old houses was (heh) shocking. Sometimes it was just that it was old. Real old houses sometimes just had bare wire wrapped in silk. … And a few decades later that silk was frayed and crumbling in the walls and needed replacing.
My current house was wired at a time when copper was more precious, so it was wired up and down through the house, with circuits arranged by proximity, not necessarily logic. When a certain circuit in my house trips the breaker, my TV, PC, and one wall of the master bedroom all lose power. The TV and PC are not in the same room, either.