AI killed open source; it’s back to closed-source, NDAs, and vetted collaborators. Anything put online will automatically get slurped, re-packaged and re-distributed.
This was great info, and as someone who worries about passing good info about my own products on to others, I'm suddenly less concerned about my posts reading like LLM-speak. This post did very well.
Word and Excel seem to change behavior on every release. I don't think that's something Gimp should try to chase.
Gimp has Save ("Save native gimp file") and Export ("Export an image file"). It's unclear to me why people find this confusing.
It's not "getting to know you" in that sense, it's getting to know the public face you present, whether I can trust you, and how I can interact with you most smoothly. If you're my coworker and you don't ever want to talk about your family or friends or personal interests or problems or anything, that's fine.
Exactly! This is what LLMs do: they bullshit you by coming across as extremely knowledgeable, but as soon as you understand 5% of the topic you realise you've been blatantly lied to.
The NYT almost went out of business before they were rescued by billionaire Carlos Slim. The NYT is probably doing okay because the local papers everywhere else are a pale shadow of what they once were, so people sign up for NYT looking for what they used to have.
I do wonder how people can be satisfied with automatic music playlists. I was entertained by this for maybe a few hours when Pandora was new, but they all seem to devolve into either playing weird shit, playing the same 50 songs over and over, or playing whatever shilled new-release crap the record companies are paying to promote. Yet it seems like everybody else these days is a Spotify addict. I guess most people are fine with it.
You've never seen project managers basically propose the equivalent of getting a baby delivered in 1 month instead of 9 months by adding more people to the project?
But yeah, if the recruiters start asking for "10 years experience with Claude Code", then I guess a tongue-in-cheek answer would be "sure, I did 10 projects in parallel in one year".
There is hardware you can simply plug into your PC that can read and write arbitrary kernel memory. I have a feeling that kernel-level anticheat isn't stopping someone who really wants to cheat.
These days it seems best not to be in the US or any vassal country, to avoid the ridiculous overreach of "we are the center of the world" US lawmakers.
When you send me a message, I’m reading for intent. Humans are good at pattern recognition. We pick up on whether someone is actually trying to communicate with us or just going through the motions. How much I care about the polish varies by context, but if I can tell what you’re trying to say, that usually matters more than how you said it.
LLMs are genuinely useful for what I’d call conversational rehearsal. You can hash out the tone of something difficult before you send it, think through how the other person might receive it, find words that match what you actually mean. How sincere the result feels is directly tied to how much you were thinking about the other person while revising. The LLM gave you the words, but you’re still the one choosing to say them. It’s less like copying someone’s homework and more like working through a draft with an editor who never gets tired.
The caveat is obvious. Garbage in, garbage out. If your prompt was “make this sound professional and corporate,” I’ll read the result the same way I read LinkedIn slop before any of this existed. I’ll note the information, set the tone aside, and focus on the actual problem. The tool doesn’t change the dynamic. Thoughtful input produces something worth reading. Lazy input produces filler, same as it always has.
The noise observation is correct, but it's worth understanding the mechanism, because that changes what you can actually do about it.
When someone asks ChatGPT, Gemini, or Perplexity to recommend a tool, those systems aren't ranking pages — they're generating answers from a training corpus and retrieval layer that rewards different signals than Google does. The key ones:
- Source authority: Which publications and communities have cited you in a way LLMs absorbed?
- Entity consistency: Does your brand appear with consistent descriptions across independent sources? Inconsistency reads as low confidence.
- Citation pathways: Reddit threads, Hacker News posts, niche forum discussions, and technical write-ups carry disproportionate weight in retrieval because they're semantically dense and come from high-trust domains.
The AI noise problem you're describing is real, but the countermeasure isn't 'make more content' — it's getting cited on surfaces LLMs were trained to trust. A single well-upvoted HN post or a thorough Reddit thread where your product genuinely solved someone's problem can influence LLM recommendations far more than 50 product directories.
Practically: if your product comes up in authentic discussions on high-trust surfaces, AI systems learn to associate it with specific use cases and user contexts. That's what makes you show up in generated recommendations — not keyword density.
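To make that arithmetic concrete, here's a toy sketch in Python. Everything in it is hypothetical: the domain names, the trust weights, and the linear scoring are made up for illustration, and real retrieval layers are far more complicated. It just shows why one mention on a high-trust surface can outweigh dozens of low-trust directory listings under this kind of weighting.

```python
# Toy model of trust-weighted citation scoring. All domains and
# weights below are invented for illustration, not real values.
DOMAIN_TRUST = {
    "news.ycombinator.com": 0.90,
    "reddit.com": 0.80,
    "random-product-directory.example": 0.01,
}

def citation_score(mentions):
    """Sum trust-weighted mentions; `mentions` is a list of (domain, count)."""
    return sum(DOMAIN_TRUST.get(domain, 0.01) * count for domain, count in mentions)

one_hn_thread = citation_score([("news.ycombinator.com", 1)])
fifty_directories = citation_score([("random-product-directory.example", 50)])

print(f"one HN thread:  {one_hn_thread:.2f}")      # 0.90
print(f"50 directories: {fifty_directories:.2f}")  # 0.50
```

The exact numbers don't matter; the point is that volume on low-trust surfaces scales linearly at a tiny weight, while a single high-trust citation starts near the ceiling.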
In my country, state schools strictly forbid students from bringing devices to school. This rule was actually introduced because of the haves/have-nots issue here: many kids are too poor to afford devices. The schools themselves don't provide devices because it would be prohibitively expensive given the large student population. Most private schools don't allow devices either.
I briefly tried it when they first launched it, but in less than an hour decided I hated it.
Which I really should have anticipated, since I generally dislike music radio "DJs" too, and Spotify's AI DJ is trying to be like one.
In particular, it would do things like start playing tracks with no bearing on anything I'd ever listened to, like local South African music, which is very far from universally preferred here. I also got the feeling it was pushing "promoted" tracks with little regard for what I would likely like, just like real-life radio stations.
I also don't care to have some voice interrupting the music all the time.
I was hoping it would be kind of like their other "radios", but more exploratory, finding more "similar" tracks to what I'd listened to without getting stuck in a repeating playlist.
I suppose it's a cool gimmick for people who prefer the broadcast radio experience.
Their last peak of subscriptions was when Trump got elected in 2016. It was a good time for newspapers and TV. The NYT is having an even higher peak now[1]. Surely the Sulzbergers aren't the only family competent enough to run a newspaper. Lots of papers aren't owned by billionaires and manage to do ok. Wapo didn't have games in 2016 and people still subscribed. It just lost its value to readers, so people unsubscribed. I certainly did.
Bezos is literally just showing his incompetence at running a paper at this point, and the NYT is probably loving it. Sure, billionaires can buy social networks and papers, but people can also subscribe to and use things not owned by billionaires.