Aurora: A Foundation Model for the Earth System
"Trained on vast amounts of data, it’s tuned to model the Earth’s systems. Aurora has already shown promise across multiple scenarios, including predicting the weather, tracking hurricanes and air quality, and modeling ocean waves and energy flows."
I first saw this in a Microsoft newsletter, and traced it through a Nature paper and a GitHub repo. It might be fun to play with!
GitHub - microsoft/aurora: Implementation of the Aurora model for Earth system forecasting
I suspect that LLMs reach their peak usefulness with the models on Hugging Face, and everything beyond those is just incremental improvement.
LLMs do a lousy job of analyzing a chess match. Of course they do; token generation doesn't account for the precision required. But I wonder: if you had a good chess-match-analyzing API, could you use an LLM to turn the analysis into a well-written summary?
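Here's a rough sketch of what I mean, assuming Stockfish is installed locally and using the python-chess library for the analysis half; the summarize() step is just a placeholder for whatever LLM API you'd bolt on.

```python
# Sketch: let a real chess engine do the precise work, then hand its notes
# to an LLM purely for prose. Assumes Stockfish is installed and on PATH;
# summarize() is a stand-in for whatever LLM call you prefer.
import chess
import chess.engine
import chess.pgn

def analyze_game(pgn_path: str, depth: int = 18) -> list[str]:
    """Return a plain-text engine evaluation after each move of the game."""
    with open(pgn_path) as f:
        game = chess.pgn.read_game(f)

    engine = chess.engine.SimpleEngine.popen_uci("stockfish")
    notes = []
    try:
        board = game.board()
        for move in game.mainline_moves():
            board.push(move)
            info = engine.analyse(board, chess.engine.Limit(depth=depth))
            notes.append(f"Move {board.fullmove_number} ({move}): eval {info['score'].white()}")
    finally:
        engine.quit()
    return notes

def summarize(notes: list[str]) -> str:
    # Placeholder: feed the engine's notes to an LLM with a prompt like
    # "Turn these evaluations into a readable summary of the game."
    raise NotImplementedError

if __name__ == "__main__":
    print("\n".join(analyze_game("game.pgn")[:10]))
```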
An AI read that people with Primary Headache Disorder were more likely to get Alzheimer's, and reported it as "People who have been awarded Ph.D.'s are more likely to get Alzheimer's" (presumably tripping over the shared abbreviation 'PHD'). Beginning to think I just need to find my old Encyclopedia Britannicas and keep them handy.
I feel less guilty suddenly.
"A key part of the definition of vibe coding is that the user accepts code without full understanding. Programmer Simon Willison said: "If an LLM wrote every line of your code, but you've reviewed, tested, and understood it all, that's not vibe coding in my book—that's using an LLM as a typing assistant."
A writing teacher on ChatGPT
"My conversations with A.I. showcased its seductive cocktail of affirmation, perceptiveness, solicitousness and duplicity — and brought home how complicated this new era will be...With ChatGPT, I felt like I had an intern with the cheerful affect of a golden retriever and the speed of the Flash."
"Eight best AI learning assistants in 2025".
What possible distinction could they have from one another beyond a few words of prompt? No one is training a model specifically to be a learning assistant.
Looks like some people are using AI summarizing tools.
Generative AI has its uses. But has anyone ever really used it to summarize an article, or an email, or a bunch of stuff? I think I would spend longer verifying its conclusions than I would just reading the documents.
The Labor Demand Shocks of Artificial Intelligence
"The easy answer is that it will increase the demand for labor, in much the same way as the wheel, the steam loom, the automobile and the computer. That is, in a very nuanced way.
Technology doesn’t replace jobs; it replaces tasks. Almost always, the tasks replaced are the most mundane, routine and trainable ones. In so doing, the technology makes the uniquely human part of the job more valuable."
Accessing LM Studio Server from WSL Linux
(Not complicated, just tricky to find the settings)
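A sketch of the kind of thing involved, with some assumptions on my part: LM Studio's server defaults to an OpenAI-compatible API on port 1234, the server has to be allowed to listen beyond localhost (the hard-to-find setting), and from WSL2 the nameserver entry in /etc/resolv.conf usually points at the Windows host.

```python
# Sketch of reaching LM Studio's local server (OpenAI-compatible, default
# port 1234) from inside WSL2. Assumes the server is running on Windows and
# allowed to listen beyond localhost; the nameserver in /etc/resolv.conf is
# the usual way to find the Windows host's IP from WSL2.
import re
import requests

def windows_host_ip() -> str:
    """Pull the Windows host IP from WSL2's /etc/resolv.conf."""
    with open("/etc/resolv.conf") as f:
        match = re.search(r"nameserver\s+(\S+)", f.read())
    return match.group(1)

base_url = f"http://{windows_host_ip()}:1234/v1"

resp = requests.post(
    f"{base_url}/chat/completions",
    json={
        "model": "local-model",  # LM Studio serves whatever model is loaded
        "messages": [{"role": "user", "content": "Say hello from WSL."}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```

(If your WSL is configured for mirrored networking, plain localhost may work and the IP lookup isn't needed.)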
Went to a talk today about, "Planning for AI in your organization." The guy spoke approvingly of 20th century robber barons like Rockefeller and Carnegie, and then compared them to Thiel and Musk. I think the whole talk was a propaganda effort for turning everything over to oligarchs and letting them run things. I wish I'd managed to articulate that before the Q&A period was over.
You hear a lot about book reports and things that start with, "Sure! Here's 300 words about The Scarlet Letter." ChatGPT generally doesn't say anything like that to me though. I wonder if they've reworked it to stop doing that.
How not to build an AI Institute
"The median founder, investor, or civil servant will cheerfully roll their eyes at the mention of the Alan Turing Institute, what’s less understood is why the UK’s national AI institute has proven such a flop, despite spending over quarter of a billion pounds since its inception. To piece this together, I’ve spoken to current and former Turing insiders, figures from the world of research funding, academics, and civil servants."
https://www.chalmermagne.com/p/how-not-to-build-an-ai-institute
Portlander creates AI-powered device to monitor street health
"Zajack’s Traffic Monitor is an, “open source roadway object detection and radar speed monitoring,” device made from simple and widely available components. It uses a wide-angle camera, a very small (Raspberry Pi) computer, and machine learning and artificial intelligence (AI) code to watch for objects in the street."
Sounds fun! I doubt it would make much sense in my little 'hood though.
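Still, if I ever did try it, I imagine the core loop would look something like this. To be clear, this is not Zajack's actual code; the ultralytics YOLO detector and the class list are my own guesses at how one might do it on a Raspberry Pi.

```python
# Generic sketch of the detection half of such a device: read frames from a
# camera and run an off-the-shelf object detector. The model choice and the
# classes to watch for are assumptions, not the Traffic Monitor's real code.
import cv2
from ultralytics import YOLO

WATCH_FOR = {"car", "truck", "bus", "bicycle", "person"}

model = YOLO("yolov8n.pt")  # smallest pretrained model, friendlier to a Pi
cap = cv2.VideoCapture(0)   # wide-angle USB camera

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)[0]
    for box in results.boxes:
        label = model.names[int(box.cls)]
        if label in WATCH_FOR:
            print(f"Saw a {label} ({float(box.conf):.0%} confidence)")

cap.release()
```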
I find the argument, "I asked an AI agent something and it gave me a wrong answer, therefore we should never use AI" to be a peculiar one.
I don't know why there's such a kerfuffle about AIs reading and writing emails. If I have anything important to say, I send an instant message.
People who can think through and understand complicated problems will never be replaced.
The days of memorizing arcane syntax to get a computer to do what you want are quickly coming to an end.
Know what would fix stop-and-go traffic even better? Not driving.
"Boston Is Using AI for Good to Fix Stop-and-Go Traffic...with the help of Google Research's Green Light project, AI-optimized intersections have already seen traffic reduced by 50%."
#Boston #CarsRuinEverything #AI
Using AI in pursuit of better bike paths
"The AI Bike Mapping and Wayfinding Project aims to revolutionize bicycle infrastructure mapping and improve cyclists’ safety in Santa Barbara County. The project’s key objectives include AI training: Data from Google Street View, OpenStreetMap and an advisory committee will inform, train and develop a tested AI model."
Once again, I think there are easier ways. @markstos have an opinion?
Re: coding for kids: I suspect that, due to AI, we are somewhere near a sea change in coding practices, on the level of interpreters or object-oriented programming. I wonder how one would implement "prompt engineering for kids".
Just tried doing a search for "automatically tag photos" and discovered that there's nothing left on the web but sites generated by #ChatGPT. Search is dead. Anybody know a good tool for automatically tagging photos?
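In the meantime I may just roll my own. Something like this sketch would probably do: zero-shot tagging with CLIP through the Hugging Face transformers pipeline, with a made-up candidate tag list you'd replace with your own vocabulary.

```python
# A do-it-yourself fallback: zero-shot photo tagging with CLIP via the
# Hugging Face transformers pipeline. The candidate tags and threshold are
# my own assumptions; swap in whatever vocabulary fits your photo library.
from transformers import pipeline

CANDIDATE_TAGS = [
    "people", "dog", "cat", "bicycle", "food",
    "beach", "mountains", "city street", "sunset", "indoors",
]

tagger = pipeline(
    "zero-shot-image-classification",
    model="openai/clip-vit-base-patch32",
)

def tag_photo(path: str, threshold: float = 0.15) -> list[str]:
    """Return the candidate tags CLIP scores above the threshold."""
    scores = tagger(path, candidate_labels=CANDIDATE_TAGS)
    return [s["label"] for s in scores if s["score"] >= threshold]

print(tag_photo("vacation/IMG_1234.jpg"))  # hypothetical example path
```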