Beyond vibe coding: AI-assisted development 🔗

Via Simon Willison:

Addy Osmani has a new book coming out called Beyond Vibe Coding. He writes:

Vibe coding was never meant to describe all AI-assisted coding. It’s a specific approach where you don’t read the AI’s code before running it. There’s much more to consider beyond the prototype for production systems.

What is AI-Assisted Engineering?

This goes beyond vibe coding. AI-assisted engineering is a more structured approach that combines the creativity of vibe coding with the rigor of traditional engineering practices. It involves specs and rigor, and it emphasizes collaboration between human developers and AI tools, ensuring that the final product is not only functional but also maintainable and secure.

Osmani describes practical strategies for collaborating well with AI—moving from quick prototypes to production-ready systems. If you’ve dismissed vibe coding as just a meme, this might change your perspective. It’s about blending speed with structure, creativity with discipline.

Take a look and see what resonates.

Are content credentials going mainstream? 🔗

The Content Authenticity Initiative is a collaborative effort to bring transparency to digital media. By using cryptographic signatures and standardized metadata (via the C2PA specification), it allows images to carry verifiable information about their origin, authorship, and any edits made—making it easier to assess whether content is authentic and trustworthy.
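
To make that concrete, here is a rough sketch of the underlying idea in Python: hash the image, sign the hash, and keep a signed entry for every edit. To be clear, this is not the actual C2PA manifest format (the spec defines its own claim and assertion structures and embeds them in the file); the helper names below are invented purely for illustration.

```python
# Conceptual sketch only: NOT the C2PA format, just the idea of
# signed provenance entries bound to an image's bytes.
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_claim(image_bytes: bytes, action: str, author: str) -> dict:
    """Describe one step in the image's history, tied to its current pixels."""
    return {
        "action": action,  # e.g. "captured", "cropped", "brightness-adjusted"
        "author": author,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }

def sign_claim(claim: dict, key: Ed25519PrivateKey) -> dict:
    """Sign the claim so later edits can't silently rewrite the history."""
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}

def verify_claim(entry: dict, public_key) -> bool:
    """Check one provenance entry against the signer's public key."""
    payload = json.dumps(entry["claim"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(entry["signature"]), payload)
        return True
    except InvalidSignature:
        return False

# A camera (or microscope) would sign at capture time; each editing tool
# would then append and sign a new entry describing its change.
key = Ed25519PrivateKey.generate()
original = b"...raw image bytes..."
provenance = [sign_claim(make_claim(original, "captured", "camera-01"), key)]

edited = original + b" (brightness adjusted)"
provenance.append(sign_claim(make_claim(edited, "brightness-adjusted", "editor-xyz"), key))

print(all(verify_claim(e, key.public_key()) for e in provenance))  # True
```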

While the most visible application is in fighting misinformation in journalism and social media, the potential for scientific research is equally compelling. Imagine a microscopy image or an electrophoresis gel being digitally signed the moment it’s captured, with every subsequent enhancement or transformation securely tracked from the lab bench to online publication. This kind of provenance could dramatically improve trust in visual data and help address concerns around image manipulation in research.

Widespread adoption will take time, but it’s encouraging to see growing support from major players in the imaging industries. As someone interested in research integrity – and as an amateur photographer – I see this as a meaningful step toward restoring confidence in the images we rely on, whether for science or for society at large.

China Releases “AI Plus” Policy: A Brief Analysis 🔗

China released its new “AI Plus” strategy document last week while I was in Beijing. Here is some context and a translation of the policy document (via Benedict Evans).

Science and technology research feature prominently:

Accelerate the pace of scientific discovery

Expedite the exploration of AI-driven paradigms for scientific research, shortening the journey from “0 to 1” breakthroughs. Advance the development and application of large-scale scientific models, upgrade fundamental research platforms and major scientific facilities with AI capabilities, build open, high-quality scientific datasets, and enhance the handling of complex cross-modal scientific data. Strengthen AI’s role as a cross-disciplinary catalyst to foster convergent development across multiple fields.

Transform R&D models and boost efficiency

Foster an integrated, AI-driven process that spans research, engineering and product roll-out, accelerating the “1 to N” deployment and iterative refinement of technologies and enabling rapid translation of innovations. Promote the adoption of intelligent R&D tools and platforms, intensify AI-enabled co-innovation with bio-manufacturing, quantum technologies, 6G and other frontier domains, ground new scientific achievements in real-world applications, and let emerging application needs steer further breakthroughs.

M365 Copilot + GPT-5 = big improvement

Have you tried M365 Copilot lately? It has gotten seriously good.

Click on the “Try GPT-5” button on the top right, and you’ll get what seems to be the same features and models as in ChatGPT Teams, with autorouting to fast or reasoning models.

Click on the “Researcher” agent on the left sidebar, and you’ll get what feels like Deep Research mode on ChatGPT or Claude – it will create a research plan, then go away and search the web for 10 minutes and synthesize the results.

Notebooks lets you create a mini-RAG like Google NotebookLM – drop in files from OneDrive or your desktop and it will use those specifically to answer questions.
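
If “mini-RAG” is new to you, the idea is simply retrieve-then-generate: pick the chunks of your files most relevant to the question and hand them to the model as context. Here is a toy sketch of the retrieval step, with bag-of-words similarity standing in for real embeddings – it says nothing about how Copilot or NotebookLM are built internally.

```python
# Toy retrieve-then-generate loop: rank the user's documents against the
# question, keep the top matches, and build a grounded prompt from them.
import math
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the question."""
    q = tokenize(question)
    return sorted(documents, key=lambda d: cosine(q, tokenize(d)), reverse=True)[:k]

documents = [
    "Q3 budget review: travel spend was 12% over plan.",
    "Onboarding checklist for new lab members.",
    "Meeting notes: the microscope firmware upgrade is scheduled for May.",
]

question = "When is the microscope firmware upgrade?"
context = retrieve(question, documents)

# In a real notebook-style assistant, this prompt would go to the model so the
# answer comes from those specific files rather than the model's memory.
prompt = "Answer using only this context:\n" + "\n".join(context) + f"\n\nQuestion: {question}"
print(prompt)
```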

In the old version of Copilot, the Work tab in particular seemed underpowered, apparently running an older model like GPT-3.5. The Work tab now uses GPT-5 as well, so you can use the most current model with work documents, emails, chats, transcripts, notebooks, …

I dismissed Copilot as “ChatGPT Lite” in the past, but since the update I’ve switched to it as my daily driver – it’s that useful. Give it a try if you haven’t used it in a while.