Are content credentials going mainstream? 🔗

less than 1 minute read

The Content Authenticity Initiative is a collaborative effort to bring transparency to digital media. By using cryptographic signatures and standardized metadata (via the C2PA specification), it allows images to carry verifiable information about their origin, authorship, and any edits made—making it easier to assess whether content is authentic and trustworthy.

While the most visible application is in fighting misinformation in journalism and social media, the potential for scientific research is equally compelling. Imagine a microscopy image or an electrophoresis gel being digitally signed the moment it’s captured, with every subsequent enhancement or transformation securely tracked from the lab bench to online publication. This kind of provenance could dramatically improve trust in visual data and help address concerns around image manipulation in research.
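The core idea behind this kind of provenance can be sketched as a hash-chained edit history. This is a toy illustration only, not the actual C2PA manifest format (which uses standardized assertions and X.509-based cryptographic signatures): each record commits to the image bytes and to the previous record, so tampering with any step breaks verification.

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def capture(image_bytes: bytes) -> list:
    # Initial claim: hash of the image as captured. A real C2PA
    # manifest would also carry a signature from the capture device
    # and standardized metadata about origin and authorship.
    return [{"action": "captured",
             "content_hash": sha256_hex(image_bytes),
             "prev": None}]

def record_edit(chain: list, new_bytes: bytes, action: str) -> list:
    # Each edit record points at the hash of the previous record,
    # so rewriting history invalidates every later link.
    prev_hash = sha256_hex(json.dumps(chain[-1], sort_keys=True).encode())
    return chain + [{"action": action,
                     "content_hash": sha256_hex(new_bytes),
                     "prev": prev_hash}]

def verify(chain: list) -> bool:
    # Recompute each back-pointer and compare with the stored value.
    for i in range(1, len(chain)):
        expected = sha256_hex(json.dumps(chain[i - 1], sort_keys=True).encode())
        if chain[i]["prev"] != expected:
            return False
    return True
```

A verifier that walks this chain can confirm that every enhancement was recorded in order, which is the property that matters for a gel image travelling from lab bench to publication.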

Widespread adoption will take time, but it’s encouraging to see growing support from major players in the imaging industry. As someone interested in research integrity – and as an amateur photographer – I see this as a meaningful step toward restoring confidence in the images we rely on, whether for science or for society at large.

China Releases “AI Plus” Policy: A Brief Analysis 🔗

less than 1 minute read

China released its new “AI Plus” strategy document last week, while I was in Beijing. Here is some context and a translation of the policy document (via Benedict Evans).

Science and technology research feature prominently:

Accelerate the pace of scientific discovery

Expedite the exploration of AI-driven paradigms for scientific research, shortening the journey from “0 to 1” breakthroughs. Advance the development and application of large-scale scientific models, upgrade fundamental research platforms and major scientific facilities with AI capabilities, build open, high-quality scientific datasets, and enhance the handling of complex cross-modal scientific data. Strengthen AI’s role as a cross-disciplinary catalyst to foster convergent development across multiple fields.

Transform R&D models and boost efficiency

Foster an integrated, AI-driven process that spans research, engineering and product roll-out, accelerating the “1 to N” deployment and iterative refinement of technologies and enabling rapid translation of innovations. Promote the adoption of intelligent R&D tools and platforms, intensify AI-enabled co-innovation with bio-manufacturing, quantum technologies, 6G and other frontier domains, ground new scientific achievements in real-world applications, and let emerging application needs steer further breakthroughs.

M365 Copilot + GPT-5 = big improvement

less than 1 minute read

Have you tried M365 Copilot lately? It has gotten seriously good.

Click on the “Try GPT-5” button on the top right, and you’ll get what seems to be the same features and models as in ChatGPT Teams, with autorouting to fast or reasoning models.

Click on the “Researcher” agent on the left sidebar, and you’ll get what feels like Deep Research mode on ChatGPT or Claude – it will create a research plan, then go away and search the web for 10 minutes and synthesize the results.

Notebooks lets you create a mini-RAG like Google NotebookLM – drop in files from OneDrive or your desktop and it will use those specifically to answer questions.
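The retrieval step in a mini-RAG setup like this can be sketched in a few lines. This is a deliberately simplified stand-in (bag-of-words cosine similarity rather than the learned dense embeddings that NotebookLM or Copilot actually use): rank the dropped-in documents against the question, then hand the top matches to the model as grounding context.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector. Production
    # systems use learned dense vector embeddings instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list, k: int = 2) -> list:
    # Return the k documents most similar to the question; these
    # chunks are what the model uses "specifically" to answer.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]
```

The key design point is that the model only sees the retrieved chunks, which is why answers stay grounded in the files you dropped in rather than in the model’s general training data.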

In the old version of Copilot, the Work tab in particular seemed underpowered, apparently running an older model like GPT-3.5. The Work tab now uses GPT-5 as well, so you can apply the most current model to work documents, emails, chats, transcripts, notebooks, …

I dismissed Copilot as “ChatGPT Lite” in the past, but since the update I’ve switched to it as my daily driver – it’s that useful. Give it a try if you haven’t used it in a while.

Piloting Claude for Chrome 🔗

less than 1 minute read

I’m not sure if we’re ready for agentic browser control. Yes, you can click each time to accept the risk, but how many of us read the T&Cs before we click accept?

Anthropic’s 123 adversarial prompt-injection test cases saw a 23.6% attack success rate when operating in “autonomous mode”. They added mitigations:

When we added safety mitigations to autonomous mode, we reduced the attack success rate of 23.6% to 11.2%

I would argue that 11.2% is still a catastrophic failure rate. In the absence of 100% reliable protection I have trouble imagining a world in which it’s a good idea to unleash this pattern.
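A quick back-of-the-envelope calculation shows why an 11.2% per-encounter rate compounds so badly (assuming, for illustration, that injection attempts are independent):

```python
def p_at_least_one_success(per_attempt: float, attempts: int) -> float:
    # Probability that at least one of `attempts` independent
    # injection attempts succeeds, given a per-attempt success rate.
    # Independence is an illustrative simplification.
    return 1 - (1 - per_attempt) ** attempts

# At the post-mitigation 11.2% rate, encountering just 20 pages
# that carry injection attempts pushes cumulative risk to roughly 90%.
risk = p_at_least_one_success(0.112, 20)
```

Under these assumptions, a browser agent doesn’t need to be unlucky often: routine browsing exposure is enough to make eventual compromise the expected outcome.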