While many educators in the West see AI as a threat they have to manage, more Chinese classrooms are treating it as a skill to be mastered. In fact, as the Chinese-developed model DeepSeek gains in popularity globally, people increasingly see it as a source of national pride. The conversation in Chinese universities has gradually shifted from worrying about the implications for academic integrity to encouraging literacy, productivity, and staying ahead.
FDA’s artificial intelligence is supposed to revolutionize drug approvals. It’s making up studies 🔗
The FDA’s head of AI, Jeremy Walsh, admitted that Elsa can hallucinate nonexistent studies.
“Elsa is no different from lots of [large language models] and generative AI,” he told CNN. “They could potentially hallucinate.”
Sounds like a need for some kind of tool to verify that the studies cited in the answers government scientists use to make critical decisions are at least real, even if not accurately represented.
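A first pass at that isn’t hard to imagine: Crossref’s public works API can tell you whether a cited title resolves to a real record. A minimal sketch, assuming the `requests` library; the matching heuristic and cutoff are mine, and a real tool would also check authors, year, and DOI:

```python
import requests

def study_exists(title: str) -> bool:
    """Rough check: does a bibliographic search on Crossref return
    a record whose title closely matches the cited one?"""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        candidate = " ".join(item.get("title", [])).lower()
        if not candidate:
            continue
        # Crude containment test; a real tool would use fuzzy matching.
        if title.lower() in candidate or candidate in title.lower():
            return True
    return False

if __name__ == "__main__":
    print(study_exists("Attention Is All You Need"))
```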
Markitdown 🔗
This looks like a handy package for converting documents (PDF, .docx, .pptx, and more) to .md. There’s also an MCP server so you can use it with your LLM.
To install as a uv tool:
uv tool install markitdown --with 'markitdown[all]'
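There’s also a Python API if you’d rather script conversions than use the CLI; a minimal sketch based on the `MarkItDown` class and `convert` method shown in the project’s README (the file path is illustrative):

```python
from markitdown import MarkItDown

# MarkItDown picks a converter based on the file type
# (.pdf, .docx, .pptx, .xlsx, and so on).
md = MarkItDown()
result = md.convert("report.docx")  # illustrative path

# The converted Markdown text lives on the result object.
print(result.text_content)
```

The CLI equivalent should just be `markitdown report.docx > report.md`.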
Reflections on OpenAI 🔗
A fascinating look into OpenAI the company:
[…] you probably shouldn’t view OpenAI as a single monolith. I think of OpenAI as an organization that started like Los Alamos. It was a group of scientists and tinkerers investigating the cutting edge of science. That group happened to accidentally spawn the most viral consumer app in history.
Empirical evidence of Large Language Model’s influence on human spoken communication 🔗
We detect a measurable and abrupt increase in the use of words preferentially generated by ChatGPT, such as delve, comprehend, boast, swift, and meticulous, after its release.
Just for the record, I’ve been using em dashes — correctly, I hope — for decades.
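If you want to check your own writing (or transcripts) for the paper’s marker words, a toy counter gives a rough rate; this is just an illustration, not the authors’ methodology:

```python
import re
from collections import Counter

# Words the paper flags as preferentially generated by ChatGPT.
MARKERS = {"delve", "comprehend", "boast", "swift", "meticulous"}

def marker_rate(text: str) -> float:
    """Occurrences of marker words per 1,000 words of text."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    hits = sum(counts[w] for w in MARKERS)
    return 1000 * hits / len(words)

if __name__ == "__main__":
    sample = "Let us delve into the data with a meticulous, swift analysis."
    print(f"{marker_rate(sample):.1f} marker words per 1,000 words")
```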