March 25th, 2026
posted by [syndicated profile] justin_mason_feed at 10:01am on 25/03/2026

Posted by Links

  • Debunking zswap and zram myths

    This is pretty compelling. I like this example:

    We have some concrete numbers to show this in practice. On Instagram, which runs on Django and is largely memory bound, we ran a test where we moved from their existing setup (with swap entirely disabled) to a setup with disk swap and zswap tiering. Django workers accumulate significant cold heap state over their lifetime, like forked processes with duplicated memory, growing request caches, Python object overhead, you get the idea. The results were twofold:

    • We achieved roughly 5:1 compression. That's a huge benefit for such a memory bound workload, and also enables us to consider further stacking workloads.
    • Enabling zswap reduced disk writes by up to 25% compared to having no swap at all(!).

    As you can imagine, as a result of this test, Instagram has been using zswap for many years now.
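For anyone wanting to experiment, zswap can be inspected and toggled at runtime through sysfs. A minimal sketch, run as root (the module-parameter paths are the standard ones on recent kernels, but check your distribution's defaults before relying on them):

```shell
# Show the current zswap settings (enabled, compressor, max_pool_percent, etc.)
grep -H . /sys/module/zswap/parameters/* 2>/dev/null

# Turn zswap on and pick a compressor
echo Y    > /sys/module/zswap/parameters/enabled
echo zstd > /sys/module/zswap/parameters/compressor

# With debugfs mounted, watch live pool usage
cat /sys/kernel/debug/zswap/pool_total_size
```

Note that zswap still needs a backing swap device configured; it sits in front of swap as a compressed cache rather than replacing it.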

    Tags: kernel compression memory linux ops performance swap zswap zram

andrewducker: (dating curve)

I wonder at what birth year over half of people have never seen a western.

Obviously very young people won't - but if we look at people aged 25-40, who have had a chance to watch a bunch of movies, I wonder whether, outside of classic-movie aficionados, many of them will have seen any. The last minor resurgence would have been Tarantino's The Hateful Eight and Django Unchained, and I don't think either of those was that massive. Before that you're probably back to Dances with Wolves and Unforgiven, which is now around 35 years ago.

Which would mean that the main cultural touchstone for young people would be Red Dead Redemption 2, released in 2018 and the 4th best-selling game of all time.

(Curiosity triggered because in the most recent University Challenge nobody recognised John Wayne.)

Posted by Laerke Christensen

The Treasury announcement about lifting sanctions on purchases of Iranian oil came as the war continued to disrupt shipments from the Middle East.
galadhir: Against a backdrop of green leaves and gold sparkles the text "Tell us now the full tale" is written. Both a Celeborn quote and a request to know more (Tell us now the full tale)
posted by [personal profile] galadhir at 09:41am on 25/03/2026 under

90 discussion questions

What is a topic you could stand up and talk passionately about for five minutes?

Any number of fandom opinions:

  • Why we shouldn't overlook Celeborn of Lorien when we talk about Galadriel, and why everybody still does.
  • Why Colonel Young from Stargate SGU is actually the right man for the job even though he himself doesn't think so.
  • Why some people love the villains in media and why it doesn't make them villains themselves.

A few non-fannish subjects:

  • The history and typology of morris dancing - I regularly do give a talk about this.
  • The history of Roses and Castles painting and how to do it yourself - I have taught a class on this too.
  • Anglo-Saxon clothing and how to construct it - I might be a bit rusty on this one if I was asked to go into depth but I can easily fill 5 minutes.
oursin: Brush the Wandering Hedgehog by the fire (Default)
posted by [personal profile] oursin at 09:48am on 25/03/2026
Happy birthday, [personal profile] staranise!
nanila: me (Default)
posted by [personal profile] nanila at 08:34am on 25/03/2026 under
It's challenge time!

Comment with Just One Thing you've accomplished in the last 24 hours or so. It doesn't have to be a hard thing, or even a thing that you think is particularly awesome. Just a thing that you did.

Feel free to share more than one thing if you're feeling particularly accomplished! Extra credit: find someone in the comments and give them props for what they achieved!

Nothing is too big, too small, too strange or too cryptic. And in case you'd rather do this in private, anonymous comments are screened. I will only unscreen if you ask me to.

Go!

Posted by Farbod Shahinfar

Guest Post: eBPF has been widely leveraged to improve network function performance. Can similar benefits be achieved for web servers and microservices?

Posted by charlesarthur


In its latest briefing, NASA says it won’t build a lunar space station – it’ll build a base instead. Why? CC-licensed photo by NASA Johnson on Flickr.

You can sign up to receive each day’s Start Up post by email. You’ll need to click a confirmation link, so no spam.


A selection of 10 links for you. Lunatic. I’m @charlesarthur on Twitter. On Threads: charles_arthur. On Mastodon: https://newsie.social/@charlesarthur. On Bluesky: @charlesarthur.bsky.social. Observations and links welcome.


US bans new foreign-made consumer internet routers • BBC News

Kali Hays:

»

“Malicious actors have exploited security gaps in foreign-made routers to attack American households, disrupt networks, enable espionage, and facilitate intellectual property theft,” the FCC said. While people will still be able to use foreign-made routers they already own, the ban applies to all “new device models.”

The ban stems from growing concern over the last year that routers were a point of easy-access for malicious actors. TP-Link, a router brand made in China that is a best-seller on Amazon, became the subject of some US political anxiety last year after a spate of cyberattacks.

Any new router made outside the US will now need to be approved by the FCC before it can be imported, marketed, or sold in the country. In order to get that approval, companies manufacturing routers outside the US must apply for conditional approval in a process that will require the disclosure of the firm’s foreign investors or influence, as well as a plan to bring the manufacturing of the routers to the US.

Certain routers may be exempted from the list if they are deemed acceptable by the Department of Defense or the Department of Homeland Security, the FCC said. Neither agency has yet added any specific routers to its list of equipment exceptions.

The FCC’s move follows a decision on Friday by government agencies working on national security that internet routers made overseas “posed unacceptable risks” to the US.

The vast majority of Internet routers are assembled or manufactured outside of the US, often in Taiwan or China. The FCC ban applies even if a router is designed in the US, but built abroad.

Popular brands of router in the US include Netgear, a US company, which manufactures all of its products abroad. One exception to the general absence of US-made routers is the newer Starlink WiFi router. Starlink is part of Elon Musk’s company SpaceX. The company says the Starlink routers are made in Texas.

«

Better late than never? But if there are a gazillion routers still installed, it’s not really a big security move unless you replace all those. And that’s not going to be popular with the ISPs.
unique link to this extract


Google Search is now using AI to replace headlines • The Verge

Sean Hollister:

»

Since roughly the turn of the millennium, Google Search has been the bedrock of the web. People loved Google’s trustworthy “10 blue links” search experience and its unspoken promise: The website you click is the website you get.

Now, Google is beginning to replace news headlines in its search results with ones that are AI-generated. After doing something similar in its Google Discover news feed, it’s starting to mess with headlines in the traditional “10 blue links,” too. We’ve found multiple examples where Google replaced headlines we wrote with ones we did not, sometimes changing their meaning in the process.

For example, Google reduced our headline “I used the ‘cheat on everything’ AI tool and it didn’t help me cheat on anything” to just five words: “‘Cheat on everything’ AI tool.” It almost sounds like we’re endorsing a product we do not recommend at all.

What we are seeing is a “small” and “narrow” experiment, one that’s not yet approved for a fuller launch, Google spokespeople Jennifer Kutz, Mallory De Leon, and Ned Adriance tell The Verge. They would not say how “small” that experiment actually is. Over the past few months, multiple Verge staffers have seen examples of headlines that we never wrote appear in Google Search results — headlines that do not follow our editorial style, and without any indication that Google replaced the words we chose. And Google says it’s tweaking how other websites show up in search, too, not just news.

Like I wrote in January, when Google decided it wouldn’t stop replacing news headlines in Google Discover from The Verge and our competitors, this is like a bookstore ripping the covers off the books it puts on display and changing their titles. We spend a lot of time trying to write headlines that are true, interesting, fun, and worthy of your attention without resorting to clickbait, but Google seems to believe we don’t have an inherent right to market our own work that way.

«

Google might call it a “small” and “narrow” experiment, but that’s only in the context of what Google does; and in that, news headlines are “small” and “narrow”. There will surely be plenty of A/B testing of this, and the outcome will decide whether this is applied to all news, or just forgotten. But why would Google ever give up its delicious, tasty AI?


Senior European journalist suspended over AI-generated quotes • The Guardian

Dan Milmo:

»

The publisher of the Dutch newspaper De Telegraaf and the Irish Independent has suspended one of its senior journalists after he admitted using AI to “wrongly put words into people’s mouths”.

Peter Vandermeersch, the former head of the Irish operations at Mediahuis, said he “fell into the trap of hallucinations” – the term for AI-generated errors – when using the technology.

Vandermeersch, a fellow of “journalism and society” at the European publishing group, has been suspended from his role.

The experienced journalist said he had summarised reports using AI tools such as ChatGPT, Perplexity and Google’s NotebookLM, and not checked whether the quotes from those summaries were accurate. He subsequently published them in his Substack newsletter.

The errors were highlighted by an investigation by one of Mediahuis’s own titles, NRC, where Vandermeersch had been editor-in-chief in the 2010s. NRC alleged Vandermeersch had published “dozens” of quotes that were false and that seven quoted individuals in his posts said they had not made the statements attributed to them.

“I wrongly put words into people’s mouths, when I should have presented them as paraphrases. In some cases, it reflected my interpretation of their words. That was not just careless – it was wrong,” Vandermeersch wrote in a Substack post headlined “I am admitting my mistake”.

Vandermeersch added: “It is particularly painful that I made precisely the mistake I have repeatedly warned colleagues about: these language models are so good that they produce irresistible quotes you are tempted to use as an author. Of course, I should have verified them. The necessary ‘human oversight’, which I consistently advocate, fell short.”

Vandermeersch’s Press and Democracy blog writes regularly about “the vital connection between a free press and a healthy democracy”.

«

*head in hands emoji*


NASA kills lunar space station to focus on ambitious Moon base • Ars Technica

Eric Berger:

»

NASA Administrator Jared Isaacman on Tuesday laid out a sweeping vision for the space agency’s next decade during an event called “Ignition” in which he and other senior leaders set out their exploration plans.

Isaacman and his colleagues shared a number of major announcements, including outlining a nuclear-powered mission to Mars that will release three helicopters there and major changes to commercial space stations. However, most significantly, Isaacman outlined a detailed plan to construct a substantial Moon base over the next decade. He framed it as part of a “great power” challenge, saying that if NASA does not succeed now it will cede the Moon to China.

The base included long-range drones, multiple sources of power, sophisticated communications, permanent habitats, scientific laboratories, local manufacturing, and more. To accomplish this, NASA will work with a broad range of industry partners capable of sending medium-size and large cargos to the lunar surface. Isaacman also confirmed that NASA will no longer build a Lunar Gateway in orbit around the Moon, but would rather focus all of its energy and resources on the lunar surface.

Is this affordable? One of Isaacman’s fundamental beliefs is that NASA does not have a revenue problem. Rather, it has an expense problem.

“For too long we tried to satisfy every stakeholder, and the results of that are very well documented in Office of the Inspector General reports,” he said. “Billions of dollars wasted. Years lost. Hardware that never launched. Fewer flagship science missions. And fewer astronauts in space, which means fewer kids dressing up as astronauts for Halloween. I don’t like it. The president doesn’t like it. The American people have waited long enough.”

«

Would this be the same NASA that has managed to not put any astronauts into orbit around the Moon for more than 50 years, and has had multiple holdups in its latest attempt to reenact that achievement? And that NASA is going to start building a base on the Moon? Because otherwise China will get to sit on a trillion pounds of dust?


OpenAI just gave up on Sora and its billion-dollar Disney deal • The Verge

Richard Lawler:

»

On Tuesday afternoon, OpenAI announced “We’re saying goodbye to Sora,” the video generation tool that it launched at the end of 2024, and centered in a massive licensing deal with Disney only a few months ago. The Wall Street Journal reported the move earlier, saying that OpenAI boss Sam Altman had informed staff that both the TikTok-like Sora app and API access for developers would be discontinued, with no plans to roll the feature into ChatGPT as had previously been rumored.

According to The Hollywood Reporter, as a result, the deal Disney announced in December, saying it would invest $1bn in OpenAI, license its characters for use within Sora, and send AI-generated videos into Disney Plus, is also coming to an end.

…OpenAI hasn’t responded to a request for comment or otherwise explained the shift, but there have been signs that things are changing, following Altman’s declaration of a “code red” a few months ago over possible slippage of ChatGPT vs. Google Gemini.

«

Sora rocketed to popularity – and then vanished completely. People were OK with making AI videos of themselves for about five minutes, but then the novelty wore off, and the question became: what is the utility of this? Why do it? And people stopped using it, but the copyright headache didn’t go away.

Even so, turning away a billion dollars in income – not investment – from Disney is quite a move. Perhaps they didn’t want to have to generate the content because it would be a distraction.


Quadruple amputee, cornhole pro charged with murder • FOX 5 DC

Elissa Salamy and Isabel Soisson:

»

A professional cornhole player and quadruple amputee has been formally charged with murder and multiple related offenses in connection with a deadly shooting that occurred in Charles County on March 22, 2026.

Dayton James Webber, 27, of La Plata, Md., was arraigned in the District Court of Maryland for Charles County after being located in Charlottesville, Virginia, and arrested following the fatal shooting of 27‑year‑old Bradrick Michael Wells, according to court documents.

…According to the statement of charges filed by Det. M. Bigelow of the Charles County Sheriff’s Office, Dayton Webber picked up two witnesses from work in a vehicle, with Bradrick Wells already in the front passenger seat. The documents state that, while driving, an argument broke out between Webber and Wells.

The witnesses, identified in the charging documents as W1 and W2, told police that Webber pulled out a firearm and shot Wells twice in the head during the argument. The statement of charges says Webber then pulled the vehicle over and asked the passengers to remove Wells from the car, which they refused.

The two witnesses exited the vehicle and flagged down a police officer, the documents state, while Webber drove off with Wells still inside the car. According to the filing, around 12:41 a.m. on March 23, a resident at 10115 Newport Church Road in Charlotte Hall discovered Wells’ body on the side of the road.

The statement of charges notes that both W1 and W2 positively identified Webber as the shooter and Wells as the victim, providing the basis for the murder and assault charges currently pending in Charles County District Court.

Police say that Webber’s vehicle was later located in Charlottesville, Virginia, and Webber was found at a hospital seeking treatment. Webber is currently awaiting extradition to Charles County, Maryland, where he will face formal charges.

«

This might not yet be the weirdest story of the week, but it’s got to be up there.


VW to shift from cars to missile defence in deal with Israel’s Iron Dome maker • Financial Times

Laura Pitel, Anne-Sylvaine Chassany and Sebastien Ash:

»

Volkswagen is in talks with Israel’s Rafael Advanced Defence Systems over a deal that would switch production at one of the German group’s factories from cars to missile defence.

The two companies plan to convert the embattled Osnabrück plant to make components for the Israeli state-owned group’s Iron Dome air defence system, according to people familiar with the plan.

The tie-up would be the highest-profile example yet of the German car industry, where profits have plunged amid rising Chinese competition and a stuttering transition to electric vehicles, seeking partnerships with the booming defence sector.

The two companies hope to save all 2,300 jobs at the plant in the west German state of Lower Saxony, which has been under threat of closure, and hope to sell the systems to European governments.

“The aim is to save everybody, maybe even to grow,” said one of the people familiar with the plans. “The potential is so high. But it’s also an individual decision for the workers if they want to be part of the idea.”

«

Plowshares are outdated! Swords (and to some extent shields) are where it’s at now.


BASE experiment at CERN succeeds in transporting antimatter • CERN

»

CERN’s “antimatter factory” is the only place in the world where antiprotons can be produced, stored and studied. Two successive decelerators, the Antiproton Decelerator (AD) and the Extra Low Energy Antiproton ring (ELENA), provide several experiments with low-energy antiprotons – the lower their energy, the easier they can be stored and studied. Among these experiments, BASE holds long-standing records for containing antiprotons for more than one year, and the experiment has invented this pioneering approach in order to move on to the next stage: transporting antiprotons to an offline space for more precise experiments as well as sharing them with others. That’s why they developed the BASE-STEP trap: an apparatus designed to store and transport antiprotons.

“Our aim with BASE-STEP is to be able to trap antiprotons and deliver them to our precision laboratories at a dedicated space at CERN, HHU, Leibniz University Hannover and perhaps other laboratories that are capable of performing very-high-precision antiproton measurements, which unfortunately is not possible in the antimatter factory,” explains Christian Smorra, the Leader of BASE-STEP. “We validated the feasibility of the project with protons last year, but what we achieved today with antiprotons is a huge leap forward towards our objective.”

BASE-STEP is small enough to be loaded onto a truck and fit through ordinary laboratory doors, and it can withstand the bumps and vibrations of transport. The current apparatus – which includes a superconducting magnet, liquid helium cryogenic cooling, power reserves and a vacuum chamber that traps the antiparticles using magnetic and electric fields – weighs 1,000 kilograms [one tonne]: much more compact than BASE or any other existing system used to study antimatter.

“To reach our first destination – our dedicated precision laboratory at HHU in Germany – would take us at least eight hours,” says Christian Smorra. “This means we’d have to keep the trap’s superconducting magnet at a temperature below 8.2K for that long. So, in addition to the liquid helium, we’d need to have a generator to power a cryocooler on the truck. We are currently investigating this possibility.” Nevertheless, the greatest challenge remains on arrival at the destination: to transfer the antiprotons to the experiment without them vanishing.

«

The transfer – only across the CERN site in this first attempt – was of 92 antiprotons. With a mass of 1.6×10^-27kg each, on annihilation by touching matter (using E=mc^2) they’d produce a rather small elimination of 3×10^-8 joules. That would (per ChatGPT, showing its working) lift a speck of dust about three metres.
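The back-of-the-envelope sum is easy to check. A sketch using the same rounded figures as the post (rounded speed of light and antiproton mass, so the result is approximate):

```python
C = 3.0e8                # speed of light, m/s (rounded, as the post does)
M_ANTIPROTON = 1.6e-27   # antiproton mass, kg (rounded, as the post does)
N = 92                   # antiprotons transferred

# Each antiproton annihilates with one proton's worth of matter,
# so twice the transferred mass is converted: E = 2 N m c^2.
energy_joules = 2 * N * M_ANTIPROTON * C**2
print(energy_joules)     # about 2.6e-8 J, i.e. roughly the 3e-8 J quoted
```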


‘Zombie’ tankers take Tehran Toll Booth route as more vessels make detour • Lloyd’s List

Richard Meade, Tomer Raanan and Ece Göksedef:

»

Traffic through the Strait of Hormuz is increasingly being diverted into Iranian territorial waters in what has been dubbed the “Tehran Toll Booth”, where the Islamic Revolutionary Guard Corps is understood to be verifying vessel details and, in some cases, exacting a passage fee.

More than 20 vessels of over 10,000 dwt [dead weight tonnage] have thus far made the detour, which goes between Iran’s Qeshm and Larak Islands.

Among them were two “zombie” tankers that transited while assuming the identity of dead vessels.

At least two vessels transiting through the strait are understood to have paid in exchange for safe passage, with one fee reported to have been around $2m.

While the Strait of Hormuz remains dramatically reduced as a result of the conflict, which has seen more than 20 maritime incidents involving commercial vessels and offshore infrastructure since February 28, the pace of vessel transits across the strait picked up over the weekend.

Analysis of Lloyd’s List Intelligence data reveals that at least 16 vessels have transited the strait since Friday. Thirteen vessels headed east out of the Middle East Gulf, while three entered westbound.

Twelve were tracked via Automatic Identification System data sailing through the new route that transits Iranian territorial waters; three either did not have enough AIS data to assess their route or transit date with confidence, while a fourth, an Iran-flagged bulker, transited the strait but stopped near Larak Island.

…On Monday, two India-flagged very large gas carriers transited, signalling their Indian ownership via their AIS signal — a trend that is increasingly prevalent among Indian and some China affiliated vessels.

India’s Ministry of Shipping said the two ships, carrying over 92,600 tonnes of liquefied petroleum gas, had transited and are scheduled to reach ports in the country between March 26 and 28.

Shortages of LPG in India, where the gas is primarily used for cooking, have become a hot political issue, forcing the government to engage in talks with Tehran to secure cargoes.

«

The story has a graph of daily traffic through the Strait by sector (chemical, oil, gas, etc). It’s gone from more than 100 to single digits.


Three charged with conspiring to unlawfully divert cutting edge US artificial intelligence technology to China • US Department of Justice

»

Today [Thurs March 19], an indictment was unsealed charging Yih-Shyan “Wally” Liaw, Ruei-Tsang “Steven” Chang, and Ting-Wei “Willy” Sun, for allegedly conspiring to divert high-performance computer servers assembled in the United States and integrating sophisticated US artificial intelligence technology to China, in violation of US export controls laws.  Liaw, a US citizen, and Sun, a citizen of Taiwan, were arrested today and will be presented in the Northern District of California. Chang, a citizen of Taiwan, remains a fugitive.

“The indictment unsealed today details alleged efforts to evade US export laws through false documents, staged dummy servers to mislead inspectors, and convoluted transshipment schemes, in order to obfuscate the true destination of restricted AI technology—China,” said John A. Eisenberg, Assistant Attorney General for National Security. “These chips are the product of American ingenuity, and NSD will continue to enforce our export-control laws to protect that advantage.”

“The FBI’s investigation revealed that Liaw, Chang, and Sun allegedly conspired to sell billions of dollars’ worth of servers integrating sensitive, controlled graphic processing units to buyers in China, in violation of US export control laws,” said Assistant Director Roman Rozhavsky of the FBI’s Counterintelligence and Espionage Division.  “Controlling the export of sensitive US artificial intelligence technology is essential to safeguarding our national security and defending the homeland.  That’s why combating export violations is among the FBI’s highest priorities, and we will continue working with our law enforcement, private sector, and international partners to bring to justice all who take action to undermine US national security.”

“As alleged in the Indictment, the defendants participated in a systematic scheme to divert massive quantities of servers housing US artificial intelligence technology to customers in China,” said US Attorney Jay Clayton for the Southern District of New York. “They did so through a tangled web of lies, obfuscation, and concealment—all to drive sales and generate revenues in violation of US law. Diversion schemes like those disrupted today generate billions of dollars in ill-gotten gains and pose a direct threat to US national security.”

«

Obviously all those Nvidia GPUs had to get to China some way. Whether this is everyone who was doing it may be a different story.


• Why do social networks drive us a little mad?
• Why does angry content seem to dominate what we see?
• How much of a role do algorithms play in affecting what we see and do online?
• What can we do about it?
• Did Facebook have any inkling of what was coming in Myanmar in 2016?

Read Social Warming, my latest book, and find answers – and more.


Errata, corrigenda and ai no corrida: none notified

Posted by cks

For reasons, I've reached the point where I would like to be able to map IPv4 addresses into the organizations responsible for them, which is to say their Autonomous System Number (ASN), for use in DWiki, the blog engine of Wandering Thoughts. So today on the Fediverse I mused:

Current status: wondering if I can design an on-disk (read only) data structure of some sort that would allow a Python 2 program to efficiently map an IP address to an ASN. There are good in-memory data structures for this but you have to load the whole thing into memory and my Python 2 program runs as a CGI so no, not even with pickle.

(Since this is Python 2, about all I have access to is gdbm or rolling my own direct structure.)

Mapping IP addresses to ASNs comes up a lot in routing Internet traffic, so there are good in-memory data structures designed to let you answer these questions efficiently once you have everything loaded. But I don't think anyone really worries about on-disk versions of this information, which is the case I care about, although I only care about some ASNs (a detail I forgot to put in the Fediverse post).

Then I had a realization:

If I'm willing to do this by /24 (and I am) and represent the ASNs by 16-bit ints, I guess you can do this with a 32 Mbyte sparse file of two-byte blocks. Seek to the offset determined by the first three octets of the IP, read two bytes, if they're zero there's no ASN mapping we care about, otherwise they're the ASN in some byte order I'd determine.

If I don't care about the specific ASN, just a class of ASNs of interest of which there are at most 255, it's only 16 Mbytes.

(And if all I care about is a yes or no answer, I can represent each /24 by a bit, so the storage required drops even more, to only 2 Mbytes.)

This Fediverse post has a mistake. I thought ASNs were 16-bit numbers, but we've gone well beyond that by now. So I would want to use the one-byte 'class of ASN' approach, with ASNs I don't care about mapping to a class of zero. Alternately I could expand to storing three bytes for every /24, or four bytes to stay aligned with filesystem blocks.

That storage requirement is 'at most' because this will be a Unix sparse file, where filesystem blocks that aren't written to aren't stored on disk; when read, the data in them is all zero. The lookup is efficient, at least in terms of system calls; I'd open the file, lseek() to the position, and read two bytes (causing the system to read a filesystem block, however big that is). Python 2 doesn't have access to pread() or we could do it in one system call.
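As a sketch of the whole scheme, here is a minimal Python 3 version (cks is constrained to Python 2, but the technique is identical; all names here are mine, and this stores a 16-bit "ASN class" per /24 as described):

```python
import os
import struct
import tempfile

RECORD = 2                       # two bytes per /24 record
FILE_SIZE = (1 << 24) * RECORD   # 32 MiB covers every possible /24

def record_offset(ip):
    """File offset of the record for this IPv4 address's /24."""
    a, b, c, _ = (int(x) for x in ip.split("."))
    return ((a << 16) | (b << 8) | c) * RECORD

def create_mapfile(path):
    # truncate() without writing makes a sparse file: unwritten
    # blocks take no disk space and read back as zeros.
    with open(path, "wb") as f:
        f.truncate(FILE_SIZE)

def set_asn_class(path, ip_in_block, asn_class):
    with open(path, "r+b") as f:
        f.seek(record_offset(ip_in_block))
        f.write(struct.pack(">H", asn_class))  # big-endian 16-bit class

def lookup_asn_class(path, ip):
    # One open/seek/read per query: fine when a CGI process
    # typically makes a single lookup before exiting.
    with open(path, "rb") as f:
        f.seek(record_offset(ip))
        (cls,) = struct.unpack(">H", f.read(RECORD))
        return cls  # zero means "no ASN class we care about"

path = os.path.join(tempfile.mkdtemp(), "asnmap.dat")
create_mapfile(path)
set_asn_class(path, "192.0.2.0", 7)            # tag 192.0.2.0/24 as class 7
print(lookup_asn_class(path, "192.0.2.44"))    # 7: same /24
print(lookup_asn_class(path, "10.1.2.3"))      # 0: untouched, sparse zeros
```

The only byte-order decision is the `>H` in `struct`; any fixed order works as long as writer and reader agree.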

Within the OS this should be reasonably efficient, because if things are active much of the important bits of the mapping file will be cached into memory and won't have to be read from disk. 32 Mbytes is nothing these days, at least in terms of active file cache, and much of the file will be sparse anyway. The OS obviously has reasonably efficient random access to the filesystem blocks of the file, whether in memory or on disk.

This is a fairly brute force approach that's only viable if you're typically making a single query in your process before you finish. It also feels like something that is a good fit for Unix because of sparse files, although 16 Mbytes isn't that big these days even for a non-sparse file.

Realizing the brute force approach feels quite liberating. I've been turning this problem over in my mind for a while but each time I thought of complicated data structures and complicated approaches and it was clear to me that I'd never implement them. This way is simple enough that I could actually do it and it's not too impractical.

PS: I don't know if I'll actually build this, but every time a horde of crawlers descends on Wandering Thoughts from a cloud provider that has a cloud of separate /24s and /23s all over the place, my motivation is going to increase. If I could easily block all netblocks of certain hosting providers all at once, I definitely would.

(To get the ASN data there's pyasn (also). Conveniently it has a simple on-disk format that can be post-processed to go from a set of CIDRs that map to ASNs to a data file that maps from /24s to ASN classes for ASNs (and classes) that I care about.)
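If you go the pyasn route, its bundled command-line utilities handle fetching and converting the routing data. A sketch (script names as documented by pyasn; verify against your installed version):

```shell
# Install pyasn, download a recent BGP RIB dump, and convert it
# to pyasn's on-disk prefix-to-ASN format.
pip install pyasn
pyasn_util_download.py --latest
pyasn_util_convert.py --single rib.*.bz2 ipasn.dat
```

The resulting `ipasn.dat` is a plain text file of CIDR-to-ASN lines, which is what would get post-processed into the /24 data file described above.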

Update: After writing most of this entry I got enthused and wrote a stand-alone preliminary implementation (initially storing full ASNs in four-byte records), which can both create the data file and query it. It was surprisingly straightforward and not very much code, which is probably what I should have expected since the core approach is so simple. With four-byte records, a full data file of all recent routes from pyasn is about 53 Mbytes and the data file can be created in less than two minutes, which is pretty good given that the code writes records for about 16.5 million /24s.

(The whole thing even appears to work, although I haven't strongly tested it.)

Posted by cks

Today I made an unpleasant discovery about virt-manager on my (still) Fedora 42 machines that I shared on the Fediverse:

This is my face that Fedora virt-manager appears to have been defaulting to external snapshots for some time and SURPRISE, external snapshots can't be reverted by virsh. This is my face, especially as it seems to have completely screwed up even deleting snapshots on some virtual machines.

(I only discovered this today because today is the first time I tried to touch such a snapshot, either to revert to it or to clean it up. It's possible that there is some hidden default for what sort of snapshot to make and it's only been flipped for me.)

Neither virt-manager nor virsh will clearly tell you about this. In virt-manager you need to click on each snapshot and if it says 'external disk only', congratulations, you're in trouble. In virsh, 'virsh snapshot-list --external <vm>' will list external snapshots, and then 'virsh snapshot-list --tree <vm>' will tell you if they depend on any internal snapshots.

My largest problems came from virtual machines where I had earlier internal snapshots and then I took more snapshots, which became external snapshots from Fedora 41 onward. You definitely can't revert to an external snapshot in this situation, at least not with virsh or virt-manager, and the error messages I got were generic ones about not being able to revert external snapshots. I haven't tested reverting external snapshots for a VM with no internal ones.

(Not being able to revert to external snapshots is a long standing libvirt issue, but it's possible they now work if you only have external snapshots. Otherwise, Fedora 41 and Fedora 42 defaulting to external snapshots is extremely hard to understand (to be polite).)

Update: you can revert an external snapshot in the latest libvirt if all of your snapshots are external. You can't revert them if libvirt helpfully gave you external snapshots on top of internal ones by switching the default type of snapshots (probably in Fedora 41).

If you have an external snapshot that you need to revert to, all I can do is point to a libvirt wiki page on the topic (although it may be outdated by now) along with libvirt's documentation on its snapshot XML. I suspect that there is going to be suffering involved. I haven't tried to do this; when it came up today I could afford to throw away the external snapshot.

If you have internal snapshots and you're willing to throw away the external snapshot and what's built on it, you can use virsh or virt-manager to revert to an internal snapshot and then delete the external snapshot. This leaves the external snapshot's additional disk file or files dangling around for you to delete by hand.

If you have only an external snapshot, it appears that libvirt will let you delete the snapshot through 'virsh snapshot-delete <vm> <external-snapshot>', which preserves the current state of the machine's disks. This only helps if you don't want the snapshot any more, but this is one of my common cases (where I take precautionary snapshots before significant operations and then get rid of them later when I'm satisfied, or at least committed).

The worst situation appears to be if you have an external snapshot made after (and thus on top of) an earlier internal snapshot and you want to keep the live state of things while getting rid of the snapshots. As far as I can tell, it's impossible to do this through libvirt, although some of the documentation suggests that you should be able to. The process outlined in libvirt's Merging disk image chains didn't work for me (see also Disk image chains).

(If it worked, this operation would implicitly invalidate the snapshots and I don't know how you get rid of them inside libvirt, since you can't delete them normally. I suspect that to get rid of them, you need to shut down all of the libvirt daemons and then delete the XML files that (on Fedora) you'll find in /var/lib/libvirt/qemu/snapshot/<domain>.)

One reason to delete external snapshots you don't need is if you ever want to be able to easily revert snapshots in the future. I wouldn't trust making internal snapshots on top of external ones, if libvirt even lets you, so if you want to be able to easily revert, it currently appears that you need to have and use only internal snapshots. Certainly you can't mix new external snapshots with old internal snapshots, as I've seen.

(The 5.1.0 virt-manager release will warn you to not mix snapshot modes and defaults to whatever snapshot mode you're already using. I don't know what it defaults to if you don't have any snapshots, I haven't tried that yet.)

Sidebar: Cleaning this up on the most tangled virtual machine

I've tried the latest preview releases of the libvirt stuff, but they don't make a difference in the most tangled situation I have:

$ virsh snapshot-delete hl-fedora-36 fedora41-preupgrade
error: Failed to delete snapshot fedora41-preupgrade
error: Operation not supported: deleting external snapshot that has internal snapshot as parent not supported

This VM has an internal snapshot as the parent because I didn't clean up the first snapshot (taken before a Fedora 41 upgrade) before making the second one (taken before a Fedora 42 upgrade).

In theory one can use 'virsh blockcommit' to reduce everything down to a single file, per the knowledge base section on this. In practice it doesn't work in this situation:

$ virsh blockcommit hl-fedora-36 vda --verbose --pivot --active
error: invalid argument: could not find base image in chain for 'vda'

(I tried with --base too and that didn't help.)

I was going to attribute this to the internal snapshot but then I tried 'virsh blockcommit' on another virtual machine with only an external snapshot and it failed too. So I have no idea how this is supposed to work.

Since I could take a ZFS snapshot of the entire disk storage, I chose violence, which is to say direct usage of qemu-img. First, I determined that I couldn't trivially delete the internal snapshot before I did anything else:

$ qemu-img snapshot -d fedora40-preupgrade fedora35.fedora41-preupgrade
qemu-img: Could not delete snapshot 'fedora40-preupgrade': snapshot not found

The internal snapshot is in the underlying file 'fedora35.qcow2'. Maybe I could have deleted it safely even with an external thing sitting on top of it, but I decided not to do that yet and proceed to the main show:

$ qemu-img commit -d fedora35.fedora41-preupgrade
Image committed.
$ rm fedora35.fedora41-preupgrade

Using 'qemu-img info fedora35.qcow2' showed that the internal snapshot was still there, so I removed it with 'qemu-img snapshot -d' (this time on fedora35.qcow2).

All of this left libvirt's XML drastically out of step with the underlying disk situation. So I removed the XML for the snapshots (after saving a copy), made sure all libvirt services weren't running, and manually edited the VM's XML, where it turned out that all I needed to change was the name of the disk file. This appears to have worked fine.
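The by-hand XML edit can be sketched with the standard library (a sketch under the assumption of the usual libvirt domain XML layout, with disks at devices/disk and a file= attribute on their source element; the function name is mine):

```python
import xml.etree.ElementTree as ET

def repoint_disk(domain_xml, old_file, new_file):
    """Return domain XML with the disk that pointed at old_file
    repointed at new_file, leaving everything else untouched."""
    root = ET.fromstring(domain_xml)
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        if src is not None and src.get("file") == old_file:
            src.set("file", new_file)
    return ET.tostring(root, encoding="unicode")
```

As in the entry, this only helps if all of the libvirt daemons are stopped first, so that nothing rewrites the XML behind your back.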

I suspect that I could have skipped manually removing the internal snapshot and its XML and libvirt would then have been happy to see it and remove it.

(I'm writing all of the commands and results down partly for my future reference.)

posted by [syndicated profile] apod_feed at 05:15am on 25/03/2026

https://apod.nasa.gov/apod/astropix.html

In the words of today's astrophotographer, Rositsa Dimitrova, "What have these silent sentinels watched"
jamethiel: A woman running past the camera, looking strong (Running)
posted by [personal profile] jamethiel at 04:10pm on 25/03/2026
hannah: (Laundry jam - fooish_icons)
posted by [personal profile] hannah at 09:56pm on 24/03/2026
It took me about an hour and a half to walk about four miles today. I had a couple of hours to get from 72nd street down to 4th street, so I figured I might as well go on foot to use the time. I didn't get a lot of thinking done, which I put down to having to keep dodging and weaving through crowds - that kind of thing's easier when there's nobody in my way, on foot or any other method of transportation. Which is on me for sticking to a busy street at a busy time of day rather than walking a few blocks over and trying that.

There's also my head's not here or there, and I need to find some space to drift.
Mood:: 'sore' sore
Music:: nothing now
billroper: (Default)
posted by [personal profile] billroper at 09:38pm on 24/03/2026
The bug list having cleared, I've embarked on a small side project at work. It's already had one useful result, which I've checked into the code base and which will eventually get merged and shipped. Now, I'm going to use that useful result to see if I can make the side project work in a short period of time, like a couple of days.

Naturally, a new bug came in during the middle of this, but I fixed it and dispatched it back to the reporter, so I hope to be able to continue on that side project tomorrow. :)
posted by [syndicated profile] questionable_content_feed at 10:03pm on 24/03/2026

a thousand beaks, a million talons, ten billion eyes. RIP Ms. Beakman, you beautiful bird

Posted by Mark Nottingham

I’ve previously looked at using AI as a tool to evaluate technical standards efforts – basically, asking commercially available chatbots what they think. However, “AI” is more than off-the-shelf, general-purpose chatbots. Can we do better by grounding the model in a specific context?

I’ve been looking for ways to use NotebookLM for a while: grounding a chatbot in a specific set of documents allows you to interact with them in a genuinely new way.

The breakthrough question for me was simple: What if those documents were the records of a working group? Thanks to record-keeping requirements, meetings need to keep minutes, document drafts are available, and often groups keep additional information like issue lists and meeting transcripts.

Feed all of that into NotebookLM and you can effectively chat with the history of a standards effort – asking about why a particular choice was made, who participated, what objections came up, and how a specification evolved.

I suspect this capability could be significant, precisely because the barriers to entry for tracking and understanding standards work are so high. There is simply too much going on — too many emails, issues, and drafts — for most people to follow.

If successful, this technique might help make standards efforts more legible to:

  • New or casual participants, who currently face a “wall of text” when trying to catch up on years of debate.
  • Product managers and developers, who need to understand the intent behind a specification, not just the syntax.
  • Civil society and policymakers, for whom the technical archives are often effectively opaque.

AI Preferences

My first go at this technique was in a working group I chair, AI Preferences. We needed a way to get new and casual participants up to speed on discussions, so that we didn’t need to keep repeating the same arguments.

Here’s the notebook I created.1 I asked it to summarise the arguments against proposals for a “use” term and a “search” term in the vocabulary.

Privately, I got feedback from new participants that these were very useful – and, critically, I was able to create them without injecting my own biases.

GEOPRIV

Another test case is the now-finished IETF work on Geolocation Privacy. I wasn’t involved in this group, but have long heard my IETF colleagues whisper about it in hushed tones; it didn’t succeed, and caused a lot of pain on the way there.

After gathering the relevant documents and dragging them into a notebook,1 I asked:

Why did GEOPRIV fail?

Here’s the full response. Martin Thomson (who was intimately involved in that work) reviewed that answer and said:

The privacy part is broadly correct. The whole on-behalf-of arrangement did lead to some fairly bitter fights. […] Fights were common. The part about wars is entirely accurate. I’m not sure about the over-engineering part, though maybe that relates to the privacy aspect, which is fair. The final thing about lack of commercial success is broadly right, modulo successful deployments for emergency services geolocation.

So I’d say that this is maybe 80%.

A New Tool

The hard part of all of this is getting all of the documents together in one place to feed into NotebookLM. To make that easier, at least for IETF groups, I2 created a new tool, ietf-notebook.

You can install it using pipx:

pipx install ietf-notebook

Then, use it to gather all of a group’s drafts, RFCs, meeting minutes and transcripts, its charter, and optionally its GitHub issues into a directory, ready for dragging into a new notebook, so you can chat with that group’s history.

It’s still rough, so bug reports, suggestions, and improvements are most welcome. In my experience, it takes less than a minute to gather the documents for most groups, so you can be chatting with a group in almost no time.

If you want to see a demo first, check out the notebooks for AIPREF, DIEM, and GEOPRIV.1

  1. You’ll need to be logged into Google to use these notebooks.  2 3

  2. OK, Gemini. 

March 24th, 2026

Posted by Emery Winter

The same blog behind the fabricated Ruffalo tale shared a nearly identical fake story about Adam Sandler.
