What are Moonshots and how do we build them?
…Plus, why Moonshots are hard…
AI researcher Eirini Malliaraki has read a vast pile of bureaucratic documents to try to figure out how to make ‘moonshots’ work – the result is a useful overview of the ingredients of societal moonshots, plus ideas for how to create more of them.
A moonshot, as a reminder, is a massive project that, according to Malliaraki, “has the potential to change the lives of dozens of millions of people for the better; encourages new combinations of disciplines, technologies and industries; has multiple, bottom-up diverse solutions; presents a clear case for technical and scientific developments that would otherwise be 5–7x more difficult for any actor or group of actors to tackle”. Good examples of successful moonshots include the Manhattan Project, the Moon Landing, and the sequencing of the human genome.
What’s hard about Moonshots? Moonshots are challenging because they require sustained effort over multiple years, significant amounts of money (though money alone can’t create a moonshot), and infrastructure to ensure they work over the long term. “Moonshots need to be managed through an agile (cliche) and adaptive process as they may run over several years and involve hundreds of organisations and individuals. A lot of thinking has gone into appropriate funding structures, less so into creating ‘attractors’ for organisational and systemic collaborations,” Malliaraki notes.
Why this matters: Silver bullets aren’t real and don’t kill werewolves, but Moonshots can be real and – if scoped well – can kill the proverbial werewolf. I want to live in a world where society is constantly gathering resources to create more of these silver bullets – not only is it more exciting, but it’s also one of the best ways for us to make massive scientific progress. “I want to see many more technically ambitious, directed and interdisciplinary moonshots that are fit for the complexities and social realities of the 21st century and can get us faster to a safe and just post-carbon world,” Malliaraki writes – hear, hear!
Read more: Architecting Moonshots (Eirini Malliaraki, Medium).
###################################################
Walmart cancels robotics push:
…Ends ties with startup, after saying in January it planned to roll the robots out to 1,000 stores…
Walmart has cut ties with Bossa Nova Robotics, a robot startup, according to the Wall Street Journal. That’s an abrupt change from January of this year, when Walmart said it was planning to roll the robots out to 1,000 of its 4,700 U.S. stores.
Why this matters: Robots, at least those used in consumer settings, seem like error-prone, ahead-of-their-time machines that are having trouble finding their niche. It is perhaps instructive that we see a ton of activity in the drone space, where many of the problems relating to navigation and human interaction aren’t present. Perhaps today’s robot hardware and perception algorithms need to be more refined before they can be adopted en masse?
Read more: Walmart Scraps Plan to Have Robots Scan Shelves (Wall Street Journal).
Read more: Bossa Nova’s inventory robots are rolling out in 1,000 Walmart stores (TechCrunch, January).
###################################################
Paid Job: Work with Jack and others to help analyze data and contribute to the AI Index!
The AI Index at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) is looking for a part-time Graduate Researcher to focus on bibliometric analyses and on curating technical progress for the annual AI Index Report. Specific tasks include extracting and validating technical performance data in domains like NLP, CV, and ASR; developing bibliometric analyses; analyzing GitHub data with Colabs; and running Python scripts to help evaluate systems in the theorem-proving domain. This is a paid position with 15-20 hours of work per week. Send links to papers you’ve authored and a GitHub page or other proof of interest in AI, if any, to dzhang105@stanford.edu. Master’s or PhD preferred. Job posting here.
Specific requirements:
– US-based.
– Pacific timezone preferred.
PS – I’m on the Steering Committee of the AI Index and spend several hours a week working on it, so you’ll likely work with me some of the time in this role.
###################################################
What happens when an AI tries to complete Brian Eno? More Brian Eno!
Some internet-dweller has used OpenAI Jukebox, a generative model for music, to try to turn the Windows 95 startup sound into a series of different musical tracks. The results are, at times, quite compelling, and I’m sure would be of interest to Brian Eno, who composed the original sound (and 83 variants of it).
Listen here: Windows 95 Startup Sound but an AI attempts to continue the song. [OpenAI Jukebox].
Via Caroline Foley, Twitter.
###################################################
Think you can spot GAN faces easily? What if someone fixes the hair generation part? Still confident?
…International research team tackle one big synthetic image problem…
Recently, AI technology has matured enough that some AI models can generate synthetic images of people that look real. Some of these images have subsequently been used by advertisers, political campaigns, spies, and fraudsters to communicate with (and mislead) people. But GAN aficionados have so far been able to spot these synthetic images, for instance by looking at the quality of the background, how the earlobes connect to the head, the placement of the eyes, the quality of the hair, and so on.
Now, researchers with the University of Science and Technology of China, Snapchat, Microsoft Cloud AI, and the City University of Hong Kong have developed ‘MichiGAN’, technology that lets them generate synthetic images with realistic hair.
How MichiGAN works: The tech uses a set of specific modules to disentangle hair into distinct attributes – shape, structure, appearance, and background – and these modules then work together to guide realistic generation. The researchers also build this into an interactive hair editing system “that enables straightforward and flexible hair manipulation through intuitive user inputs”.
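To make the pattern concrete, here’s a minimal, hypothetical sketch (in PyTorch) of the general multi-module conditioning idea – this is not the authors’ code, and all class names, channel counts, and input formats below are invented for illustration: each disentangled attribute gets its own encoder, and a shared decoder fuses their outputs into an image.

```python
# Hypothetical sketch of multi-module conditional generation, loosely
# in the spirit of MichiGAN's attribute disentanglement (not its code).
import torch
import torch.nn as nn

class AttributeEncoder(nn.Module):
    """Encodes one condition (e.g. hair mask, structure map, appearance
    reference, or background image) into a feature map."""
    def __init__(self, in_channels, out_channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class HairGenerator(nn.Module):
    """Fuses per-attribute condition features and emits an RGB image."""
    def __init__(self):
        super().__init__()
        # One encoder per disentangled attribute (channel counts invented).
        self.shape = AttributeEncoder(1)        # binary hair mask
        self.structure = AttributeEncoder(2)    # orientation/structure map
        self.appearance = AttributeEncoder(3)   # reference hair appearance
        self.background = AttributeEncoder(3)   # non-hair background
        self.decoder = nn.Sequential(
            nn.Conv2d(32 * 4, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )
    def forward(self, shape, structure, appearance, background):
        feats = torch.cat([
            self.shape(shape), self.structure(structure),
            self.appearance(appearance), self.background(background),
        ], dim=1)
        return self.decoder(feats)

g = HairGenerator()
out = g(torch.rand(1, 1, 64, 64), torch.rand(1, 2, 64, 64),
        torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 3, 64, 64])
```

The value of this kind of separation is that each attribute can be edited independently at inference time – swap the appearance input while keeping the others fixed, and only the hair’s look changes.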
Why this matters: GANs have gone from an in-development line of research to a technology useful enough to be rapidly integrated into products – one can imagine future versions of Snapchat letting people edit their hairstyle, for instance.
Read more: MichiGAN: Multi-Input-Conditioned Hair Image Generation for Portrait Editing (arXiv).
Get the code here (MichiGAN, GitHub).
###################################################
Google turns its supercomputers to training more efficient networks:
…Big gulp computation comes for EfficientNets…
Google has used a supercomputer’s worth of computation to train an ‘EfficientNet’ architecture network. Specifically, Google was recently able to cut the training time of an EfficientNet model from 23 hours on 8 TPU-v2 cores to around an hour by training across 1024 TPU-v3 cores at once. EfficientNets are a family of networks, developed predominantly by Google, that are somewhat complicated to train but more efficient to run once trained.
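The basic arithmetic of data-parallel scaling is simple enough to sketch. The numbers below are invented for illustration and are not from the paper – Google’s actual recipe involves careful learning-rate scaling and other large-batch tricks – but they show why adding cores shrinks wall-clock time: each synchronous step consumes a global batch that grows with the core count, so fewer steps are needed overall.

```python
# Back-of-the-envelope model of synchronous data-parallel training.
# All figures here are illustrative, NOT from the paper.
import math

def estimated_hours(num_cores, per_core_batch=64, epochs=350,
                    dataset_size=1_281_167, steps_per_sec=10.0,
                    comm_overhead=0.05):
    """Rough wall-clock estimate: bigger global batch -> fewer steps,
    minus a small synchronisation penalty per doubling of core count."""
    global_batch = per_core_batch * num_cores            # images per step
    total_steps = epochs * dataset_size / global_batch   # fewer steps as cores grow
    effective_steps_per_sec = steps_per_sec / (1 + comm_overhead * math.log2(num_cores))
    return total_steps / effective_steps_per_sec / 3600

print(f"   8 cores: ~{estimated_hours(8):.0f} hours")
print(f"1024 cores: ~{estimated_hours(1024) * 60:.0f} minutes")
```

In practice the gains aren’t free: very large global batches tend to need learning-rate adjustments and distribution-aware tricks to reach the same accuracy, which is the kind of thing the paper addresses.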
Why this matters: The paper goes into some of the technical details of how Google trained these models, but the larger takeaway is more surprising: it can be efficient to train at large scales, which means a) more people will train massive models and b) we’re going to get faster at training new models. One of the rules of machine learning is that when you cut the time it takes to train a model, organizations with the computational resources will train more models, which means they’ll learn more relative to other orgs. The hidden message here is that Google’s research team is building the tools that let it speed itself up.
Read more: Training EfficientNets at Supercomputer Scale: 83% ImageNet Top-1 Accuracy in One Hour (arXiv).
###################################################
AI Policy with Matthew van der Merwe:
…Matthew van der Merwe brings you views on AI and AI policy; I (lightly) edit them…
Pope Francis is praying for aligned AI:
Pope Francis shares a monthly ‘prayer intention’ with Catholics around the world. For November, he asks them to pray for AI that is aligned and beneficial to humanity. This is not the Pope’s first foray into these issues — earlier in 2020, the Vatican released the ‘Rome Call for AI Ethics’, whose signatories include Microsoft and IBM.
His message in full: “Artificial intelligence is at the heart of the epochal change we are experiencing. Robotics can make a better world possible if it is joined to the common good. Indeed, if technological progress increases inequalities, it is not true progress. Future advances should be oriented towards respecting the dignity of the person and of Creation. Let us pray that the progress of robotics and artificial intelligence may always serve humankind… we could say, may it ‘be human.’”
Read more: Pope Francis’ video message (YouTube).
Read more: Rome Call for AI Ethics.
Crowdsourcing forecasts on tech policy futures:
CSET, at Georgetown University, have launched Foretell — a platform for generating forecasts on important political and strategic questions. This working paper outlines the methodology and some preliminary results from pilot programs.
Method: One obstacle to leveraging the power of forecasting in domains like tech policy is that we are often interested in messy outcomes — e.g. will US-China tensions increase or decrease by 2025? Will the US AI sector boom or decline? This paper shows how we can make such questions more tractable by constructing proxies from quantitative metrics with historical track records — e.g. to forecast US-China tensions, we can forecast trends in the volume of US-China trade, the number of US visas granted to Chinese nationals, and so on. In the pilot study, crowd forecasts tentatively suggest increased US-China tensions over the next 5 years.
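As a toy illustration of the proxy approach (the metrics, figures, and sign conventions below are all invented, not CSET’s): fit a trend to each measurable proxy, orient each trend by how it relates to the fuzzy outcome, and average them into a single directional signal.

```python
# Toy sketch: turn "will US-China tensions rise?" into trends over
# concrete proxy metrics. All data and signs below are invented.
from statistics import linear_regression  # Python 3.10+

# Hypothetical monthly observations: (month index, values, sign).
# sign = -1 means a FALLING metric indicates RISING tensions.
proxies = {
    "us_china_trade_usd_bn": ([1, 2, 3, 4, 5, 6],
                              [48.0, 47.1, 46.5, 45.2, 44.8, 43.9], -1.0),
    "us_visas_to_chinese_k": ([1, 2, 3, 4, 5, 6],
                              [30.2, 29.5, 28.1, 27.4, 26.0, 25.3], -1.0),
}

def tension_signal(proxies):
    """Average the sign-adjusted, scale-normalised trend of each proxy."""
    total = 0.0
    for months, values, sign in proxies.values():
        slope, _intercept = linear_regression(months, values)
        mean = sum(values) / len(values)
        total += sign * slope / mean   # normalise so units don't matter
    return total / len(proxies)

s = tension_signal(proxies)
print(f"tensions trending {'up' if s > 0 else 'down'} ({s:+.4f}/month)")
```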
Learn more and register as a forecaster at Foretell.
Read more: Future Indices — how crowd forecasting can inform the big picture (CSET).
(Jack – Also, I’ve written up one particular ‘Foretell’ forecast for CSET relating to AI, surveillance, and covid – you can read it here).
###################################################
Tech Tales:
Down and Out Below The Freeway
[West Oakland, California, 2025]
He found the drone on the sidewalk, by the freeway offramp. It was in a hard carry case, which he picked up and took back to the encampment – a group of tents, hidden in the fenced-off slit of land that split the freeway from the offramp.
“What’ve you got there, ace?” said one of the people in the camp.
“Let’s find out,” he said, flicking the catches to open the case. He stared at the drone, which sat inside a carved out pocket of black foam, along with a controller, a set of VR goggles, and some cables.
“Wow,” he said.
“That’s got to be worth a whole bunch,” said someone else.
“Back off. We’re not selling it yet,” he said, looking at it.
He could remember seeing an advert for an earlier version of this drone. He’d been sitting in a friend’s squat, back at the start of his time as a “user”. They were surfing through videos on YouTube – ancient aliens, underwater ruins, long half-wrong documentaries on quantum physics, and so on. Then they found a video of a guy exploring some archaeological site, deep in the jungles of South America. The guy in the video had white teeth and the slightly pained expression of the rich-by-birth. “Check this out, guys, I’m going to use this drone to help us find an ancient temple, which was only discovered by satellites recently. Let’s see what we find!” The rest of the video consisted of the guy flying the drone around the jungle, soundtracked to pumping EDM, and concluded with the reveal – some yellowing old rocks, mostly covered in vines and other vegetation – but remarkable nonetheless.
“That shit is old as hell,” said Ace’s friend.
“Imagine how much money this all cost,” said Ace. “Flight to South America. Drone. Whoever is filming him. Imagine what we’d do with that?”
“Buy a lot of dope!”
“Yeah, sure,” Ace said, looking at the videos. “Imagine what this place would look like from a drone. A junkie and their drone! We’d be a hit.”
“Sure, boss,” said his friend, before leaning over some tinfoil with a lighter.
Ace stared at the drone while it charged. They’d had to go scouting for a couple of cables to convert from the generator to a battery to something the drone could plug into, but they’d figured it out and, after he traded away some cigarettes for the electricity, they’d hooked it up. He studied the instruction manual while it charged. Then, once it was done, he put the drone in a clearing between the tents, turned it on, put the goggles on, and took flight.
The drone began to rise up from the encampment, and with it so did Ace. He looked through the goggles at the view from a camera slung on the underside of the drone and saw:
– Tents and mud and people wearing many jackets, surrounded by trees and…
– Cars flowing by on either side of the encampment: metal shapes with red and yellow lights coming off the freeway on one side, and a faster and larger river of machines on the other, and…
– The grid of the neighborhood nearby; backyards, some with pools and others with treehouses. Lights strung up in backyards. Grills. And…
– Some of the large mixed-use residential-office luxury towers, casting shadows on the surrounding neighborhood, windows lit up but hard to see through. And…
– The larger city, laid out with all of its cars and people in different states of life in different houses, with the encampment now easy to spot, highlighted on either side by the rivers of light from the cars, and distinguished by its darkness relative to everything else within the view of the drone.
Ace told the drone to fly back down to the encampment, then took the goggles off. He turned them over in his hands and looked at them, as he heard the hum of the drone approaching. When he looked down at his feet and the muddy ground he sat upon, he could imagine he was in a jungle, or a hidden valley, or a field surrounded on all sides by trees full of owls, watching him. He could be anywhere.
“Hey Ace, can I try that,” someone said.
“Gimme a minute,” he said, looking at the ground.
He didn’t want to look to either side of him, where he’d see a tent, and half an oil barrel that they’d start a fire in later that night. Didn’t want to look ahead at his orange tent and the half-visible pile of clothes and water-eaten books inside it.
So he just sat there, staring at the goggles in his hand and the ground beneath them, listening to the approaching hum of the drone.
Did some family not need it anymore, and pull over coming off the freeway and leave it on the road?
Did someone lose it – were they planning to film the city and perhaps make a documentary showing what Ace saw and how certain people lived?
Was it the government? Did they want to start monitoring the encampments, and someone went off for a smoke break just long enough for him to find the machine?
Or could it be a good samaritan who had made it big on crypto internet money or something else – maybe making videos on YouTube about the end of the universe, which hundreds of millions of people had watched. Maybe they wanted someone like Ace to find the drone, so he could put the goggles on and travel to places he couldn’t – or wouldn’t be allowed to – visit?
What else can I explore with this, Ace thought.
What else of the world can I see?
Where shall I choose to go, taking flight in my box of metal and wire and plastic, powered by generators running off of stolen gasoline?
Things that inspired this story: The steady advance of drone technology as popularized by DJI, etc; homelessness and homeless people; the walk I take to the art studio where I write these fictions and how I see tents and cardboard boxes and people who don’t have a bed to sleep in tell me ‘America is the greatest country of the world’; the optimism that comes when anyone on this planet wakes up and opens their eyes not knowing where they are as they shake the bonds – or freedoms – of sleep; hopelessness in recent years and hope in recent days; the brightness in anyone’s eyes when they have the opportunity to imagine.