Monday, September 2, 2019

Import AI 162: How neural nets can help us model monkey brains; Ozzie chap goes fishing with DIY drone; why militaries bet on supercomputers for weather prediction

Better multiagent learning through OpenSpiel:
…DeepMind releases research framework containing 20+ games, plus a variety of ready-to-use algorithms…
Researchers with DeepMind, Google, and the University of Alberta have developed OpenSpiel, a tool to make it easier for AI researchers to conduct research into multi-agent reinforcement learning. Tools like OpenSpiel will help AI developers test out their algorithms on a variety of different environments, while comparing them to strong, well-documented baselines. “The purpose of OpenSpiel is to promote general multiagent reinforcement learning across many different game types, in a similar way as general game-playing, but with a heavy emphasis on learning and not in competition form,” they write.

What’s in OpenSpiel? OpenSpiel contains more than 20 games, ranging from Connect Four, to Chess, to Go, to Hex, and so on. It also ships with a variety of inbuilt AI algorithms, ranging from reinforcement learning methods (DQN, A2C, etc), to algorithms for multi-agent learning (some fantastic names here: Neural Fictitious Self-Play! Regret Policy Gradients!), to basic search approaches (e.g., Monte Carlo tree search), and more. The software also ships with a bunch of visualization tools to help people plot the performance of their algorithms.
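
For a flavor of the interface, here’s a minimal sketch of playing one of the bundled games with uniformly random moves, via OpenSpiel’s ‘pyspiel’ Python bindings (the calls follow the project’s documented API, but treat the details as illustrative):

    import random
    import pyspiel

    # Load a bundled game and play it out with uniformly random actions.
    game = pyspiel.load_game("connect_four")
    state = game.new_initial_state()
    while not state.is_terminal():
        state.apply_action(random.choice(state.legal_actions()))
    print(state.returns())  # per-player returns, e.g. [1.0, -1.0]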

Why this matters: Frameworks like OpenSpiel are one of the best ways researchers can get a sense of progress in a given domain of AI research. As with all new frameworks, we’ll need to revisit it in a few months to see if many researchers have adopted it. If they have, then we’ll have a new, meaningful signal to use to give us a sense of AI progress.
   Read more: OpenSpiel: A Framework for Reinforcement Learning in Games (Arxiv).
   Get the code here (OpenSpiel official GitHub).

####################################################

Hugging Face squeezes big AI models into small spaces with distillation:
…Want 95% of BERT’s performance in only 66 million parameters? Try DistilBERT…
In the last couple of years, organizations have started producing significantly larger, more capable language models. These models – BERT, GPT-2, NVIDIA’s ‘MegatronLM’, Grover – are highly capable, but are also expensive to deploy, mostly because of how large their networks are. Remember: the larger the network, the more memory it takes up on a device, and the more memory a model takes up, the harder it is to deploy.
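
To make that concrete, here’s a back-of-the-envelope calculation of weight memory alone (fp32 weights at 4 bytes each; real deployments also need memory for activations, and the parameter counts are the commonly cited figures for BERT-base and DistilBERT):

    # Rough fp32 weight memory: parameter count * 4 bytes.
    bert_base_params = 110_000_000   # BERT-base
    distilbert_params = 66_000_000   # DistilBERT
    print(bert_base_params * 4 / 1e6)    # ~440 MB of weights
    print(distilbert_params * 4 / 1e6)   # ~264 MB of weights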

Now, NLP startup Hugging Face has written an informative post laying out some of the techniques researchers could use to help them shrink down these networks. The result? They’re able to train a smaller language model called ‘DistilBERT’ via supervision from a (larger, more powerful) ‘BERT’ model. In tests, they show this model can obtain up to 95% of the performance of BERT on hard tasks (e.g., those found in the ‘GLUE’ corpus), while being much easier to deploy.
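
The core trick is training the student to match the teacher’s softened output distribution. Here’s a minimal sketch of that soft-target loss, assuming PyTorch (names are illustrative; Hugging Face’s full recipe also combines this with the usual masked-language-modeling loss and a cosine embedding loss):

    import torch
    import torch.nn.functional as F

    def soft_target_loss(student_logits, teacher_logits, T=2.0):
        # Soften both distributions with temperature T, then push the student
        # towards the teacher via KL divergence; T**2 rescales the gradients.
        teacher_probs = F.softmax(teacher_logits / T, dim=-1)
        student_log_probs = F.log_softmax(student_logits / T, dim=-1)
        return F.kl_div(student_log_probs, teacher_probs,
                        reduction="batchmean") * (T ** 2)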

Why this matters: For AI research to transition into AI deployment, it needs to be easy for people to deploy AI systems onto a broad range of devices with different computational characteristics. Work like ‘DistilBERT’ shows us how we might be able to waterfall from large-compute models (e.g., GPT-2, BERT) to mini-compute models (e.g., DistilBERT, and [hypothetical] DistilGPT-2), which will make it easier for more people to access AI systems like these.
   Read more: Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT (Medium).
   Get the code for the model here (Hugging Face, GitHub).

####################################################

Computers & military capacity: weather prediction:
…When computers define OODA loops…
In the military there’s a concept called an OODA loop, and it drives many aspects of military strategy. OODA is short for ‘Observe, Orient, Decide, Act’, and it describes the decision cycle followed by everyone from individual military units all the way up to the leaders of armies. One consequence is that military organizations want to shrink their OODA loop: for instance, by more rapidly integrating and updating observations, or by increasing their ability to rapidly make decisions.

Computers + OODA loops: Here’s one way in which militaries are trying to improve their OODA loops – more exquisite weather monitoring and analysis systems, which can help them better predict how weather patterns might influence military plans, and more rapidly adapt those plans. The key to these systems? More powerful supercomputers – the US military just bought three of them, and one will be dedicated to “operational weather forecasting and meteorology for both the Air Force and Army. In particular, the machine will be used to run the latest high-resolution, global and regional weather models, which will be used to support weather forecasts for warfighters as well as for environmental impacts related to operations planning,” according to a write-up in The Next Platform.

Why this matters: Supercomputers are going to have their strategic importance magnified by the arrival of increasingly capable compute-hungry AI systems, and we can expect military strategies to become more closely coupled with a military’s compute capacity over time. It’s all about the OODA loops, folks – and computers can do a lot of work here.
   Read more: US Military Buys Three Cray Supercomputers (The Next Platform).

####################################################

What do monkey brains and neural nets have in common? A lot, it turns out:
…Research suggests contemporary AI tools can approximate some of the neural circuits in a monkey brain…
Can software-based neural networks usefully approximate the (fuzzier, more complex) machinery of the organic brain? That’s a question researchers have been pondering since, well, the invention of neural nets by McCulloch and Pitts in the 1940s. These days we understand the brain much, much better than in the past, but the neural nets we use model neurons in a highly simplistic form relative to what goes on in organic brains (e.g., in organic brains neurons communicate via discrete ‘spikes’, whereas in most AI applications a neuron simply emits an activation value in response to its inputs). A valuable question is whether we can still use this neural net machinery to better simulate, approximate, and (hopefully) understand the brain.

Now, researchers from Deutsches Primatenzentrum GmbH, Stanford University, and the University of Goettingen have spent some time studying how macaque monkeys observe and grasp objects, and have developed a software simulation of this which – encouragingly – closely mirrors experimental data gathered from the monkeys themselves. “We bridge the gap between previous work in visual processing and motor control by modeling the entire processing pipeline from the visual input to muscle control of the arm and hand,” the authors write.

The magic of an mRNN: For this work, the researchers analyzed activity in the brains of two macaque monkeys while they grasped a diverse set of 48 objects, studying the neural circuits that activated in the monkeys’ brains as they did various things, like perceive an object and send out muscle activations to grasp it. Based on their observations, they designed several neural network architectures to model this, all oriented around training what they call a modular recurrent neural network (mRNN). “We trained an mRNN with sparsely connected modules mimicking cortical areas to use visual features from AlexNet to produce the muscle kinematics required for grasping,” they explained. “The differences between individual modules in the mRNN paralleled the differences between cortical regions, suggesting that the design of the mRNN model with visual input paralleled the hierarchy observed in the brain.”
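
To illustrate the basic shape of the idea (this is a hypothetical sketch, not the authors’ code): a small PyTorch model with three recurrent modules chained in a hierarchy, mapping visual features to muscle kinematics. The paper additionally enforces sparse connections between modules, which this sketch omits, and all dimensions here are made up:

    import torch
    import torch.nn as nn

    class ModularRNN(nn.Module):
        # Hypothetical mRNN-style sketch: three recurrent modules chained in
        # a hierarchy, loosely mirroring a visual -> premotor -> motor pathway.
        def __init__(self, feat_dim=4096, hidden=128, muscle_dim=50):
            super().__init__()
            self.hidden = hidden
            self.mod1 = nn.RNNCell(feat_dim, hidden)  # receives visual features
            self.mod2 = nn.RNNCell(hidden, hidden)    # intermediate module
            self.mod3 = nn.RNNCell(hidden, hidden)    # output module
            self.readout = nn.Linear(hidden, muscle_dim)  # muscle kinematics

        def forward(self, feats, steps=100):
            b = feats.shape[0]
            h1 = feats.new_zeros(b, self.hidden)
            h2 = feats.new_zeros(b, self.hidden)
            h3 = feats.new_zeros(b, self.hidden)
            outs = []
            for _ in range(steps):
                h1 = self.mod1(feats, h1)  # visual input drives module 1
                h2 = self.mod2(h1, h2)     # feedforward coupling between modules
                h3 = self.mod3(h2, h3)
                outs.append(self.readout(h3))
            return torch.stack(outs, dim=1)  # (batch, time, muscle channels)

Training such a model to reproduce recorded kinematics, then comparing each module’s hidden dynamics against recordings from the corresponding cortical area, is the flavor of analysis the paper describes.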

Why this matters: “Our results show that modeling the grasping circuit as an mRNN trained to produce muscle kinematics from visual features in a biologically plausible way well matches neural population dynamics and the difference between brain regions, and identifies a simple computational strategy by which these regions may complete this task in tandem,” they write. If further experimentation continues to show the robustness of this approach, then scientists may have a powerful new tool to use when thinking about the intersection between digital and organic intelligence. “We believe that the mRNN framework will provide an invaluable setting for hypothesis generation regarding inter-area communication, lesion studies, and computational dynamics in future neuroscience research”.
   Read more: A neural network model of flexible grasp movement generation (bioRxiv).

####################################################

DIY drones are getting really, really good:
…Daring Australian goes on a fishing expedition with a DIY drone…
Australian bureaucrats are wondering what to do about a man who used a DIY drone to go fishing. Specifically, the mysterious individual used the drone to lift a chair, to which he was tethered, high above a reservoir in Australia, and then fished from it. Australia’s Civil Aviation Safety Authority (CASA) isn’t quite sure what to do about the whole situation. “This is a first for Australia, to have a large homemade drone being used to lift someone off the ground,” Peter Gibson, a CASA spokesman, told ABC News.

Why this matters: Drones are entering their consumerization phase, which means we’re going to see more and more cases of people tweaking off-the-shelf drone technology for idiosyncratic purposes – like fishing! Policymakers would be better prepared for the implications of a world containing cheap, powerful drones if they invested more resources in tracking the usage of such technologies.
   Read more: Gone fly fishing: Video of angler dangling from drone under investigation (ABC News).

####################################################

AI Policy with Matthew van der Merwe:
…Matthew van der Merwe brings you views on AI and AI policy; I (lightly) edit them…

What will AGI look like? Reactions to Drexler’s service model:
AI systems taking the form of unbounded maximising agents pose some specific risks. E.g., for any objective we give such an agent, it will pursue certain instrumental goals, such as avoiding being turned off. But AI today doesn’t look much like this: Siri answers questions, but doesn’t have any overarching goal whose dogged pursuit will lead it to acquire large amounts of computing resources. Why, then, would we create such agents, given that we aren’t doing so now, and given the associated risks?

Services or agents: Drexler argues that we should instead expect AGI to look like lots of narrow AI services. There isn’t anything a unified agent could do that an aggregate of AI services could not; such a system would come without some of the risks of agential AI; and there is a clear pathway to this model from current AI systems. Critics object that there are benefits to agential AI that will create incentives to build such agents, in spite of the risks. Some tasks—like running a business—might require truly general intelligence, and agential AI might be significantly cheaper to train and deploy than a suite of AI services.

Emerging agency: Even if we grant that there won’t be good incentives to build agential AGI, some problems will re-emerge. For one, markets can be irrational, so AI development may steer towards building agential AGI despite good reasons not to. What’s more, agential behaviour could emerge from collections of non-agent AIs. Corporations are aggregates of individuals doing narrow tasks, from which agential behaviour can emerge: they can ruthlessly pursue some goal, act unboundedly in the world, and behave in ways their designers did not intend. So in an AI services world there will still be safety problems arising from agency, but these may differ from the ‘classic’ problems, and demand different solutions.

Why it matters: The AI safety problem is figuring out how to build robust and beneficial AGI in a state of uncertainty about when—and if—we will build it, and what it will look like. We need research aimed at better predicting whether AGI will look more like Drexler’s vision, the ‘classical’ picture of unified agents, or something else entirely, and we need to have a plan for ensuring things go well in either eventuality.
   Read more: Book Review – Reframing Superintelligence (Slate Star Codex).
   Read more: Why Tool AIs Want to Be Agent AIs (Gwern).

####################################################

Tech Tales:

The Instrument Generator

The instrument generator worked like this: the machine would generate a few seconds of audio and humans would vote on whether they liked or disliked the generated music. After a few thousand generations, the machine would come up with longer bits of music based on the segments that people had expressed an inclination for. These bits of music would get voted on again until an entire song had been created. Once the machine had a song, the second phase would begin – what people took to calling The Long Build. Here, the machine would work to synthesize a single, predominantly analog instrument that could create the song people had voted for. The construction process took anywhere between a week and a year, depending on how intricate and/or inhuman the song was – and therefore how intricate the generated instrument needed to be. Once the instrument was created, people would gather at their computers to tune in to a global livestream where the instrument was unveiled in a random location somewhere on the Earth. These instruments would subsequently become tourist attractions in their own right, and a community of ‘song tourers’ formed who would travel around the world, using the generated inhuman instruments as their landmarks. In this way, AI helped humans find new ways to discover their own world, and allowed them a sense of agency when supervising the creation of new and unexpected things.

Things that inspired this story: Musical instruments; generative design; exhibitions; World’s Fair(s); the likelihood of humans and machines co-generating their futures together.






via https://AIupNow.com

Jack Clark, Khareem Sudlow