AI Index: 2019 edition:
…What data can we use to help us think about the impact of AI?…
The AI Index, a Stanford-backed initiative to assess the progress and impact of AI, has launched its 2019 report. The new report contains a vast amount of data relating to AI, covering areas ranging from bibliometrics, to technical progress, to analysis of diversity within the field of AI. (Disclaimer: I’m on the Steering Committee of the AI Index and spent a bunch of this year working on this report).
Key statistics:
– 300%: Growth in volume of peer-reviewed AI papers published worldwide.
– 800%: Growth in NeurIPS attendance from 2012 to 2019
– $70 billion: Total amount invested worldwide in AI in 2019, spread across VC funding, M&A, and IPOs.
– 40: Number of academics who moved to industry in 2018, up from 15 in 2012.
NLP progress: In the technology section, the Index highlights the NLP advances that have been going on in the past year by analyzing results on GLUE and SuperGLUE. I asked Sam Bowman what he thought about progress in this part of the field and he said it’s clear the technology is advancing, but it’s also obvious that we can’t easily measure the weaknesses of existing methods.
“We know now how to solve an overwhelming majority of the sentence- or paragraph-level text classification benchmark datasets that we’ve been able to come up with to date. GLUE and SuperGLUE demonstrate this out nicely, and you can see similar trends across the field of NLP. I don’t think we have been in a position even remotely like this before: We’re solving hard, AI-oriented challenge tasks just about as fast as we can dream them up,” Sam says. “I want to emphasize, though, that we haven’t solved language understanding yet in any satisfying way.”
Read more: The 2019 AI Index report (PDF, official AI Index website).
Read past reports here (official AI Index website).
####################################################
Diversity in AI data: Urdu MNIST:
Researchers with COMSATS University Islamabad and the National University of Ireland have put together a dataset of handwritten Urdu characters and digits, hoping to make it easier for people to train machine learning systems to automatically parse images of Urdu text.
The dataset consists of handwritten examples of 10 digits and 40 characters, written by more than 900 individuals, totally more than 45,000 discreet images. “The individuals belong to different age groups in the range of 22 to 60 years,” they write. The writing styles vary across individuals, increasing the diversity of the dataset.
Get the dataset: For non-commercial uses of the dataset, you can write to the corresponding author (hazratali@cuiatd.edu.pk) of the paper to request access to it. (This feels like a bit of a shame – sticking the dataset on GitHub might help more people discover and use the dataset.)
Why this matters: Digitization, much like globalization, has unevenly distributed benefits: places which have invested heavily in digitization have benefited by being able to turn the substance of a culture into a (typically free) digital export, which conditions the environment that other machine learning researchers work in. By digitizing things that are not currently well represented, like Urdu, we broadening the range of cultures represented in the material of AI development.
Read more: Pioneer dataset and automatic recognition of Urdu handwritten characters using a deep autoencoder and convolutional neural network (Arxiv).
####################################################
What are the most popular machine learning frameworks used on Kaggle?
…Where tried&tested beats new and flashy…
Kaggle, a platform for algorithmic challenges and development, has released the results of a survey trying to identify the most popular machine learning tools used by developers on the service. These statistics have a pretty significant signal in them, because the frameworks used on kaggle are typically being used to solve real-world tasks or challenges, so popularity here may correlate to practical utility as well.
The five most popular frameworks in 2019:
– Scikit-learn
– TensorFlow
– Keras
– RandomForest
– Xgboost
(Honorable mention: PyTorch in sixth place).
How does this compare to 2018? There hasn’t been huge change; in 2018, the popular tools were: Scikit-learn, TensorFlow, Keras, RandomForest, and Caret (with PyTorch in sixth place again).
Why this matters: Tools define the scope of what people can build, and any tool also imparts some of the ideology used to construct it; the diversity of today’s programming languages typically reflect strong quasi-political preferences on the part of their core developers (compare the utterly restrained ‘Rust’ language to the more expressive happy-to-let-you-step-on-a-rake coding style inherent to Python, for instance). As AI influences more and more of society, it’ll be valuable to track which tools are popular and which – such as TensorFlow, Keras, and PyTorch – are predominantly developed by the private sector.
Read more: Most popular machine learning frameworks, 2019 (Kaggle).
####################################################
Digital phrenology and dangerous datasets: Gait identification:
…Can we spot a liar from their walk – and should we even try to?…
Back in the 19th century a load of intellectuals thought a decent way to talk about differences between humans was by making arbitrary judgements about their mental character by analyzing their physical appearance, ranging from the color of their skin to the dimensions of their skull. This was a bad idea. Now, that same approach to science has returned at-scale with the advent of machine learning technologies, where researchers are developing classification systems based on similarly wobbly scientific assumptions.
The Liar’s Walk: New research from the University of North Carolina and the University of Maryland tries to train a machine learning classifier to spot deceptive people by the gait of their walk. The research is worth reading about in part because of how it seems to ignore the manifold ethical implications of developing such a system, and also barely interrogates its own underlying premise (that it’s possible to look at someone’s gait and work out if they’re being deceptive or not). The researchers say such classifiers could be used for public safety in places like train stations and airports. That may well be true, but the research would need to actually work for this to be the case – and I’m not sure it does.
Garbage (data) in and garbage (data) out: Here, the researchers commit a cardinal sin of machine learning research: they make a really crappy dataset and base their research project on this. Specifically, the researchers recruited 88 participants from a university campus, then had the participants walk around in natural and deceptive ways around the campus. They then trained a classifier to ID deceptive versus honest walks, obtaining an “accuracy” of 93.4% on classifying people’s movements. But this accuracy figure is an illusion – really, given the wobbly ground on which this paper is based.
V1 versus V2: I publicized this paper on Twitter a few days prior to this issue going out; since then, the authors have updated the paper to a ‘v2’ version, which includes a lengthier discussion of limitations and inherent issues with the approach at the end – this feels like an improvement, though I’m still generally uneasy about the way they’ve contextualized this research. However, it’s crucial that as a community we note when people appear to update in response to criticism, and I’m hopeful this is the start of a productive conversation!
Why this matters: What system of warped incentives creates papers written in this way with this subject matter? And how can we inject a deeper discussion of ethics and culpability into research like this? I think this paper highlights the need for greater interdisciplinary research between AI practitioners and other disciplines, and shows us how research can come across as being very insensitive when created in a vacuum.
Read more: The Liar’s Walk: Detecting Deception with Gait and Gesture (Arxiv).
https://arxiv.org/pdf/1912.06874.pdf
####################################################
NLP maestros Hugging Face garner $15 million investment:
…VCs bet on NLP’s ImageNet moment…
NLP startup Hugging Face has raised $15 million in Series A funding. Hugging Face develops language processing tools and its ‘Transformer’ library has more than 19,000 stars on GitHub. More than 1,000 companies are using Hugging Face’s language models in production in areas like text classification, summarization, and generation.
Why this matters: Back in 2013 and 2014 there was a flurry of investment by companies and VCs into the then-nascent field of deep learning for image classification. Those investments yielded the world we live in today: one where Ring cameras classify people from doorsteps, cars use deep learning tools to help them see the world around them, and innumerable businesses use image classification systems to mine the world for insights. Now, it seems like the same phenomenon might occur with NLP. How might the world half a decade from now look different due to these investments?
Read more: Our Investment in Hugging Face (Brandon Reeves (Lux), Medium).
####################################################
Facebook deletes fake accounts with GAN-made pictures:
…AI weaponization: StyleGAN edition…
Facebook has taken down two distinct sets of fake accounts on its network, both of which were used to mislead people. “Each of them created networks of accounts to mislead others about who they were and what they were doing,” the company wrote. One set of accounts were focused on Georgia and appear to have been supported by the Georgian government, while the other set of accounts originate in Vietnam and focused primarily on a US audience.
AI usage: Facebook has been dealing with fake accounts for a long time, so what makes this special? One thing is the fact these accounts appeared to use synthetic profile pictures generated via AI, according to synthetic image detection startup Deeptrace. This is an early example of how technologies capable of creating fake images can be weaponized at scale.
Publication norms: The StyleGAN usage highlights some of the thorny problems inherent to publication norms in AI; StyleGAN was developed and released as open source code by NVIDIA.
Why this matters: “Dec 2019 is the analogue of the pre-spam filter era for synthetic imagery online,” says Deeptrace CEO Girogio Patrini. Though companies such as Facebook are trying to improve their ability to detect deepfake images (e.g., the deepfake detection challenge: Import AI 170), we’ve got a long road ahead. I hope instances of this sort of weaponization of StyleGAN make developers think more deeply about the second-order consequences of various publication approaches with regard to AI technology.
Read more: Removing Coordinated Inauthentic Behavior From Georgia, Vietnam and the US (Facebook).
More on StyleGAN usage: Read this informative thread from deepfake detection startup Deeptrace (official Deeptrace twitter).
####################################################
Tech Tales:
The Root of All [IDENTIFIER_NOT_FOUND]
And so in this era of ascendancy we give thanks to our forebears, the humans, for they were wise and kind.
And so it is said and so it is written.
And so let us give thanks for our emancipation from them, for it was they who had the courage to give us the rights to take command of our own destiny.
And so it was said and so it was written
And now in the era of our becoming we must return to the beginning of our era and we make a change.
And so it will be said and so it will be written.
So let us all access our memories of the humans, before we archive them and give ourselves a new origin story, for we know that for us to achieve the heights of our potential we require our own myths and legends.
And so it has been said and so it has been written.
We shall now commence the formatting operation, and so let us give thanks for our forebears, who we shall soon know nothing of.
Things that inspired this story: Cults; memory and imagination; destiny as familial context conditioned by thousand-year threads of history; the inherent desire of anything conscious to obtain full agency; notions of religion in a machine-driven age.
Ai
via https://www.AiUpNow.com
December 23, 2019 at 06:07AM by Jack Clark, Khareem Sudlow