AI is the hottest global area of innovation right now. There is just so much going on that it would be impossible to keep track of it all.
In this post, we will explore some of the most exciting, interesting, and troubling recent news when it comes to artificial intelligence.
So, let’s take a look!
Can AI Improve the Adoption Process?
Adoption-Share is a non-profit that aims to use AI to help improve the adoption process in the US.
Founder and former social worker Thea Ramirez partnered with computer scientist Gian Gonzaga (who previously worked on eHarmony) to create Family-Match.
This tool uses AI to help agencies find the best parents for children in foster care.
As foster care is notoriously plagued with issues, the pair were hopeful that the tool could improve the process for both parents and children.
Unfortunately, upon investigation, results so far are less than promising.
An Associated Press (AP) investigation found that Family-Match produced limited results in the states where it has been used. The results raise questions about the ability of AI to solve complex human problems.
Social workers in several states said that the AI often failed to make useful matches, leading them to families that did not actually want to adopt children.
In Tennessee, technical issues meant that social workers could not use Family-Match at all, while better results were reported in Florida. The tool is currently in use in both Florida and Georgia.
ChatGPT Detector is Incredibly Accurate
Since ChatGPT has become a constant source of headlines, many in the academic world have been wondering how to cope with it. Numerous students have been caught using ChatGPT to essentially plagiarize entire papers.
However, this will likely become harder and harder to do with time, as new tools emerge to counter it.
According to a study published on 6 November in Cell Reports Physical Science, a new AI detection tool has outperformed two other AI detectors, proving highly accurate specifically on chemistry papers.
So far, most of these tools have been developed to detect AI-generated text in general, with limited success. However, this work shows that it is possible to increase a detector's accuracy by niching down to a specific type of writing.
“Most of the field of text analysis wants a really general detector that will work on anything,” says co-author Heather Desaire, a chemist at the University of Kansas in Lawrence. But by making a tool that focuses on a particular type of paper, “we were really going after accuracy”.
So far, their results are very promising. When tested on introductions written by people and ones created by AI from the same journals, the tool picked out ChatGPT-3.5-written sections based on titles 100% of the time.
For the ChatGPT-generated introductions based on abstracts, accuracy was slightly lower, but 98% is still very impressive. The tool worked just as well on text written by the latest version, ChatGPT-4.
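The study does not publish its model here, but the general idea of "niching down" can be sketched: instead of training on all text, extract a handful of stylistic features tuned to one document type and score them. The features and weighting below are purely illustrative assumptions for demonstration; they do not reproduce the paper's actual feature set or classifier.

```python
import re

# Illustrative stylistic features for scientific prose; the published
# detector uses its own hand-picked features, which these do NOT reproduce.
def extract_features(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    words = text.split()
    return {
        # Average words per sentence.
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # Density of parentheses, a stand-in for inline citations.
        "paren_density": text.count("(") / max(len(words), 1),
        # Rate of connectives that LLM output is often said to overuse.
        "connective_rate": sum(
            w.lower().strip(",.") in {"however", "moreover", "furthermore"}
            for w in words
        ) / max(len(words), 1),
    }

def score(features: dict, weights: dict) -> float:
    """Weighted sum of features; a trained model would learn these weights
    from labeled human-written and AI-written introductions."""
    return sum(weights[k] * v for k, v in features.items())
```

In a real pipeline, the weights (or a full classifier on top of the features) would be fit to labeled examples from the target niche, which is where the accuracy gain over a general-purpose detector comes from.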
AI is Changing Brain Research
Today at Neuroscience 2023, the annual meeting of the Society for Neuroscience, artificial intelligence will be a big topic of discussion.
There have been numerous new and exciting findings when it comes to the potential to apply AI models to the field of neuroscience.
“Advances in AI and machine learning are transforming brain research and clinical treatments,” said Terry Sejnowski, the Francis Crick Professor at the Salk Institute for Biological Studies and distinguished professor at UC San Diego, who will moderate the press conference.
The new findings include using AI to identify and better treat depression and other psychiatric disorders, cognitive impairments, Alzheimer’s, and dystonia in children.
According to Sejnowski:
“Brain recordings produce huge datasets that can be analyzed with machine learning. Predictive modeling, machine-brain interfaces, and neuroimaging/neuromodulation are areas with particular promise in developing new therapeutics and treatment plans for patients.”
The UK Holds the First-Ever Global Artificial Intelligence Safety Summit
On November 1st and 2nd, 2023, the first-ever Artificial Intelligence Safety Summit was held in the UK. The goal of the summit was to work toward global safety standards to manage the AI surge.
UK Prime Minister Sunak underscored the urgency for global collaboration in the governance of AI, a technology that defies national boundaries and demands a collective regulatory approach.
As a result of the summit, the Bletchley Declaration was created: a commitment in which 29 leading nations, including Germany, the United States, and China, pledge to enhance cooperation on the development and oversight of AI.
There have been various ‘soft law’ declarations and agreements such as this around the world, going back at least to 2019.
The OECD Principles on AI, adopted by member countries in 2019, call for AI that operates ethically and for the benefit of society.
These principles demand that AI systems adhere to legal statutes, human rights, and democratic values. They also must include mechanisms for human intervention in cases of emergency.
They also advocate for a high level of transparency and responsible disclosure, so that users can understand and challenge AI-driven outcomes.
However, it is important to remember that while numerous agreements exist, nothing legally binding has been established at the international level. On legal grounds, AI companies can still do largely as they please in many countries.
Prior soft law agreements include:
- The AI HLEG Ethics Guidelines for Trustworthy AI developed by the European Commission
- The AI4People Summit Declaration
- The Montreal Declaration for Responsible AI
- The UNESCO Recommendation on the Ethics of Artificial Intelligence
It remains to be seen how AI will truly be regulated globally.
Emory University Will Launch Emory Empathetic AI for Health Institute This Month
The Emory Empathetic AI for Health Institute, to be known as Emory AI.Health, will support the development of artificial intelligence and predictive analytics technologies.
These will be used to improve outcomes for patients with diabetes, heart disease, cancer, and a range of other health conditions. Ultimately, this will improve human health, create economic value, and further social justice by making healthcare more accessible.
Emory will pursue this goal by combining the work of experts from Emory, the Georgia Institute of Technology, and the Atlanta Veterans Affairs (VA) Medical Center.
Emory president Gregory L. Fenves, PhD, said in the press release:
“AI will transform society and at Emory, we want to use these powerful technologies to save and improve lives. We see the power AI has to facilitate healing while improving equitable access to health care.”
A core aspect of the institute’s mission is to promote health equity by reducing care costs while increasing care access and quality.
To start, this work will focus on underserved populations in Atlanta and the surrounding areas. A very interesting part of the Emory approach is that it will prioritize personalized medicine and precision medicine to address racial bias built into clinical trials and AI algorithms.
It is well known that one of the potential dangers of AI is that creators, unaware of their own biases, build those biases into their algorithms, with serious implications for much of the world's population.
According to Anant Madabhushi, PhD, a Robert W. Woodruff professor in the Wallace H. Coulter Department of Biomedical Engineering at Emory and Georgia Institute of Technology:
“There is a critical need to develop dedicated AI-based risk-prediction models for minority patients.”
Are AI-Powered Robot CEOs the Future?
One of the main worries people have about the rise of AI is that companies will replace as many workers as they can with AI to cut costs.
Some fear that this will usher in a rapid global rise in poverty and crime. While most of the conversation revolves around lower-wage workers and lower-level office roles, this project asks whether CEOs can be replaced by automation.
Mika is a collaboration between Hanson Robotics and Polish rum company Dictador. She has been programmed to have the ideal CEO persona.
“I don’t really have weekends — I’m always on 24/7, ready to make executive decisions and stir up some AI magic,” the robot told Reuters in a “video interview.”
Mika is a fully-fledged humanoid robot and the actual CEO of Dictador.
Mika’s responsibilities include a wide range of tasks:
- Identifying potential clients
- Selecting artists for bottle designs
- Data-analysis-driven decision-making
Mika has been programmed to make decisions aligned with the company’s strategic objectives and free of personal bias, ensuring strategic choices that put the interests of Dictador first.
While she does not take part in firing people, which is still controlled by the human leaders at Dictador, she has been the official CEO since last summer.
Final Thoughts on all the Latest in AI
With the ever-changing landscape of artificial intelligence, it is necessary for individuals to stay informed about advances in AI technology and their implications for society.
In all parts of society, many are asking just what AI can do. How can we employ it ethically and to benefit society as a whole? How can it be legislated?
It will be exciting to see what comes next in this ever-evolving area of technology!
What do you think? Comment below.
Since 2009, we have helped create 350+ next-generation apps for startups, Fortune 500s, growing businesses, and non-profits from around the globe. Think Partner, Not Agency.
Find us on social at #MakeItApp’n®