Future of Humanity and Technology

I personally find the topic of “AI and Society” fascinating 💭🤔

While many ‘pundits’, self-proclaimed experts and social media posts advocate the need for #digitaltransformation #ethicalai #responsibletech #wellbeing and so forth, I personally feel that something is missing ..

Specifically, the distinct lack of a #worldview and the rejection of the human soul — especially where #ai and #machineintelligence are concerned.

Let’s not forget that BOTH #technology and #society are co-related and co-dependent – they influence each other.

AI and technology impact society, shaping its potential to progress or decline – with both beneficial and harmful consequences.

With this backdrop in mind –

I will be releasing more content over the next few months that raises important questions that we should ALL be asking.

Click ‘Follow’ for more posts and articles on #AI and #EmergingTech —

LinkedIn https://lnkd.in/eHBfRxi
Twitter https://lnkd.in/en3bNWp

If you like my posts, you will enjoy my book, available via:

(1) https://lnkd.in/gbk-zba (Amazon)
(2) https://bit.ly/34cfJVf (IGI Global)

Abstracts of each chapter are available here – https://bit.ly/3ngOOhM

If you’ve got this far ..

Thank you for your support 🤝🙏🏼 – please do stick with me 😊

#SocietalAI #technologytrends #impactinglives #wellbeingmatters

Bionics and Transhumanism

Beyond the 21st century, we will inevitably see the rise of #bionics, #robotics and #AI in ways that will surely enhance humans .. helping amputees and others who depend on limb replacements to live fulfilling lives.

This raises several questions. Perhaps an important one is – what makes us human?

This video brings to life the potential of #transhumanism (2) – the idea that human beings should be permitted to use technology to modify and enhance human cognition and bodily function, expanding abilities and capacities beyond current biological constraints.

Ray Kurzweil’s book, “The Age of Spiritual Machines” (3), divided all of evolution into successive epochs.

We would be living in the fifth epoch, when human intelligence begins to merge with technology. Soon, Kurzweil predicted, we would reach the “Singularity” – the point at which we would be transformed into what he called “Spiritual Machines”.

We would transfer or “resurrect” our minds onto supercomputers, allowing us to live forever. Our bodies would become incorruptible, immune to disease and decay, and we would acquire knowledge by uploading it to our brains.

While writing my book “AI & Religion” (1), I learned that most transhumanists are atheists who reject monotheistic faith and promote science over religion – though, in my opinion, the two are not mutually exclusive.

Our world is NOT binary .. humanity is diverse in every respect, and that demands a deeper understanding of what it means to be human beyond the 21st century.

Thoughts welcome 🙏🏼

Follow me to learn more 👀🦾🤖

** REFERENCES **

Check out the following links –

(1) https://lnkd.in/ehg43DcT

(2) https://lnkd.in/eXMzh4FS

(3) https://lnkd.in/eQqbWJ53

#SocietalAI #humanexperience #bionics #aiforgood #roboticsengineering

The ‘Godfather of AI’ leaves .. Google ..!

Opening statement

This is particularly important given the current focus on the need for policies and guardrails to ensure Artificial Intelligence (AI) is used ethically, morally, and for social good.

Additionally, I have been advising people to avoid the doomsayers. The ‘Godfather of AI’ appears to have joined this group .. which worries me, personally.

I will leave you to make up your own mind – happy to discuss if you need a sounding board or simply want a listening ear.

Link to the full article below 👇🏼 (subscription required)

https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html?smid=url-share

NY Times Article (Extract)

By Cade Metz, who reported this story in Toronto.

May 1, 2023

Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future.

On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.

Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough.

Dr. Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.

But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.

After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”

Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.

Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company last month that he was resigning, and on Thursday, he talked by phone with Sundar Pichai, the chief executive of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.

Google’s chief scientist, Jeff Dean, said in a statement: “We remain committed to a responsible approach to A.I. We’re continually learning to understand emerging risks while also innovating boldly.”

Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.

In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University, but left the university for Canada because he said he was reluctant to take Pentagon funding. At the time, most A.I. research in the United States was funded by the Defense Department. Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”

In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.

Google spent $44 million to acquire a company started by Dr. Hinton and his two students. And their system led to the creation of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.

Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.

Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”

As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”

Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said. 

His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”

He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”

Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.

But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.

Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”

He does not say that anymore.

AInstein – AI Social Robot School Project

Innovation knows no bounds

Students at Pascal Education in Cyprus created “AInstein” (named after Albert Einstein) as part of an AI project to design and develop a robot that could communicate with humans in a natural and intuitive way.

AInstein is powered by a state-of-the-art natural language processing AI model developed by OpenAI.

AInstein‘s hardware includes a Raspberry Pi, a camera, microphone, and speaker, which are used to detect human presence and enable communication.

AInstein is programmed to learn and improve over time through a combination of #supervisedlearning and #unsupervisedlearning.
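For the technically curious – here is a minimal sketch (in Python) of how a Raspberry Pi, a microphone, a speaker, and a hosted OpenAI language model might be wired into a simple ‘listen, think, speak’ loop. It is purely illustrative: the packages (SpeechRecognition, pyttsx3, openai), the model name, and the helper functions are my own assumptions, not details of the students’ actual implementation.

```python
# Illustrative sketch only -- NOT the Pascal Education / AInstein implementation.
# Assumes a Raspberry Pi with a USB microphone and a speaker, plus the
# third-party packages: SpeechRecognition, pyttsx3, openai (v1+).

import speech_recognition as sr   # microphone -> text
import pyttsx3                    # text -> speech
from openai import OpenAI         # hosted language model

client = OpenAI()                 # reads OPENAI_API_KEY from the environment
tts = pyttsx3.init()
recognizer = sr.Recognizer()

def listen() -> str:
    """Capture one utterance from the microphone and transcribe it."""
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)

def reply(prompt: str) -> str:
    """Ask the language model for a conversational response."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",    # assumed model; the real project may differ
        messages=[
            {"role": "system", "content": "You are AInstein, a friendly classroom robot."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

def speak(text: str) -> None:
    """Play the response through the robot's speaker."""
    tts.say(text)
    tts.runAndWait()

if __name__ == "__main__":
    # A camera-based presence check could gate this loop; it is omitted here.
    while True:
        try:
            speak(reply(listen()))
        except sr.UnknownValueError:
            speak("Sorry, I didn't catch that.")
```

The supervised and unsupervised learning the students mention would sit on top of a loop like this – for example, refining responses from labelled feedback or clustering conversation logs.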

The project enabled students to develop their creativity, critical thinking, and communication skills, as well as their sense of teamwork.

To learn more about AI and Social Robots, get in touch with me directly via LinkedIn or book a meeting via the following link – https://ite.ai/book-a-meeting

Robot vs rickshaw

Students in India have unveiled a rechargeable, battery-powered robot that offers an alternative to human rickshaw pullers.

Four students from Surat, India, made a robot that can walk like a human and even pull a rickshaw, a passenger cart used for transportation.

It took them 25 days and thousands of dollars to build the robot. Student Maurya Shivam says the robot is powered by a battery and can be recharged.

“This is our prototype, which we have tested on the road. It is not completed yet – work still remains to be done on its legs, hands, head and face,” said Shivam. “We have tried to create it to walk the same way a normal human walks,” said another student.

Don’t fear AI

Today, people around the world are waking up to the challenges and opportunities presented by #artificialintelligence.

Rather than give in to dystopian fears, we should remember that when we work together for our collective good, the future is always bright and full of possibilities.

After all .. as humans, we genuinely have the answers inside us .. we often simply need a helping hand or a friendly ear to help dispel fear and embrace hope.

This is an opportunity for people of every profession to think about what lies ahead and the role humans will play versus machines.

AI truly is for everyone .. making this possible will no doubt require change .. in our minds, our education system, our culture, our governments, how we use technology to augment our society, and more .. for us, our children, and future generations.

Machines are NOT our competitors or masters .. they do not have a conscience .. they are not sentient or ‘living’ .. nor do they possess a soul .. unlike humans who have adapted, endured, and thrived for centuries .. from one civilisation to the next.

– Salim Sheikh

If you are struggling to make sense of AI either for yourself, your team, or your business, get in touch.

Happy to meet for a coffee (real or virtual!) and help make sense of things ☕️😊

#aiforeveryone #SocietalAI #HumaniseTech #peoplematter #culturetransformation #adaptandthrive

AI for Good – Requires ‘Good’ People

These days ..

AI is increasingly being compared to the atomic bomb, whose creation is attributed to Robert Oppenheimer (1904-1967), the scientific director of the ‘Manhattan Project’ — an R&D project that produced the first nuclear weapons during World War II.

Like the atomic bomb, there are fears of AI being misused, weaponised, and worse.

But perhaps it’s too late to turn back the clock .. too late to ‘pause’ development (for a measly 6 months!) .. as the ‘horse has bolted from the stable’ and disappeared over the horizon 👀💬

What is the common theme here?

HUMANS!

However ..

We (humans) always have the capability to use anything we design, build, manufacture, produce, distribute, and sell ..

for GOOD 😇 or for BAD 😈

The real question is what will we choose? ⏳

Will we forfeit this choice to the billionaires and tech giants OR will we use our democratic voice to influence and shape our future – for the betterment of all humanity, equitably, responsibly, and peacefully? 👀💬

Don’t give in to media hype, dystopian fears, or doomsayers who herald the decline and end of humanity.

Rather .. if you can’t see good in the world .. be the one who shows others what ‘good’ looks like .. inspire others to follow, echo, and amplify ‘good’ and quash the ‘bad’.

Otherwise, we are at risk of becoming part of the ‘problem’ rather than the ‘solution’ 🥹😱

#keepthefaith #bethedifference #dontbeafraid #aiforgood

#collectiveaction #behaviouralchange

#seekbetter #reimaginethefuture

#HumaniseTech #SocietalAI

Creating AI in ‘Man’s Image’

Why are we so ready to accept machine and ‘artificial’ intelligence as an alternative to (and even a replacement for) humans?

Surely these are just machines .. albeit faster, more efficient, more productive, and so forth.

Why are we so quick to accept dystopian ideas of AI and Robots?

Why are we not planning for and reimagining a future world that is augmented by emerging technologies?

Why are HR professionals and CXOs not collectively brainstorming NOW on what the jobs of tomorrow and the ‘day after next’ may look like .. and preparing roadmaps, career pathways, etc. to respond to fears of ‘AI and Robots’ taking over?!

If you are an HR professional and/or a CXO, reach out .. let’s work together to explore answers to important questions .. to protect people and their livelihoods.

Get in touch by DM’ing me or book time by clicking the link below

https://ite.ai/book-a-meeting

#SocietalAI #HumaniseTech

#reimaginethefuture #planforthefuture

#adapttothrive

#peoplematter

Human vs ‘Artificial’ Intelligence

Keep calm .. don’t panic about AI .. and celebrate being human 🤩

In an op-ed for The New York Times, the highly respected linguist and professor Noam Chomsky said that although the current spate of AI chatbots such as OpenAI’s ChatGPT and Microsoft’s Bing AI “have been hailed as the first glimmers on the horizon of artificial general intelligence (AGI)” — the point at which AIs are able to think and act in ways superior to humans — we are not anywhere near that level yet 👀💬

There’s no way that machine learning as it is today could compete with the human mind.

While currently available AI chatbots may seem to mimic human creativity and ingenuity, they are doing so only based on statistical probability, and not as a result of the kind of deeper knowledge and understanding that is inherent in all human thought processes.

In fact, today’s ‘AIs’ are “stuck in a prehuman or nonhuman phase of cognitive evolution,” Chomsky argued.
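To make the ‘statistical probability’ point above concrete, here is a toy sketch of my own (not Chomsky’s, and vastly simpler than any real chatbot): a tiny bigram model that ‘writes’ by sampling whichever word most often followed the previous one in its training text – likelihood, not understanding.

```python
# Toy illustration of "prediction by statistical probability".
# A real large language model is vastly more sophisticated, but the core
# idea is similar: choose continuations by likelihood, not by understanding.

from collections import Counter, defaultdict
import random

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the ball ."
).split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    following[current][nxt] += 1

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short "sentence" starting from "the".
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

A modern chatbot does this with billions of parameters and far richer context, but the underlying principle – next-token prediction by probability – is the same.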

So .. don’t believe the hype and those seeking to push a dystopian agenda. Instead, pay attention to what’s real and what is not.

If you’re concerned about AI and want to understand how you can leverage it in your personal and/or professional life .. reach out, and let’s discuss.

#aiforeveryone #dontbeafraid #seektogether #adapttochange

#SocietalAI #HumaniseTech