CHATGPT AND AI
2023 JULY ISSUE
Written by Andrew Sia
From the Desk of the Publisher
ChatGPT is an artificial-intelligence chatbot developed by OpenAI. It interacts through natural human dialogue and can perform complex language tasks, including automatic text generation, question answering, summarization and more. In automatic text generation, for example, ChatGPT can produce text in the style of the input, such as scripts, songs and plans; in question answering, it can generate answers to the questions it is given. It can also write and debug computer programs.
ChatGPT can write articles that read as if written by real people and give detailed, clear answers in many fields of knowledge, and it quickly gained attention. It has proved competent even at knowledge-based jobs that AI was formerly thought unable to replace, including work in the financial and white-collar job market. The impact is considerable, but its uneven factual accuracy is widely seen as its major flaw, and results shaped by ideology in the model's training data must be carefully corrected.
Because the AI behind ChatGPT has been developing so fast, tech leaders worry that it may get out of hand, and they have called for a pause so that safety protocols can be set up. That is where we are now, and with this article we call for your alertness.
Everybody is talking about ChatGPT, so how can we just ignore it? We did some research and found that ChatGPT is an artificial-intelligence (AI) chatbot developed by OpenAI that was launched in November 2022. It is built on top of OpenAI's GPT-3.5 and GPT-4 families of large language models (LLMs) and has been fine-tuned using both supervised and reinforcement learning techniques, an approach to transfer learning.
We enquired about OpenAI and learned that it is an American artificial-intelligence (AI) research lab. I made a trial run and typed in: "Bible study for Proverbs Chapter 2," and in a split second it walked me through the chapter, covering verses 1-5, 6-9, 10-15, 16-19 and 20-22. It took ChatGPT no time at all to do this work.
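For readers curious how such a prompt is sent to ChatGPT programmatically rather than through the website, here is a minimal sketch using OpenAI's official `openai` Python package. The model name and system prompt are our own assumptions for illustration; an API key must be supplied via the `OPENAI_API_KEY` environment variable.

```python
def build_messages(prompt: str) -> list:
    """Wrap a user prompt in the chat-message format the API expects."""
    return [
        {"role": "system", "content": "You are a helpful Bible-study assistant."},
        {"role": "user", "content": prompt},
    ]


def ask_chatgpt(prompt: str) -> str:
    """Send a single prompt to the chat API and return the reply text.

    Requires the `openai` package and an OPENAI_API_KEY environment variable.
    The model name below is an assumption; any available chat model works.
    """
    from openai import OpenAI  # imported here so build_messages stays dependency-free

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=build_messages(prompt),
    )
    return response.choices[0].message.content
```

Calling `ask_chatgpt("Bible study for Proverbs Chapter 2")` would return a walkthrough like the one described above; the reply text varies from run to run, which is part of what makes these systems feel humanlike.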
Earlier on I had done a study on my own, reading these passages from different Bibles, and came up with a study covering verses 3-5, 6-8, 9-11, 12-15, 16-19 and 20-22. It took me more than an hour to do this work.
Both the chatbot and I came up with the same overall message, summed up in verses 20 through 22: "He taught us to walk the ways of good men and keep to the paths of righteousness. We can remain in this land, but the wicked ones and the unfaithful ones would be cast out."
My first reaction was one of "future shock," the term coined by Alvin Toffler, a futurist and social thinker. His book Future Shock, published in 1970, was about the future arriving today, and it continues to influence social trends and thought. After today's experience I feel overwhelmed by the power of AI, and I have started asking myself what is left for us to do, especially since we are talking about "the future of today."
Fortunately, I have received legal advice from my former lawyers reminding me of the risks and issues in using an AI chatbot's generative systems. Without question, businesses and institutions are using ChatGPT to compose essays, studies and letters. Like any machine-learning model, these systems are subject to accuracy and bias risks: they may contain errors, inaccuracies or biases, and the training data may contain examples that apply to one type of person but are not accurate for another. We must also understand that the algorithms used to train the model may themselves be biased.
ChatGPT is a very powerful tool, but it should not be relied upon as an infallible authority. Just because content is openly available does not mean we can ignore the possibility of copyright infringement. We find that OpenAI has shifted that responsibility to individual users: it has stated that it is not liable for any damages arising from use of the model, and that the model comes with no warranty of accuracy or fitness for any particular purpose.
This brings us to an open letter signed by 1,000 technology leaders and researchers who came together earlier, under the letterhead of the nonprofit Future of Life Institute, to ask AI labs to pause development of the most advanced systems. The signers included Elon Musk; Steve Wozniak, co-founder of Apple; Andrew Yang, a 2020 presidential candidate; Rachel Bronson, president of the Bulletin of the Atomic Scientists; and Gary Marcus, an entrepreneur and academic.
Other AI-powered chatbots, such as Microsoft's Bing and Google's Bard, can also carry on humanlike conversations, create essays on an endless variety of topics and perform complex tasks, including writing computer code. Every one of these tech giants is rushing to develop ever more powerful chatbots and become the next leader of the industry.
The open letter called for a pause in the development of AI systems more powerful than GPT-4, which was introduced in March 2023 by OpenAI. It called for shared safety protocols, and it went further: if the pause is not enacted, governments should step in and impose a moratorium. But we know the chances are slim, as politicians have little understanding of the technology, and asking lawmakers to regulate AI could be a very long and tedious process. It is equally difficult in Europe, where regulators have failed to recognize the possible damage created by AI, including facial-recognition systems, and have instead lingered too long on health, safety and individual rights.
Since 2018, all of these big tech companies have been building neural networks that learn from enormous amounts of digital text: books, Wikipedia, chat logs and other information culled from the internet. These gigantic networks are known as large language models, or LLMs. Having learned from this material, LLMs can generate text on their own and use it to post tweets, produce term papers, develop computer programs and even carry on conversations successfully.
Researchers describe the mistakes and mixed-up information these models produce as hallucinations, and people often fail to know what is right or wrong. The concern is that this kind of disinformation can spread within seconds and, God forbid, may end up in the hands of people with evil intentions who use it to create hoaxes in cyberspace.
On social media, bots are used to mimic human accounts, for both good and bad purposes. On Twitter, bots make up around 15% of accounts; on Facebook, approximately 5%, or about 90 million accounts.
Another social-media platform, TikTok, claims to have 1 billion monthly active accounts. But how many are fake accounts or bots? Analysts have reported that up to 97% of some traffic coming through TikTok could be detected as automated bot activity generated by its own software. That figure may sound too high, but given where the platform comes from, it could without doubt be turned into a propaganda machine. Perhaps we now know why the U.S. government has shown concern that it could endanger national security.
This letter, with its 1,000 signatures, brings together the voices of the world's top tech experts calling for a six-month global pause on AI development. But it will be hard to persuade the wider community of companies and researchers to put a moratorium in place.
Of special note: OpenAI's CEO, Sam Altman, didn't sign the letter.