If we give algorithms total control over our decisions, they can influence what we eat and how we behave; over time, our choices may be shaped by an entity we cannot control.


Envision a world where your food is prepared and ready to eat before you even have a chance to think about what you want. The ingredients are sourced, the meal is cooked, and the dishes are cleaned up, all while you relax and enjoy your evening. Sounds good if you are wondering what to cook or order for your next meal, right?

Today's technology stack is still under development and not yet fully integrated. That is not necessarily a bad thing, as it gives us the opportunity to experiment with different technologies and find the best fit for our needs. Full integration of this kind requires the highest level of machine learning and predictive technology. It can already feel weird and intrusive when we talk about a holiday and then see advertisements for it all over Facebook, as if the platform were tracking our conversations and using that information to target us with ads.

Consider this: what if we searched for a vacation and every result led to the same destination, chosen by the system? There would be no room to consider any other options. Scary, isn't it, that we would not be free to choose our own vacation? AI can be like a plane on autopilot, with the destination and the choices along the way made without our input.

An end-to-end world is also presented as more sustainable: technology is used more efficiently, there is less waste, costs come down, and the environment benefits.

Image Credit: Mojahid Mottakin on Unsplash

Let's take, for example:

  • A self-driving car that takes you to your destination without any input from you.
  • A smart home that automatically adjusts the temperature, lighting, and security settings based on your preferences (see the sketch after this list).
  • A healthcare system that tracks your health data and provides personalized recommendations for treatment.
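
To make the smart-home example concrete, here is a minimal sketch of how such a system might translate stored preferences into device settings. It is only an illustration: the preference values, thresholds, and device names are hypothetical, chosen to show automation driven by a stored profile rather than by moment-to-moment human input.

    # Hypothetical sketch: a smart home applying stored user preferences.
    # The preference values and device settings below are illustrative only.

    from dataclasses import dataclass
    from datetime import datetime


    @dataclass
    class Preferences:
        day_temp_c: float = 22.0      # preferred daytime temperature
        night_temp_c: float = 18.0    # preferred night-time temperature
        lights_dim_hour: int = 23     # hour after which lights are dimmed
        arm_security_hour: int = 22   # hour after which security is armed


    def decide_settings(prefs: Preferences, now: datetime) -> dict:
        """Translate a user's stored preferences into concrete device settings."""
        is_night = now.hour >= 21 or now.hour < 6
        return {
            "thermostat_c": prefs.night_temp_c if is_night else prefs.day_temp_c,
            "lights": "dim" if now.hour >= prefs.lights_dim_hour else "normal",
            "security": "armed" if now.hour >= prefs.arm_security_hour else "standby",
        }


    # Example: at 23:30 the home lowers the heat, dims the lights and arms security.
    print(decide_settings(Preferences(), datetime(2023, 4, 1, 23, 30)))

The point of the sketch is the direction of control: once the profile exists, the decisions flow from the system, not from the person in the room.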

Call for Regulation

In the cases above, if we give algorithms total control over our decisions, they can influence what we eat and how we behave. Over time, all of our choices may be shaped by the judgments of an entity that we cannot control.

With this in mind, nearly 1,500 technology leaders are calling for a six-month halt to the development of AI systems more powerful than GPT-4. They are demanding that the content that feeds these systems' decisions be regulated to ensure transparency, robustness, ethics, and traceability.

The open letter, “Pause Giant AI Experiments: An Open Letter”, is published on the website of the Future of Life Institute. The letter calls for developers to work instead on making today’s AI systems “more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”

“The letter isn’t perfect, but the spirit is right: we need to slow down until we better understand the ramifications,” said Gary Marcus, a professor at New York University who signed the letter. “The big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialize.”

Image Credit: D koi on Unsplash

While these labs can assist with content creation, OpenAI’s GPT-4, the fourth iteration of its Generative Pre-trained Transformer, poses a greater danger to society as a whole. Machine learning combined with powerful hardware systems has the potential to alter the course of human civilization more rapidly than we can comprehend. In just 50 years, we have gone from basic dial phones to mobile phones, to the point where the physical handset itself is becoming optional.

The days of AI being confined to labs and research centers are numbered. It is not a matter of if, but when, AI will become a part of our everyday lives. OpenAI, a company backed by Microsoft, has the potential to revolutionize the way we interact with technology. Its models are capable of generating human-like text, composing poems, and even holding conversations. While this is all very exciting, it is important to remember that AI is still in its early stages of development. There are risks associated with AI, and it is important to make sure that these risks are mitigated before AI is widely adopted.

In a letter issued by the Future of Life Institute, a group of leading AI experts warned that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Image Credit: Tesla

Transparency Calls

Unlike Tesla and Apple, both of which use AI inside proprietary, closed products, the Musk Foundation is a major donor to the non-profit pushing to create a transparency register. The foundation, along with Founders Pledge and the Silicon Valley Community Foundation, has been instrumental in backing that effort.

Good intentions notwithstanding, it is important to note that Tesla had to recall 362,000 US vehicles to address a malfunction in its driver-assistance software. In light of this, while pausing is virtually impossible, fixing forward and building in checkpoints are essential.

Image Credit: Levart_Photographer on Unsplash

Parallel Leadership

The competition around OpenAI is intensifying, and data acquisition is becoming increasingly important in order to connect the real and digital worlds. However, AI also has the potential to inundate our information repositories with misinformation and propaganda, making it difficult to distinguish truth from falsehood.

Britain is pushing for an “adaptable” regulatory framework around AI. China and India are taking a more regulated approach.

AI Race and Democratisation

AI development is aimed at building learning capabilities in order to achieve end-to-end connectivity.

This means that there is no disconnect between the different parts of the technology stack. For example, there is no need to switch between different software programs to complete a task. All of the necessary tools are available in one place, and they work together seamlessly. This makes it much easier for users to get things done, and it also reduces the risk of errors.

In an end-to-end world, technology is also more personalized. The system learns about the user’s preferences and habits, and it adapts to provide a more tailored experience. This makes technology more useful and engaging, and it also helps to improve the user’s productivity.
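
As a rough illustration of that idea, here is a minimal sketch of a hypothetical system that records which options a user picks over time and then reorders future suggestions accordingly. The choices and the simple frequency-based weighting are invented for the example; real systems use far richer signals, but the loop is the same: observe, learn, and quietly narrow what the user is shown next.

    # Hypothetical sketch: learning a user's habits from past choices and
    # using those learned counts to personalise future suggestions.

    from collections import Counter


    class PreferenceModel:
        def __init__(self) -> None:
            self.counts = Counter()

        def observe(self, choice: str) -> None:
            """Record one observed user choice, e.g. a selected playlist or recipe."""
            self.counts[choice] += 1

        def rank(self, options: list[str]) -> list[str]:
            """Order options so the most frequently chosen ones come first."""
            return sorted(options, key=lambda option: self.counts[option], reverse=True)


    model = PreferenceModel()
    for past_choice in ["jazz", "podcast", "jazz", "jazz", "news"]:
        model.observe(past_choice)

    print(model.rank(["news", "podcast", "jazz", "rock"]))
    # prints ['jazz', 'news', 'podcast', 'rock']: the options the user has
    # picked most often are quietly promoted to the top.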

Alphabet and a host of other companies are competing to build specialized language models to achieve this goal.

However, “Did we ever have a single internet service provider? In the same vein, we will require multiple foundational model providers for a healthy ecosystem to function. A lot of the power to develop these systems has been constantly in the hands of few companies that have the resources to do it,” said Suresh Venkatasubramanian, a professor at Brown University and former assistant director in the White House Office of Science and Technology Policy. There is a popular view that competitors to OpenAI should be embraced; however, they are hard to build and, more importantly, very difficult to democratize.


Uma currently works as a delivery lead in a leading bank, managing anti-money laundering projects. She started her career setting up and managing data centers and disaster recovery centers, before moving on to building niche healthcare business analysis teams. She would like to share her experiences and best practices across industries from the point of view of a common user. The views expressed are the author's and not those of the bank or Sify.
