By darenklum

Making Digital Monsters: The Rising Concern of Uncontrolled AI and Weaponized Internet

Updated: Mar 28




As a child, I often imagined monsters hiding under my bed or in my closet, and they scared the heck out of me. As an adult, my fears have evolved from those imaginary monsters to a more tangible one: the Internet. Initially, the Internet was a world of endless possibilities, offering freedom, expression, learning, and a diversity of ideas. I have always loved the early days of the Internet. But sadly, it has transformed into something more menacing, controlled by entities driven by power, money, and control. It has become a digital weapon, soon to control our finances and our very freedoms. Today, by some estimates, 60% of the Internet is controlled by companies that manage our data, use algorithms to analyze our information, and use AI to profile each one of us, exerting digital dominance over our lives and society. We see algorithms being used to manipulate us, addict us, and mine our most intimate secrets. This is far from the free and open Internet I once knew; it's a monster, a monster that is devouring people digitally, and we have no idea how this bad dream ends. Yes, that's probably sensationalizing things a tad, but you get the point. What I see now is not good, and we must change course immediately. I know you see it too; it's hard to miss.


The question now is: we've created a monster, so where do we go for safety?


AI: A Monster of Our Own Making?

Artificial Intelligence has permeated every aspect of our lives, revolutionizing industries and enhancing daily experiences. However, this powerful tool, much like fire in the hands of early humans, holds the potential for both creation and destruction. The primary concern lies in the possibility of AI evolving beyond our control. As AI systems become more autonomous and complex, the risk of unintended consequences increases. We've already seen instances where AI algorithms exhibited biased decision-making or unexpected behaviors, which illustrates the importance of oversight and ethical considerations in AI development.


What a lot of people don't realize is that AI is not traditional software. Traditional software is something you build, and it performs the function you programmed. AI is different in the sense that it evolves, morphs, and learns in ways its creators never explicitly programmed. We don't yet know enough about AI to skip putting very steep guardrails around the technology, because we truly have no idea the full extent of what it can do over time as it keeps morphing, learning, growing, and enhancing itself. In fact, we are at a point where AI can train itself, improve itself, and even help build new, improved versions of itself. This will only accelerate, and we simply don't know the full extent of the dangers this paradigm holds for humanity. I have major fears, and rightfully so.


The Dangers of Unregulated AI

Unregulated AI presents multiple risks, the most significant being the absence of accountability. It's difficult to assign responsibility when AI systems, particularly those involved in critical decision-making, act without clear oversight. Such systems, if they lack transparent governance, can unintentionally reinforce and magnify existing societal biases, leading to unjust and potentially damaging results. You can already see it happening, and at scale!


I'm increasingly concerned about the safety of users interacting with AI-generated content, given some projections that AI could soon generate 60-80% of all online content, spanning text, images, and video. This proliferation of potentially misleading or convincingly fake content poses a significant threat, exacerbating issues such as fake news. It challenges our understanding of truth, because the manipulation of news and information by those in power could make distinguishing fact from fiction nearly impossible. In such an environment, the reliability of what we see becomes questionable, shaped by AI biases and programming. We're entering an era where humans curate rather than create content, blurring the concepts of truth, ownership, and accountability. As a technology enthusiast and early adopter, I find these developments deeply unsettling.


Consider the routine of your daily life, where nearly every task involves interacting with a device screen, an app, or a platform. Whether it's making calls, sending texts, drafting documents, or attending Zoom meetings, our actions increasingly rely on a third party, often referred to as "the cloud." In this AI-driven, cloud-connected world, all our data inputs and outputs are funneled through these intermediaries, giving them comprehensive knowledge and control over our digital lives, down to seemingly trivial details. Every interaction, idea, or creation is logged and stored. Can this be altered? Of course it can, but nobody wants you to hear or know that fact. Recent vulnerabilities in the global data infrastructure have exposed the fragility of our personal information, leaving it open to exploitation by skilled hackers, rogue cloud insiders, or bad actors like rogue nation-states. The consequences are profound, raising pressing concerns about privacy, security, and the fundamental nature of our digital reality. The essential question emerges: can we truly trust this new era of 'intelligence'? One straightforward answer is that the only entity we can unequivocally trust is ourselves, yet our digital systems are designed to encourage us to relinquish personal trust and place it in the hands of third parties. In light of this, one might question the sanity of a system where personal trust is surrendered so readily. I sure do.


Weaponizing the Internet: Quantum Computing and Advanced Algorithms

The emergence of quantum computing and sophisticated algorithms has opened new capabilities in processing power and problem-solving. However, these technologies have also introduced new threats. A sufficiently powerful quantum computer running Shor's algorithm, for example, is expected to break today's public-key cryptographic standards such as RSA and elliptic-curve cryptography, and some worry that even the new NIST post-quantum algorithms may not hold up over time, jeopardizing the security of digital communications worldwide. Additionally, advanced algorithms are now capable of executing cyber-attacks with unprecedented precision, speed, and efficiency, transforming the internet into a battleground of digital warfare. The war of computing and algorithms is raging right now, and the United States is not winning this war!
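To make that threat a little more concrete, here's a minimal back-of-the-envelope sketch (my own illustration, not taken from any standard or vendor) of how the quantum risk is usually reasoned about: Shor's algorithm effectively zeroes out RSA and elliptic-curve security, while Grover's search only roughly halves the effective strength of a symmetric key. The security figures and the helper function are assumptions for illustration only.

```python
# Rough sketch: remaining security against a large, fault-tolerant quantum computer.
# Classical security estimates below are the common textbook values.

SYMMETRIC = {"AES-128": 128, "AES-256": 256}      # classical security, in bits
PUBLIC_KEY = {"RSA-2048": 112, "ECC P-256": 128}  # classical security, in bits

def quantum_effective_bits(classical_bits: int, is_symmetric: bool) -> int:
    """Estimate the security that remains once quantum attacks are in play."""
    if is_symmetric:
        return classical_bits // 2  # Grover: brute force in roughly 2^(n/2) steps
    return 0                        # Shor: factoring / discrete log in polynomial time

if __name__ == "__main__":
    for name, bits in {**SYMMETRIC, **PUBLIC_KEY}.items():
        remaining = quantum_effective_bits(bits, name in SYMMETRIC)
        print(f"{name}: ~{remaining} bits against a quantum attacker")
```

The takeaway from this sketch is that long symmetric keys weather the storm far better than today's public-key schemes, which is why "quantum-safe" designs lean on strong symmetric encryption and new key-exchange approaches.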


The Consequences of Cyber Weaponization

The weaponization of the internet through these technologies leads to several alarming consequences. Critical infrastructures, like power grids and financial systems, become vulnerable to devastating attacks that can disrupt nations. Personal data is at constant risk of being compromised, eroding the trust and privacy of individuals. Do you really trust the systems you use today? I know I don't!


The Need for End-to-End Security

In response to these threats, there is an urgent need for a new security paradigm that provides truly end-to-end protection. This paradigm should encompass not only robust encryption standards that can withstand quantum computing threats but also ethical guidelines and regulations governing AI and algorithmic usage. The focus should be on developing systems that are secure by design, incorporating privacy and safety from the initial stages of development.
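As a rough illustration of what "secure by design" and end-to-end protection can mean in practice, here's a minimal sketch of client-side encryption: data is sealed on the user's device with AES-256-GCM before anything touches a network or cloud, so intermediaries only ever handle ciphertext. This is a generic example using the open-source `cryptography` Python package; the function names are mine, and it is not a description of Secured2's actual implementation.

```python
# Minimal sketch of client-side ("end-to-end") encryption: the key never leaves
# the device, so any cloud or network intermediary only ever sees ciphertext.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_locally(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt on-device with AES-256-GCM; only the sealed blob leaves the machine."""
    nonce = os.urandom(12)                       # unique nonce per message
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce + ciphertext                    # ship the nonce alongside the ciphertext

def decrypt_locally(blob: bytes, key: bytes) -> bytes:
    """Decrypt on-device; no third party ever needs the key."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)    # generated and kept under the user's control
    blob = encrypt_locally(b"my private notes", key)
    # `blob` is all a cloud provider would store: opaque bytes, nothing readable.
    assert decrypt_locally(blob, key) == b"my private notes"
```

The design choice that matters here is simple: the key is created and kept on the device, so no cloud provider, insider, or upstream attacker ever has anything readable to steal.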


Am I saying we shouldn't have AI? Am I saying we shouldn't touch anything digital? No. What I am saying is that the human in this equation is suffering. Until we fix the broken nature of these systems, give power back to the people, and secure them 100%, how can we use these tools, trust these platforms, and continue down a path where the human loses and the platforms win? We need balanced scales, and right now there is a horrible imbalance.


This is exactly why Secured2 is building the platform of the future, putting the human back in control, secured and anonymous. No more centralized authority to monitor and tap into our personal lives. No more tools of mass surveillance and data manipulation. That's not the world we should want in America, land of the free and home of the brave. And yes, I'm being brave by sounding the alarm on our digital world. We need to take back control, and we do that with our dollars and by investing in companies that have your best interest at heart.


Building a better digital world, one capability at a time

At Secured2, we recognize the critical need for responsible technology evolution, particularly in this new era focused on privacy and secure technology. Our efforts over several years have centered on developing tools designed to safeguard your privacy and ensure anonymity. Now, we're preparing to invest significantly in establishing our own infrastructure. This move is aimed at gaining greater control over the environment that delivers our services, fulfilling our commitment to the highest levels of security, privacy, and service uptime.


I'll finish by saying this: as we face the possibility of releasing potent digital forces that can harm us, it's crucial to contemplate our future direction carefully. The unchecked advancement of AI and the potential misuse of the internet with sophisticated technologies pose serious challenges. Nonetheless, by adopting thoughtful regulations, pursuing ethical AI development, implementing new technology platforms like those offered by Secured2 that shift the paradigm in your favor, and engaging in collaboration, we can guide the course towards a future where technology serves as a force for good rather than a tool for harm. The choice is ours, but we hope to give you a choice, something you don't have today because your options are either the cloud or cobbled-together solutions that are nearly impossible to use or manage. Secured2 is your own personal cloud platform, under your control and protected from the new quantum and AI threats. Yes, you now have a choice, and you have a new ally in the fight for freedom!
