Artificial Intelligence: Dumb New Buzzword, or the Next Evolution in Tech?

By: Ushi Heffernan

These days you can’t shake a stick without hitting a tech product with the term AI, or Artificial Intelligence, attached to it. Whether it’s your search engine, email provider, social media platform, or the latest security tools, AI has inundated the world of pretty much anyone who uses a computer or gets online.

Here’s the thing, though: most people (yes, even the “experts”) don’t actually understand what artificial intelligence is. I was part of that group until recently.

Small announcement: I’m working with Intezer (BIG woohoo!), and they are about to publish an ebook they contracted me to write about mitigating SOC challenges to improve efficiency, decrease alert fatigue, and improve automation efforts. I’m passionate about SOC analyst working conditions, because I know that even in the best SOC, with the best bosses and leadership, alert fatigue is a very real danger for organizations. Too many alerts can lead to missed cyber incidents, which in turn can mean deeper attacker footholds or data loss, which equals money lost. So when they told me they had a product using artificial intelligence to help AUTOMATE tier 1 SOC investigations and significantly reduce investigation times on more serious incidents, I was intrigued, to say the least. But I didn’t understand artificial intelligence much better than the average non-technical user. So I needed to remedy that before I could speak intelligently about how their product uses artificial intelligence, and decide whether I felt confident enough in that approach to recommend it. (You’ll just have to read the book to find out the rest ;))

Many people first became aware of artificial intelligence with the launch of OpenAI’s ChatGPT in November of 2022. The popular large language model (LLM) went viral immediately and had over 1 million users within a few days. Rightfully so: people were impressed by what AI could do for them. However, few people understand that LLMs are just one technology under the AI umbrella: they aren’t the only type of AI out there, and AI itself certainly isn’t new.

The term “artificial intelligence” was coined in the mid-1950s by John McCarthy, widely considered the father of artificial intelligence. I could write an entire history lesson on AI, as it’s fascinating. But for now, know that the concept of AI has been around for decades and has evolved and grown. This isn’t some new concept by a long shot.

Ok, so what actually is artificial intelligence?

Artificial Intelligence (AI) is a branch of computer science that aims to create machines capable of performing tasks that would usually require human intelligence. AI is an interdisciplinary science with multiple approaches. Advancements in machine learning and deep learning, specifically, are creating a paradigm shift in practically every industry.

Key components of AI:

  • Machine Learning (ML):

    • ML is a branch of AI and computer science that uses data and algorithms to imitate the way humans learn. This allows machines to learn from examples and improve their performance over time without being explicitly programmed with rules for every case (there’s a short code sketch of this idea right after this list).

  • Neural Networks:

    • Neural networks, aka artificial neural networks (ANNs), are built from layers of nodes: an input layer, one or more hidden layers, and an output layer. Each node connects to nodes in the next layer and has an associated weight and threshold value. If a node’s output is above its threshold, the node activates and passes its data on to the next layer of the network.

  • Deep Learning (DL):

    • DL is a sub-field of machine learning built on neural networks with many layers; the word “deep” refers to the number of layers in the network. DL can ingest large amounts of unstructured data in its raw form and work out for itself which features distinguish one category of data from another (the second sketch after this list walks through a toy multi-layer forward pass).

  • Generative AI / Large Language Models (LLMs):

    • LLMs are a class of foundation models: models trained on enormous amounts of data to provide foundational capabilities that can drive many different use cases and applications and handle a multitude of tasks. This is in stark contrast to building and training a domain-specific model for each use case individually, which is prohibitively expensive in cost and infrastructure, stifles synergies, and can even lead to inferior performance.

    • LLMs represent a significant breakthrough in Natural Language Processing (NLP) and artificial intelligence, and they’re easily accessible to the public through interfaces like OpenAI’s ChatGPT (backed by models such as GPT-3.5 and GPT-4). Other examples include Meta’s Llama and RoBERTa models and Google’s BERT (Bidirectional Encoder Representations from Transformers) and PaLM models.

    • In a nutshell, LLMs are designed to understand and generate text the way a human would, along with other forms of content, based on the vast amount of data used to train them. They can infer from context, generate coherent and contextually relevant responses, translate between languages, summarize text, answer questions, and assist with creative writing or code generation (the last sketch after this list shows what calling a small open model looks like in code).
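
To make the machine learning bullet a little more concrete, here’s a minimal sketch in Python. It assumes you have scikit-learn installed, and the tiny “dataset” is completely made up for illustration. The point is that we never write an explicit rule like “more than five links means spam”; the model works out its own rule from labeled examples.

```python
# A minimal machine-learning sketch (assumes scikit-learn is installed).
# The toy data is invented purely for illustration: each row is
# [number_of_links, number_of_spelling_errors] for an email, and the
# label says whether a human marked it as spam (1) or not (0).
from sklearn.tree import DecisionTreeClassifier

X = [
    [0, 0],  # no links, clean text        -> not spam
    [1, 0],  # one link, clean text        -> not spam
    [7, 5],  # lots of links, sloppy text  -> spam
    [9, 3],  # lots of links, some errors  -> spam
]
y = [0, 0, 1, 1]

# No hand-coded rules here; the model learns a decision boundary
# from the labeled examples above.
model = DecisionTreeClassifier()
model.fit(X, y)

print(model.predict([[8, 4]]))  # resembles the spam rows, so likely [1]
print(model.predict([[0, 1]]))  # resembles the clean rows, so likely [0]
```

That’s the whole trick behind “learning without being explicitly programmed”: the rules live in the fitted model, not in the code.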
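
The neural network and deep learning bullets describe layers of nodes passing values forward. Here’s a sketch of that forward pass using NumPy. The weights are random rather than trained (training is what would make them meaningful), and the ReLU activation stands in for the “threshold” described above. Adding more hidden layers is what earns a network the label “deep.”

```python
# A tiny neural-network forward pass (assumes NumPy is installed).
# The weights are random, not trained; a real network would adjust
# them by learning from data.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Activation function: plays the role of the "threshold" above.
    # Negative values are suppressed; positive values pass through.
    return np.maximum(0, x)

# Input layer: 4 features describing one example (values invented).
x = np.array([0.2, 0.7, 0.1, 0.9])

# Two hidden layers and an output layer, each defined by a weight matrix.
W1 = rng.normal(size=(4, 8))  # input (4)      -> hidden layer 1 (8 nodes)
W2 = rng.normal(size=(8, 8))  # hidden layer 1 -> hidden layer 2 (8 nodes)
W3 = rng.normal(size=(8, 1))  # hidden layer 2 -> output layer (1 node)

# Forward pass: each layer's output becomes the next layer's input.
h1 = relu(x @ W1)
h2 = relu(h1 @ W2)
output = h2 @ W3

print(output)  # a single raw score; training is what would make it useful
```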
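
And for the LLM bullets, here’s roughly what using one looks like from code. This sketch assumes the Hugging Face transformers library (plus a backend like PyTorch) is installed, and it uses GPT-2, a small, older open model, purely because it’s freely downloadable. The commercial models mentioned above are reached through their own apps and APIs instead, but the shape of the interaction is the same: prompt in, generated text out.

```python
# A rough sketch of generating text with a small open language model.
# Assumes the Hugging Face "transformers" library is installed; the
# first run downloads the GPT-2 model weights.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "In a security operations center, alert fatigue happens when"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model continues the prompt based on patterns in its training data.
print(result[0]["generated_text"])
```

GPT-2 is tiny compared to the models behind ChatGPT, so don’t expect brilliance from it; the point is simply to show what “generate text from a prompt” means in practice.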

Buzzwords are a funny phenomenon. Often they’re used to describe a new technology emerging in a particular industry, and just as often the terms get thrown around without much understanding of what’s behind them. That leads to widespread misunderstandings and assumptions that the tech can do things it truly can’t. Artificial intelligence is a great example. Plenty of people are running around worried that AI is taking over all the things, when it simply can’t do many of the things the uninformed claim it can. Additionally, we have to remember that the popular LLMs, while one component of AI, aren’t the only component. It’s kind of an “all thumbs are fingers, but not all fingers are thumbs” situation.

AI is an amazing technology that has evolved into something exciting. But we also have to remember that, just like the rest of computer science and coding, it still takes a human to utilize and orchestrate these algorithms in their own way.

“Ushi” Heffernan-Cowan is the CEO and chief hacker at HackerHaus Security Solutions, LLC.

Resources and further reading:

https://intezer.com/

https://www.forbes.com/sites/bernardmarr/2023/05/19/a-short-history-of-chatgpt-how-we-got-to-where-we-are-today/

https://www.ibm.com/topics/large-language-models
