Smart Data is at the Heart of Artificial Intelligence

Artificial intelligence seems to be everywhere you look these days, this article notwithstanding. But seriously, while AI is commonly seen as a complex but innovative solution for curing diseases and fostering social good, today it is being applied to everything from drafting a better fantasy football team to crafting pick-up lines (how you doin’?). Yikes.

No matter what use case AI aims to solve, it needs data – smart data. After all, AI can only become intelligent if it is trained on the right information to help it continually learn and eventually outsmart us all. Of course, none of this should come as a surprise. What is shocking is how poorly many AI projects perform. In the rush to deploy the coolest, newest, buzziest AI solution, it seems we’re all forgetting the importance of leading with the ground truth – the data that will make AI work.

Newsflash: Most AI Projects Fail

Most companies pursuing AI projects lack a strong foundation of clean, accurate, structured data – the training data machines need to learn from over time.

Common Challenges Sinking Most AI Projects

  • Scarcity of semantically enriched data – structured or unstructured
  • Lack of clean, accurate, quality data
  • Outdated or incomplete data
  • Inconsistent data found in multiple formats and locations
  • Insufficient data (not enough samples)
  • Not knowing what data is required

According to a recent research report by MIT Technology Review, insufficient data quality was the second biggest obstacle to employing AI, narrowly behind a shortage of internal talent. What’s more, 85% of AI projects will “not deliver,” according to research by Gartner. The lack of smart data is at the heart of the problem. “You can’t feed the algorithms if you don’t have data. Solid, clean data in large volumes, well-tagged and well organized is crucial,” said Michael Conlin, Chief Data Officer at the Department of Defense.

For artificial intelligence to accurately guide your business decisions and fuel your machine learning algorithms, you must move from big data to smart data.

Big data is abundant and created by everything we touch. Bountiful, yes. Helpful? Not so much. Big data can be both structured and unstructured, but until it’s filtered, cleaned and analyzed, it’s not smart data. Smart data is most commonly described by the following characteristics.

Accuracy

While this characteristic seems obvious, it cannot be overstated. Many of us will have different definitions and expectations of what accuracy is, but it’s essentially correct and consistent information that can be used to guide efficient decisions. Accurate data should be correctly defined, in a consistent manner, in accordance with the expected data standards of a particular business model. But accuracy doesn’t happen by itself; it takes human intervention to define these essential data attributes. In many cases the concept of accuracy is nuanced, so it must be taken in the context of the particular attribute you’re using it for. If your data is even marginally incorrect, it can derail your objectives.
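To make that concrete, here is a minimal sketch in Python of what an accuracy check against agreed-upon standards might look like; the field names and validation rules are hypothetical stand-ins for whatever your business model defines.

```python
# Hypothetical accuracy check: validate each record against the data
# standards a business has defined for its attributes.
EXPECTED_STANDARDS = {
    "age": lambda v: isinstance(v, int) and 0 <= v <= 120,
    "country": lambda v: v in {"US", "CA", "GB", "DE"},  # illustrative code list
    "email": lambda v: isinstance(v, str) and "@" in v,
}

def accuracy_errors(record: dict) -> list[str]:
    """Return the fields in a record that violate the defined standards."""
    return [
        field for field, is_valid in EXPECTED_STANDARDS.items()
        if field in record and not is_valid(record[field])
    ]

record = {"age": 34, "country": "US", "email": "jane.doe"}
print(accuracy_errors(record))  # ['email'] – even marginally incorrect data gets flagged
```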

Completeness

This characteristic can be measured by how well the data set captures all data points available for a given instance. A complete data set should have no gaps between what was expected to be collected and what was actually collected. For example, if a person’s medical record only covers their most recent check-up, then that data set will misjudge the patient’s true health. The data must paint a full picture to provide the right answers to your questions.
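One simple way to quantify completeness is to score each record by the share of expected data points it actually contains. Here is a minimal sketch, with a hypothetical medical schema standing in for your own:

```python
# Minimal sketch: score completeness as the share of expected fields
# that were actually collected (non-missing) for a given record.
EXPECTED_FIELDS = ["patient_id", "visit_date", "blood_pressure",
                   "cholesterol", "medications"]  # hypothetical schema

def completeness(record: dict) -> float:
    """Fraction of expected data points present in this record."""
    present = sum(1 for f in EXPECTED_FIELDS if record.get(f) is not None)
    return present / len(EXPECTED_FIELDS)

recent_checkup_only = {"patient_id": 17, "visit_date": "2024-03-01"}
print(f"{completeness(recent_checkup_only):.0%}")  # 40% – gaps remain in the record
```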

Uniqueness

This attribute refers to data that can stand alone and not be found in multiple formats and locations within your database. In other words, there should be no duplicates of the same record. Unfortunately, many companies create the same record over and over without even knowing it. Whether it’s a slight modification of the naming convention or inaccurate labeling, the lack of a single source of truth could create challenges with accuracy over time. This is why you’ll often hear the term standardization. Having standardized data allows organizations to find meaningful ways to compare data sets. This is necessary for inputting information, but it is even more important in identifying duplication.
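To illustrate, here is a minimal sketch of standardizing records before comparing them, so that slight naming variations don’t hide duplicates; the normalization rules shown are illustrative only.

```python
# Minimal sketch: standardize records before comparing them, so slight
# modifications of a naming convention don't hide duplicates.
def standardize(name: str) -> str:
    """Collapse casing, punctuation, and whitespace variations."""
    return " ".join(name.lower().replace(".", "").replace(",", "").split())

records = ["Acme Corp.", "acme corp", "ACME  Corp", "Apex Ltd"]

seen: dict[str, str] = {}
for raw in records:
    key = standardize(raw)
    if key in seen:
        print(f"Duplicate: {raw!r} matches {seen[key]!r}")
    else:
        seen[key] = raw
```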

Timeliness

Data is constantly in flux. That’s why it’s imperative to be able to collect and update it in a timely fashion. You also need a clear understanding of when data is no longer useful. For example, a provision to a financial agreement must be accounted for the moment it is set. If there is significant lag between when the data is collected and when it is used to drive a business decision, it could result in expensive consequences. Data collected too soon or too late could disrupt machine learning outputs.
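As an illustration, here is a minimal sketch of a freshness check; the 30-day window is an assumed requirement, not a universal rule.

```python
# Minimal sketch: flag data points that are too stale (or collected too
# soon, i.e., timestamped in the future) to safely drive a decision.
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=30)  # hypothetical freshness requirement

def is_timely(collected_at: datetime, now: datetime | None = None) -> bool:
    """True if the data point is recent enough to use."""
    now = now or datetime.now(timezone.utc)
    age = now - collected_at
    return timedelta(0) <= age <= MAX_AGE

stale = datetime.now(timezone.utc) - timedelta(days=90)
print(is_timely(stale))  # False – lag this long could be expensive
```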

Quality Always Trumps Quantity

All of these characteristics come together to determine data quality – the basis for making good decisions. As more organizations invest in artificial intelligence and machine learning, data scientists must focus on overall quality, especially that of the metadata. Metadata is what describes the data, and the lack of such information is one of the primary causes of bad data. If you’re training algorithms without a solid metadata foundation, they will never become reliable enough to meet your particular needs.
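To make the idea tangible, here is a minimal sketch of what a metadata record for a training data set might capture; the fields shown are illustrative, not a formal standard.

```python
# Minimal sketch: a metadata record describing a training data set.
from dataclasses import dataclass, field

@dataclass
class DatasetMetadata:
    name: str
    source: str                # where the data came from
    collected_on: str          # when it was gathered
    schema: dict[str, str] = field(default_factory=dict)  # field -> meaning
    tags: list[str] = field(default_factory=list)         # labels for discovery

meta = DatasetMetadata(
    name="customer_churn_v3",
    source="CRM export",
    collected_on="2024-01-15",
    schema={"tenure_months": "int, months as a customer",
            "churned": "bool, target label"},
    tags=["churn", "training", "pii-scrubbed"],
)
print(meta.name, meta.tags)
```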

Beyond the data itself, there are severe constraints that can impede analytics and deep learning, including security, privacy, compliance, IP protection, and physical and virtual barriers. These constraints need to be carefully considered. It doesn’t help the enterprise if it has collected and cleaned the data only to find it’s inaccessible for various reasons. Often, steps need to be taken such as scrubbing the data so that no private content remains. Sometimes, agreements need to be made between parties that are sharing data, and sometimes technical work needs to happen to move the data to locations where it can be analyzed.
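As a simple illustration of that scrubbing step, here is a minimal sketch that redacts obvious private content before data is shared; real PII removal demands far more rigor than these illustrative patterns.

```python
# Minimal sketch: scrub obvious private content (emails, US-style phone
# numbers) before data is shared or moved for analysis.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace matched private content with placeholder tokens."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

print(scrub("Reach Jane at jane@example.com or 555-867-5309."))
```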

Organizing, cleaning and structuring data may be the least glamorous part of your company’s AI initiative, but it’s certainly the most important. Without that solid data quality foundation to build your models, you’ll never achieve reliable and valid results. Be sure your data has these “smart” characteristics.

Innodata (NASDAQ: INOD) is a global data engineering company delivering the promise of AI to many of the world’s most prestigious companies. We provide AI-enabled software platforms and managed services for AI data collection/annotation, AI digital transformation, and industry-specific business processes. Our low-code Innodata AI technology platform is at the core of our offerings. In every relationship, we honor our 30+ year legacy of delivering the highest quality data and outstanding service to our customers.
