The selection and preparation of data are foundational steps in any machine learning project. The type of data required for machine learning is dictated primarily by the nature of the problem to be solved and the desired output. Data can take many forms—including images, text, numerical values, audio, and tabular data—and each form necessitates specific handling, preprocessing, and modeling strategies.
Types of Data for Machine Learning
1. Structured Data
Structured data refers to information that is organized in a well-defined manner, often within tables or databases. This format includes rows and columns, where each column represents a feature (variable) and each row represents an observation (instance). Examples include customer demographics (age, income, gender), transactional data (sales records), and sensor readings (temperature, pressure, humidity).
Structured data is amenable to various machine learning models, including linear regression, logistic regression, decision trees, and ensemble methods. Feature engineering and data cleaning are critical when working with this type of data.
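As a minimal sketch of how tabular rows and columns feed such models, the following example fits a decision tree to a small, entirely hypothetical customer table (the column names and values are invented for illustration):

```python
# Minimal sketch: training a decision tree on a small, hypothetical
# structured dataset (columns are features, rows are observations).
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

data = pd.DataFrame({
    "age":    [25, 40, 35, 50, 23],
    "income": [40_000, 85_000, 62_000, 90_000, 38_000],
    "bought": [0, 1, 1, 1, 0],   # target: did the customer buy?
})

X = data[["age", "income"]]   # feature columns
y = data["bought"]            # label, one value per row

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# Predict for a new, unseen observation
new_customer = pd.DataFrame({"age": [30], "income": [55_000]})
print(model.predict(new_customer))
```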
2. Unstructured Data
Unstructured data lacks a predefined data model and is not organized in a tabular format. This category encompasses:
– Images: Used in computer vision applications such as facial recognition, object detection, and medical imaging. Each image is typically represented as a matrix of pixel values.
– Text: The primary data type for natural language processing (NLP). Applications include sentiment analysis, document classification, and machine translation. Text data typically requires preprocessing steps such as tokenization, stop-word removal, stemming or lemmatization, and vectorization.
– Audio: Utilized in speech recognition and audio classification. Audio data is represented as waveforms or spectrograms.
– Video: Combines sequences of images (frames) and audio, used in activity recognition and video classification.
3. Semi-Structured Data
Semi-structured data contains both structured and unstructured elements. Examples include JSON and XML files, where data is not stored in tables but still contains tags or keys that organize information. This data is prevalent in web applications and APIs.
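A minimal sketch of flattening such records into a table, assuming a hypothetical JSON payload of the kind a web API might return:

```python
# Sketch: flattening semi-structured JSON records (hypothetical API
# payload) into a tabular form suitable for standard ML tooling.
import pandas as pd

records = [
    {"id": 1, "user": {"age": 34, "country": "DE"}, "amount": 12.5},
    {"id": 2, "user": {"age": 28, "country": "PL"}, "amount": 7.0},
]

df = pd.json_normalize(records)   # nested keys become dotted columns,
print(df.columns.tolist())        # e.g. 'user.age', 'user.country'
```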
Data Requirements in the Context of Machine Learning Steps
The canonical machine learning workflow consists of seven steps: problem definition, data collection, data preparation, model selection, training, evaluation, and deployment. The importance of data manifests at multiple points in this workflow.
1. Problem Definition
Understanding the business or research problem informs the type of data required. For example, predicting housing prices requires structured data on house characteristics, while detecting spam emails uses text data.
2. Data Collection
Data can be sourced internally (databases, logs, transaction records) or externally (public datasets, sensors, web scraping). The data collected must be representative of the problem space and sufficient in quantity and quality to enable model learning.
3. Data Preparation
Data must be cleaned, transformed, and formatted for use with machine learning algorithms. For structured data, this includes handling missing values, encoding categorical variables, and scaling numerical features. For images, preprocessing may involve resizing, normalization, and augmentation. For text, preprocessing typically includes normalization, tokenization, stemming, and converting text into numerical representations such as bag-of-words or embeddings.
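The following sketch illustrates these preparation steps for structured data using scikit-learn; the column names and values are hypothetical:

```python
# Sketch of typical structured-data preparation: impute missing values,
# encode a categorical column, and scale numeric columns.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({
    "age":    [25, None, 35, 50],
    "city":   ["Berlin", "Warsaw", None, "Berlin"],
    "income": [40_000, 85_000, 62_000, None],
})

numeric = ["age", "income"]
categorical = ["city"]

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("encode", OneHotEncoder(handle_unknown="ignore"))]),
     categorical),
])

X = preprocess.fit_transform(df)
print(X.shape)  # rows preserved, columns expanded by one-hot encoding
```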
4. Model Selection
The type of data influences model selection. Convolutional neural networks (CNNs) are suited for image data, recurrent neural networks (RNNs) and transformers for sequential text data, and gradient boosting machines for tabular data. The choice of model architecture is inherently linked to the data modality.
5. Model Training
The quantity and diversity of data impact the model’s ability to generalize. For deep learning, large volumes of labeled data are often required, especially for image and text tasks. Data augmentation techniques may be used to artificially expand the dataset, particularly in image processing.
6. Evaluation
A separate portion of the data (validation and test sets) must be withheld to assess model performance. This ensures that the model generalizes beyond the training data. For imbalanced datasets, evaluation metrics beyond accuracy, such as precision, recall, and F1-score, are often necessary.
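A minimal evaluation sketch on synthetic, imbalanced data, holding out a test set and reporting precision, recall, and F1-score alongside accuracy:

```python
# Sketch: holding out a test set and reporting precision/recall/F1,
# which matter when classes are imbalanced (here, a 9:1 split).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, weights=[0.9, 0.1],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```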
7. Deployment
The model, once trained and evaluated, is deployed to make predictions on new, real-world data. The data pipeline must ensure that the input data at inference time matches the format and preprocessing steps used during training.
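One common way to keep training-time and inference-time preprocessing identical is to bundle both into a single pipeline object; the sketch below uses scikit-learn, with a generic file name for the persisted model:

```python
# Sketch: bundling preprocessing and the model into one Pipeline so the
# exact same transformations run during training and at serving time.
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
pipeline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1_000))
pipeline.fit(X, y)

joblib.dump(pipeline, "model.joblib")   # persisted for deployment
served = joblib.load("model.joblib")    # loaded at serving time
print(served.predict(X[:1]))            # raw input; scaling happens inside
```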
Examples of Data Types for Machine Learning Tasks
– Image Classification: Requires labeled images. For example, a dataset of animal photos with labels indicating the species (cat, dog, bird).
– Sentiment Analysis: Utilizes text data such as customer reviews labeled with sentiment categories (positive, negative, neutral).
– Speech Recognition: Involves audio recordings paired with textual transcriptions.
– Fraud Detection: Makes use of structured transactional data with features such as transaction amount, time, location, and merchant.
– Medical Diagnosis: May rely on a combination of structured data (patient demographics, lab results) and unstructured data (radiology images, doctor’s notes).
Data Annotation and Labeling
Labeled data is required for supervised learning tasks, where each example is paired with the correct output. Data annotation can be manual (human annotators), semi-automated, or automated (using pre-existing labels). The accuracy of labels is critical, as mislabeled data can degrade model performance.
For unsupervised learning, labels are not required. Instead, the model seeks to discover structure in the data (e.g., clustering similar items). For reinforcement learning, data may consist of states, actions, and rewards observed as the agent interacts with an environment.
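A brief sketch of unsupervised learning on unlabeled, synthetic data: k-means groups points purely by similarity, with no labels involved:

```python
# Sketch: unsupervised learning needs no labels; k-means clusters
# synthetic points by proximity alone.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # labels ignored
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(clusters[:10])  # cluster index assigned to each point
```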
Data Quality and Quantity
High-quality data is accurate, consistent, complete, and relevant. Data quality issues (missing values, duplicates, outliers, inconsistent formats) must be addressed before modeling. The quantity of data impacts the statistical power and generalization of the model. For complex models (e.g., deep neural networks), large datasets are typically necessary to prevent overfitting.
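A quick data-quality audit might look like the following sketch (the DataFrame contents are invented to exhibit each issue):

```python
# Sketch: auditing a small, hypothetical DataFrame for the quality
# issues named above: missing values, duplicates, outliers, formats.
import pandas as pd

df = pd.DataFrame({
    "amount": [10.0, 10.0, None, 9_999.0],                   # outlier at the end
    "city":   ["Berlin", "Berlin", "Warsaw", "berlin"],      # inconsistent casing
})

print(df.isna().sum())                # missing values per column
print(df.duplicated().sum())          # exact duplicate rows
df["city"] = df["city"].str.title()   # normalize an inconsistent format
print(df["amount"].describe())        # summary stats help spot outliers
```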
Data Privacy and Compliance
When collecting and using data, it is important to consider privacy, confidentiality, and regulatory requirements (such as GDPR or HIPAA). Sensitive data must be anonymized or de-identified, and appropriate consent must be obtained.
Data Storage and Management
Storing and managing large datasets require robust infrastructure, particularly for image, audio, and video files that consume significant storage space. Cloud platforms, such as Google Cloud, provide managed storage solutions (Cloud Storage, BigQuery) and scalable processing resources for handling large-scale data.
Feature Representation
Raw data is often transformed into features suitable for machine learning algorithms. For structured data, this may involve aggregating, normalizing, or encoding variables. For images, features may be extracted using pre-trained neural networks. For text, common feature representations include term frequency-inverse document frequency (TF-IDF), word embeddings (Word2Vec, GloVe), or contextual embeddings from transformer models.
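As an illustration of one such representation, the sketch below computes TF-IDF vectors for a few invented sentences:

```python
# Sketch: turning raw text into TF-IDF feature vectors.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "the movie was great",
    "the movie was terrible",
    "a great and moving story",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)          # sparse document-term matrix
print(vectorizer.get_feature_names_out())   # learned vocabulary
print(X.shape)                              # (3 documents, n_terms)
```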
Data Examples by Modality
– Images: MNIST digit dataset (images of handwritten digits), CIFAR-10 (color images in 10 classes), chest X-rays (medical imaging).
– Text: IMDB movie reviews (sentiment analysis), news articles (topic classification), Wikipedia entries (language modeling).
– Tabular Data: Titanic passenger data (survival prediction), UCI Machine Learning Repository datasets (various domains).
– Audio: LibriSpeech (speech recognition), UrbanSound8K (environmental sound classification).
– Video: UCF101 (action recognition in videos), YouTube-8M (video classification).
Data Augmentation
Data augmentation techniques are applied to expand the diversity of the training data without collecting new examples. In images, augmentation methods include rotation, flipping, cropping, and color jittering. For text, augmentation may involve synonym replacement, back-translation, or paraphrasing.
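A sketch of the image augmentations named above, using torchvision transforms (the input file name is a placeholder):

```python
# Sketch: common image augmentations (rotation, flip, crop, color
# jitter) via torchvision; assumes torchvision and Pillow are installed.
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomResizedCrop(size=224),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])

image = Image.open("example.jpg")   # hypothetical input image
augmented = augment(image)          # a new, randomly transformed variant
```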
Handling Missing and Imbalanced Data
Real-world datasets often contain missing values or imbalanced class distributions. Techniques for handling missing data include imputation (mean, median, or model-based), while imbalanced data can be addressed with resampling (oversampling the minority class, undersampling the majority class) or with cost-sensitive algorithms that weight classes to compensate for the imbalance.
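A minimal sketch combining median imputation with class weighting, a simple cost-sensitive alternative to resampling (the data is synthetic):

```python
# Sketch: median imputation for missing values plus class weighting as
# a simple counter to class imbalance.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

X = np.array([[1.0, 2.0], [np.nan, 3.0], [4.0, np.nan], [5.0, 6.0]] * 25)
y = np.array([0, 0, 0, 1] * 25)   # 3:1 class imbalance

model = make_pipeline(
    SimpleImputer(strategy="median"),
    LogisticRegression(class_weight="balanced"),  # reweights minority class
)
model.fit(X, y)
```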
Integration with Google Cloud Machine Learning
Google Cloud offers a suite of tools for managing and processing data for machine learning; a brief usage sketch follows the list below:
– Google Cloud Storage: For scalable storage of large datasets (images, text, audio).
– BigQuery: For handling structured, tabular data at scale.
– Cloud Dataflow and Dataprep: For data pipeline creation and preprocessing.
– Vertex AI: For building, training, and deploying machine learning models, with integrated tools for data labeling and feature engineering.
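As an illustrative sketch, uploading a local dataset to Google Cloud Storage with the official Python client might look as follows; the bucket name and file paths are placeholders, and credentials (e.g., a service account) are assumed to be configured separately:

```python
# Sketch: uploading a local training file to Google Cloud Storage with
# the google-cloud-storage client library. Bucket and paths are
# placeholders; authentication is assumed to be set up in advance.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-ml-datasets")    # hypothetical bucket name
blob = bucket.blob("training/data.csv")     # destination object path
blob.upload_from_filename("data.csv")       # local file to upload
print(f"Uploaded to gs://{bucket.name}/{blob.name}")
```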
Ethical Considerations
The choice and use of data in machine learning should be guided by ethical considerations, ensuring fairness, transparency, and avoidance of bias. Data diversity helps prevent models from learning and perpetuating societal biases. Regular audits and bias assessments are recommended.
Summary
Selecting the right data for machine learning hinges on the specific problem and the desired application. Whether using structured tables, unstructured text, images, or audio, the data must be carefully curated, preprocessed, and managed to enable successful model development. The data pipeline, from collection to preparation and feature engineering, forms the backbone of any machine learning workflow and directly influences the resulting model’s accuracy and reliability.