Alejandra Vasquez and Ericson Hernandez employed a systematic, meticulous approach to gather data for their machine learning model, which aimed to identify potholes on Los Angeles roads using TensorFlow. Their methodology involved several steps, each designed to ensure a comprehensive and diverse dataset.
To begin with, Alejandra and Ericson identified locations in Los Angeles that were prone to potholes. They selected roads with different characteristics, such as high-traffic areas, residential streets, and roads with varying surface materials, so that the dataset would encompass a wide range of road conditions and pothole occurrences.
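In code, such a stratified selection of road segments can be sketched roughly as follows. The segment records, category names, and per-category sample size here are illustrative assumptions, not the authors' actual data:

```python
import random

# Hypothetical road-segment records; the fields and categories are
# illustrative assumptions, not the authors' actual schema.
segments = [
    {"id": "seg-001", "category": "high_traffic", "surface": "asphalt"},
    {"id": "seg-002", "category": "residential", "surface": "concrete"},
    {"id": "seg-003", "category": "high_traffic", "surface": "concrete"},
    {"id": "seg-004", "category": "residential", "surface": "asphalt"},
]

def stratified_sample(segments, per_category, seed=42):
    """Draw an equal number of segments from each road category so no
    single road type dominates the dataset."""
    random.seed(seed)
    by_category = {}
    for seg in segments:
        by_category.setdefault(seg["category"], []).append(seg)
    sample = []
    for group in by_category.values():
        sample.extend(random.sample(group, min(per_category, len(group))))
    return sample

selected = stratified_sample(segments, per_category=2)
```

Sampling a fixed number of segments per category keeps the dataset balanced across road conditions.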
Once the target locations were identified, the duo used a combination of manual and automated data collection techniques. They physically visited each location, carefully inspecting the roads for potholes and recording their findings. This manual inspection allowed them to capture important details about the potholes, such as size, depth, and location on the road. They also took photographs of each pothole to provide visual data for their model.
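Because the photographs ultimately feed a TensorFlow model, a minimal sketch of loading them into a training pipeline might look like the following. The directory layout, image size, and label scheme are assumptions for illustration, not details given by the authors:

```python
import tensorflow as tf

# Assumes photographs are sorted into class subfolders, e.g.
#   photos/pothole/*.jpg and photos/no_pothole/*.jpg
# (a hypothetical layout; the authors' actual storage is not described).
dataset = tf.keras.utils.image_dataset_from_directory(
    "photos",
    image_size=(224, 224),  # resize all photos to a common input shape
    batch_size=32,
    label_mode="binary",    # pothole vs. no pothole
)

# Scale pixel values to [0, 1] before feeding the model.
dataset = dataset.map(lambda images, labels: (images / 255.0, labels))
```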
In addition to manual inspection, Alejandra and Ericson used automated techniques to scale up data collection. They equipped vehicles with sensors and cameras to capture real-time data while driving the selected roads; the sensors recorded parameters such as vibration, acceleration, and GPS coordinates. By correlating this sensor data with the manually collected observations, they were able to build a more comprehensive dataset.
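One plausible way to correlate the time-stamped sensor readings with the manual observations is a nearest-timestamp join, sketched below with pandas. All column names and values are hypothetical:

```python
import pandas as pd

# Hypothetical sensor log: readings ordered by time.
sensor = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-05-01 10:00:01", "2023-05-01 10:00:05"]),
    "vibration": [0.8, 2.3],
    "lat": [34.0522, 34.0525],
    "lon": [-118.2437, -118.2440],
}).sort_values("timestamp")

# Hypothetical manual annotations recorded during the same drive.
manual = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-05-01 10:00:04"]),
    "pothole_depth_cm": [6.5],
}).sort_values("timestamp")

# Attach each manual observation to the nearest sensor reading in time,
# within a 2-second tolerance.
merged = pd.merge_asof(
    manual, sensor, on="timestamp",
    direction="nearest", tolerance=pd.Timedelta("2s"),
)
```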
To further enrich their dataset, Alejandra and Ericson collaborated with the Los Angeles Department of Transportation (LADOT). The LADOT provided historical data on road conditions, maintenance records, and previous pothole repairs. This additional information allowed them to incorporate the temporal aspect of pothole occurrence and analyze the effectiveness of past repairs.
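Merging LADOT's historical records into the field data could be done on a shared key such as a road-segment identifier. The schema below is purely illustrative, since the actual LADOT data format is not described:

```python
import pandas as pd

# Hypothetical field observations and LADOT maintenance history.
field_data = pd.DataFrame({
    "segment_id": ["seg-001", "seg-002"],
    "pothole_count": [4, 1],
})
ladot_history = pd.DataFrame({
    "segment_id": ["seg-001", "seg-002"],
    "last_repair_date": pd.to_datetime(["2022-11-15", "2021-06-02"]),
    "repairs_last_5y": [3, 1],
})

# Enrich field observations with maintenance history so the temporal
# aspect of pothole occurrence can be modeled.
enriched = field_data.merge(ladot_history, on="segment_id", how="left")
enriched["days_since_repair"] = (
    pd.Timestamp("2023-05-01") - enriched["last_repair_date"]
).dt.days
```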
To ensure the accuracy and reliability of their dataset, Alejandra and Ericson implemented a rigorous quality control process. They cross-checked the manually collected data against the sensor data to identify discrepancies and outliers; any inconsistencies were reviewed, and the affected records were corrected or excluded as necessary. This meticulous approach ensured that their dataset was of high quality and representative of actual road conditions in Los Angeles.
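A simple form of such a cross-check is to flag records where the manually measured severity and the sensor signal disagree strongly, for example by comparing standardized scores. The columns and the 1.5 threshold below are illustrative assumptions:

```python
import pandas as pd

# Hypothetical merged records (see the earlier join sketch).
data = pd.DataFrame({
    "pothole_depth_cm": [6.5, 3.0, 12.0, 5.5],
    "vibration": [2.3, 1.1, 0.2, 2.0],
})

# Standardize both measurements, then flag records where a deep pothole
# produced almost no vibration (or vice versa) for manual review.
depth_z = (
    data["pothole_depth_cm"] - data["pothole_depth_cm"].mean()
) / data["pothole_depth_cm"].std()
vib_z = (data["vibration"] - data["vibration"].mean()) / data["vibration"].std()
data["discrepancy"] = (depth_z - vib_z).abs() > 1.5

review_queue = data[data["discrepancy"]]
```

Records landing in the review queue would then be re-inspected and corrected or excluded, as described above.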
In summary, Alejandra Vasquez and Ericson Hernandez collected data for their machine learning model for identifying potholes on Los Angeles roads through a combination of manual inspection, sensor-based data collection, and collaboration with LADOT. Their systematic approach covered a variety of road types and conditions, yielding a diverse and comprehensive dataset, and their cross-checking and quality control process ensured its accuracy and reliability.