What specific vulnerabilities does the bag-of-words model present against adversarial attacks or data manipulation, and what practical countermeasures do you recommend implementing?
The bag-of-words (BoW) model is a foundational technique in natural language processing (NLP) that represents text as an unordered collection of words, disregarding grammar, word order, and, typically, word structure. Each document is converted into a vector based on word occurrence, often using either raw counts or term frequency-inverse document frequency (TF-IDF) values. Despite its simplicity and effectiveness, this representation is precisely what makes the model vulnerable: because a prediction depends only on which tokens appear and how often, an attacker can shift a document's vector across a decision boundary by injecting, deleting, or lightly obfuscating individual words, without changing what a human reader perceives. Practical countermeasures include character-level normalization before tokenization (to defeat spacing and homoglyph tricks), input-length and token-frequency sanity checks, adversarial training on token-level perturbations, and auditing training data provenance, since poisoned training text can skew the vocabulary and document frequencies that TF-IDF weights depend on.
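As a concrete illustration, here is a minimal sketch of the token-injection weakness, using a toy corpus with hypothetical labels and a standard scikit-learn pipeline; none of the data or results below come from a real system.

```python
# Minimal sketch (toy data, hypothetical labels) of a token-injection attack
# on a bag-of-words classifier: because BoW ignores word order and context,
# appending benign-looking tokens can shift a document's TF-IDF vector
# across the decision boundary.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training corpus (assumed labels: 1 = spam, 0 = ham).
docs = [
    "win free money now", "free prize claim now",    # spam
    "meeting agenda attached", "see you at lunch",   # ham
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(docs, labels)

attack = "win free money now"                  # matches a spam example
padded = attack + " meeting agenda lunch" * 5  # inject high-weight ham tokens

print(model.predict([attack]))  # [1] -- flagged as spam
print(model.predict([padded]))  # likely [0] -- padding dilutes the spam signal
```

Because TfidfVectorizer L2-normalizes each document vector, the repeated ham tokens shrink the relative weight of the spam tokens rather than merely sitting beside them, which is exactly the property that input-length checks and frequency sanity checks are meant to catch.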
How would you design a data poisoning attack on the Quick, Draw! dataset by inserting invisible or redundant vector strokes that a human would not detect, but that would systematically induce the model to confuse one class with another?
Designing a data poisoning attack on the Quick, Draw! dataset, specifically by inserting invisible or redundant vector strokes, requires a multifaceted understanding of how vector-based sketch data is represented, how convolutional and recurrent neural networks process such data, and how imperceptible modifications can manipulate a model’s decision boundaries without alerting human annotators or users. Understanding the data format is the natural starting point: each Quick, Draw! sample is stored as a sequence of strokes (lists of x and y coordinates, or pen-displacement triplets in the sketch-rnn encoding), so the attack operates on numeric stroke data that a model consumes directly, while a human reviewer only ever sees the rasterized drawing. Strokes that are degenerate (a single repeated point) or that retrace an existing line render to little or nothing on screen, yet they change the sequence a recurrent model processes; inserted consistently into samples of a source class that are mislabeled as the target class, such strokes can act as a backdoor trigger that systematically pushes the decision boundary toward confusing the two classes.
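The following minimal sketch makes that primitive concrete for research and defensive purposes. It assumes the simplified Quick, Draw! stroke format, in which a drawing is a list of strokes and each stroke is a pair of x and y coordinate lists on a 256x256 canvas; the function poison_drawing and its parameters are illustrative, not part of any dataset API.

```python
# Illustrative sketch of the poisoning primitive described above: appending
# degenerate strokes that rasterize to at most a single pixel, yet extend the
# numeric stroke sequence that a sequence model (e.g., an RNN over points)
# actually consumes.
import random

def poison_drawing(drawing, n_strokes=3, seed=0):
    """Return a copy of `drawing` with near-invisible strokes appended.

    Each added stroke is a single repeated point, so it draws at most one
    pixel when rendered, but still appears as a full stroke in the data.
    """
    rng = random.Random(seed)
    poisoned = list(drawing)  # shallow copy; the original drawing is untouched
    for _ in range(n_strokes):
        x, y = rng.randrange(256), rng.randrange(256)
        poisoned.append([[x, x], [y, y]])  # degenerate two-point stroke
    return poisoned

# Hypothetical "cat" doodle with one real stroke.
cat = [[[10, 50, 90], [40, 10, 40]]]
print(poison_drawing(cat))
```

A reviewer inspecting the rasterized doodle sees at most a few isolated dots, while the model receives extra strokes whose consistent placement can be learned as a trigger; the corresponding countermeasure is to canonicalize drawings before training, for example by dropping degenerate or duplicate strokes and re-rendering sketches from cleaned stroke data.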

