TY - GEN
T1 - ConTheModel
T2 - 1st Silicon Valley Cybersecurity Conference, SVCC 2020
AU - Ram Vinay, Aishwarya
AU - Alawami, Mohsen Ali
AU - Kim, Hyoungshick
N1 - Publisher Copyright:
© 2021, Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
N2 - News on social media can significantly influence users, manipulating them for political or economic reasons. Adversarial manipulations of text have proven to create vulnerabilities in classifiers, and current research focuses on finding classifier models that are not susceptible to such manipulations. In this paper, we present a novel technique called ConTheModel, which slightly modifies social media news to confuse machine learning (ML)-based classifiers under the black-box setting. ConTheModel replaces a word in the original tweet with its synonym or antonym to generate tweets that confuse classifiers. We evaluate our technique on three different dataset scenarios and compare five well-known machine learning algorithms, namely Support Vector Machine (SVM), Naive Bayes (NB), Random Forest (RF), eXtreme Gradient Boosting (XGBoost), and Multilayer Perceptron (MLP), to demonstrate the performance of classifiers on the modifications made by ConTheModel. Our results show that the classifiers are confused after modification, with a maximum drop of 16.36%. We additionally conducted a human study with 25 participants to validate the effectiveness of ConTheModel and found that the majority of participants (65%) found it challenging to classify the modified tweets correctly. We hope our work will help in building ML models that are robust against adversarial examples.
AB - News on social media can significantly influence users, manipulating them for political or economic reasons. Adversarial manipulations of text have proven to create vulnerabilities in classifiers, and current research focuses on finding classifier models that are not susceptible to such manipulations. In this paper, we present a novel technique called ConTheModel, which slightly modifies social media news to confuse machine learning (ML)-based classifiers under the black-box setting. ConTheModel replaces a word in the original tweet with its synonym or antonym to generate tweets that confuse classifiers. We evaluate our technique on three different dataset scenarios and compare five well-known machine learning algorithms, namely Support Vector Machine (SVM), Naive Bayes (NB), Random Forest (RF), eXtreme Gradient Boosting (XGBoost), and Multilayer Perceptron (MLP), to demonstrate the performance of classifiers on the modifications made by ConTheModel. Our results show that the classifiers are confused after modification, with a maximum drop of 16.36%. We additionally conducted a human study with 25 participants to validate the effectiveness of ConTheModel and found that the majority of participants (65%) found it challenging to classify the modified tweets correctly. We hope our work will help in building ML models that are robust against adversarial examples.
KW - Adversarial examples
KW - Machine learning
KW - Social media
KW - Tweets
UR - https://www.scopus.com/pages/publications/85107400345
U2 - 10.1007/978-3-030-72725-3_15
DO - 10.1007/978-3-030-72725-3_15
M3 - Conference contribution
AN - SCOPUS:85107400345
SN - 9783030727246
T3 - Communications in Computer and Information Science
SP - 205
EP - 219
BT - Silicon Valley Cybersecurity Conference - First Conference, SVCC 2020, Revised Selected Papers
A2 - Park, Younghee
A2 - Jadav, Divyesh
A2 - Austin, Thomas
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 17 December 2020 through 19 December 2020
ER -