This is the first academic conference in NLP that I have attended and presented at. I experienced a lot, made some friends, and had a really good time in Minneapolis.
I learned more about the NLP field, including the different problems people are working on. Some novel things I heard about include using visual cues to improve coreference resolution accuracy (HKUST), fairness and bias in NLP and machine learning, question answering, discourse analysis and dialogue systems, and combining cognitive modeling with NLP.
Sadly, I noticed I have lost all the notes I took at the conference; I am left with only a few pictures I took during the keynote. They are almost all about gender bias and fairness in ML, so this is not going to be a complete recollection of my reflections and what I learned, but only part of it.
Some Inspiring Pics
This is an interesting picture, showing which words are and should be gender neutral, and which words are acceptable to be biased toward one gender.
This is a picture suggesting that language bias can differ across languages; that is to say, language itself can contribute to stereotypes, because there seems to be a difference in bias between different languages.
This is from when the keynote speaker was confronted with the question of whether the act of studying AAE (African American English) itself promotes bias (which would be ironic, because the research aims to reduce bias in ML systems). Why should there be a distinct "African American English" at all?
The answer he gave was that the distinction is purely linguistic, without implying that any language variety is inferior or superior to another.
This is a very refined and interesting experiment: instead of asking people to place words associated with a certain gender on a certain side, it measures reaction time and uses that as a measure of bias.
In the experiment, participants are asked to sort boy/girl names to the two sides of the screen, along with other items that are supposed to be gender neutral (such as addition, books, graph, story). The items are either about math or about stories.
The idea is that if people have gender bias, it will be easier and faster for them to categorize story-related words to the same side as girls' names, and math-related words to the same side as boys' names. The result seems to be that people do show a difference in reaction time, and that difference reflects gender bias.
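To make the measurement concrete, here is a minimal sketch of how such a reaction-time bias score could be computed. The reaction times below are entirely made up for illustration (they are not data from the talk), and the simple mean-difference score is just one possible way to summarize the effect:

```python
# Illustrative sketch of a reaction-time bias analysis.
# All numbers here are invented for demonstration only.

def mean(xs):
    """Arithmetic mean of a list of numbers."""
    return sum(xs) / len(xs)

# Reaction times in milliseconds for two sorting conditions:
# congruent   = girls' names and story words share a side (matches the stereotype)
# incongruent = girls' names and math words share a side (conflicts with it)
congruent_rt = [610, 580, 655, 600, 590]
incongruent_rt = [720, 690, 750, 705, 680]

# A positive difference means the stereotype-consistent pairing was faster,
# which is interpreted as evidence of implicit bias.
bias_ms = mean(incongruent_rt) - mean(congruent_rt)
print(f"mean congruent RT:   {mean(congruent_rt):.1f} ms")
print(f"mean incongruent RT: {mean(incongruent_rt):.1f} ms")
print(f"bias (RT difference): {bias_ms:.1f} ms")
```

In real studies of this kind the score is usually normalized (for example, divided by the standard deviation of all reaction times) so that scores are comparable across participants, but the core idea is the same: a systematic speed advantage for the stereotype-consistent pairing.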
Another interesting experiment concerned the smiling-woman stereotype: people who smile are more likely to be assumed to be female.
I presented at the DISRPT workshop on our work applying Rhetorical Structure Theory (RST) to student essays in order to give structural feedback.
The people in the DISRPT community are nice and do great work. Prof. Amir Zeldes, along with his two graduate students Siyao Peng and Yang Liu, has devoted a lot of effort to making RST Web a better tool for researchers, and they keep adding features to it every year. Shujun Wan and Tino developed a tool that calculates agreement between two annotators' RST trees, building on Iruskieta's methodology. That is really useful: when I annotated with Shiyan, it was very hard for us to calculate agreement, so we had to build consensus after every passage and never really entered an independent annotation stage. Shujun said that to further improve the tool, a GUI is necessary; I look forward to seeing that!
After my talk, I spoke with Xinhao from ETS, who is doing similar work applying RST to speech for automatic scoring. That seems promising, because with the amount of data ETS can get hold of, their model is likely to achieve state-of-the-art accuracy. I was also happy that Mikel Iruskieta expressed interest in the intelligent tutoring system we built; he would like to adapt it to Basque and Spanish. It is nice to hear that people are inspired by your work.
Other interesting topics
There are a lot more interesting topics that I got to understand a bit better through NAACL. My mentor at RIT does research related to accessibility; specifically, he studies how to help deaf workers have meetings with hearing people. While regular automatic translation can achieve somewhat high accuracy, some mistakes are more costly than others, such as errors in important details (the date or time of a meeting). Ashutosh Adhikari (from the University of Waterloo) presented on how simpler models can beat complex ones. Wei Yang and Yuqing Xie focus on QA topics in NLP. Conversations with these people were very inspiring and eye-opening.
The conference was very rewarding, and I look forward to the next one :)