Last updated on May 22, 2020
Following up on a previous blog post, I’ve finished grading the other assignments for my Spring 2020 AI course at FAU, including two so-called “floating assignments,” whose topics can be picked by the students as they are mentioned (i.e., as they happen to “float around”) in a lecture.
Additionally, just as before, students could choose to focus on ‘breadth’ (a summary of an entire book/course or some of its parts/chapters) or ‘depth’ (choosing a topic, say, genetic algorithms, and exploring it more deeply) and were encouraged to stay away from the ‘essay’ / ‘scientific paper’ format and try other formats, such as PPT, videos, and websites.
The results were, once again, amazing!
Here are some examples (in no particular order):
- Several students took Andrew Ng’s “AI for Everyone” course on Coursera and summarized its contents in the form of:
- A LinkedIn article by Adam Corbin (with companion notes on GitHub).
- Slide decks, such as the ones by Hani Alnami and Divya Gangwani [PDFs available upon request].
- Blog posts, such as the ones by Kevin Anderson, Cameron Hernandez, and Zuber Najam.
- Websites, such as the ones by Lars Koester and Vinh Huynh.
- Many students summarized (a selected portion of) the contents of Pedro Domingos’s book, “The Master Algorithm,” in the form of:
- A LinkedIn article by Adam Corbin
- Slide decks, such as the ones by Hani Alnami, Nicole Pérez Rivera, Jonathan Yataco, and Vinay Harrichan [PDFs available upon request].
- A video by Marco Tacca and another video, with emphasis on autonomous vehicles, by Juan David Yepe.
- A Medium post by David Wilson.
- Blog posts, such as the ones by Kevin Anderson and Cameron Hernandez.
- Websites, such as the one by Lars Koester, Tyler Smith, and Vinh Huynh, and the one by Joe Lenihan.
- And there was more, much, much more…
- Vinay Harrichan created a slide deck on Artificial Intelligence and COVID-19.
- Aleem Sultan prepared a well-researched slide deck on the Turing Test [PDF available upon request].
- Vinh Huynh and Lars Koester created two separate (excellent) websites on self-driving vehicles.
- Emily Stark and Michael Teti proposed a combination of ideas from multiple “tribes” to solve a machine learning problem, which they affectionately called “Master Algorithm Jr.” [PDF available upon request]
- Michael Keller, Yuri Villanueva, and Henry Herzfeld implemented five different solutions to the spam classification problem, one for each of the “tribes” of machine learning described by Domingos: Symbolists (decision tree), Connectionists (neural net), Bayesians (naive Bayes classifier), Analogizers (SVM), and Evolutionaries (decision tree optimized with a genetic algorithm). They used Python, standard machine learning libraries, and the Apache SpamAssassin dataset.
- Avathar Bhola created a series of blog posts dedicated to AI topics.
- Nicole Perez-Rivera created a Prezi on self-driving vehicles.
- Priya Sigler expanded her blog with insightful entries on the interview with Pamela McCorduck on the “AI Podcast” hosted by Lex Fridman and a summary of Pedro Domingos’s book, “The Master Algorithm”.
- Emily Stark produced an excellent narrated PPT on “Bayesian Inference” [mp4 version]
- Marco Tacca produced a great YouTube video on “7 random issues about autonomous vehicles that you may not have thought about” (viewer discretion advised).
- Jenny Craig (LinkedIn / Facebook / Twitter: @JamieCraigMusic) leveraged her background in music and her ongoing PhD research on self-driving cars to produce additional episodes for her podcast on autonomous vehicles.
- Maggie Elkin and Justin Johnson prepared excellent (separate) summaries of the joint keynote at AAAI 2020 by the “godfathers” of deep learning: Geoff Hinton, Yann LeCun, and Yoshua Bengio [PDFs available upon request].
- Maggie Elkin created an incredibly rich PPT on “Evolution: Nature’s Learning Algorithm” [PDF available upon request] and wrote Python code to generate a Shakespeare sonnet using a Genetic Algorithm.
- Michael Teti wrote Python code (available on Google Colab) to help teach the basics of genetic algorithms (GAs) and showed an example of how to use GAs to reproduce images, starting with randomly generated ones and evolving the pixel values.
Here is the algorithm in action, re-creating the iconic cameraman test image (partial result after 912,000 generations):
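For readers curious how that pixel-level evolution works, here is a minimal sketch of the general idea (my own simplified illustration, not Michael’s actual Colab code): start from random pixels, repeatedly mutate a few of them, and keep a mutant only when it is closer to the target image, with mean squared error serving as the fitness measure.

```python
import numpy as np

def evolve_image(target, generations=200, pop_size=20, mutation_rate=0.02, rng=None):
    """Evolve a random image toward `target` (a 2-D float array in [0, 1])
    by mutation and selection -- a simplified sketch of the technique."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best = rng.random(target.shape)            # start from randomly generated pixels
    best_err = np.mean((best - target) ** 2)   # fitness: mean squared error to target
    for _ in range(generations):
        for _ in range(pop_size):              # a small "population" of mutants per generation
            child = best.copy()
            mask = rng.random(target.shape) < mutation_rate  # choose pixels to mutate
            child[mask] = rng.random(int(mask.sum()))        # re-randomize those pixels
            err = np.mean((child - target) ** 2)
            if err < best_err:                 # selection: keep only improvements
                best, best_err = child, err
    return best, best_err
```

This pure-Python loop is slow on real images, which helps explain why the cameraman reconstruction above took hundreds of thousands of generations; a practical version would vectorize the population or use crossover between multiple surviving candidates.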
I am extremely proud of my students’ work and wish them lots of success in their lives and careers!
Featured image: photo by Clark Tibbs on Unsplash