Perceiver for Cardiac Video Data Classification (AlphaCare: Episode 2)

DeepMind recently released a new type of Transformer called the Perceiver IO, which achieved state-of-the-art accuracy across multiple data types (text, images, point clouds, and more). In this episode of the AlphaCare series, I'll explain how the Perceiver works and how we used it to improve accuracy on cardiac video data. The EchoNet dataset, recently made public by Stanford University, contains 10K de-identified heart videos from patients. We'll also discuss why Transformer networks work so well, and how, by using two key features (cross-attention and positional embeddings), the Perceiver improved on all variants of the Transformer. Get hype!
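The core trick behind the Perceiver is that a small, learned latent array queries the (possibly huge) input array via cross-attention, so compute scales linearly with input size instead of quadratically. Here is a minimal NumPy sketch of that idea; it is not the actual Perceiver implementation, and the projection matrices are random stand-ins for learned parameters:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(latents, inputs):
    """One cross-attention step: latents (n_latents, d) attend over inputs (n_inputs, d)."""
    d = latents.shape[-1]
    # Hypothetical random projections; a real Perceiver learns these weights.
    rng = np.random.default_rng(0)
    W_q, W_k, W_v = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q = latents @ W_q   # queries come from the small latent array
    K = inputs @ W_k    # keys and values come from the large input array
    V = inputs @ W_v
    scores = Q @ K.T / np.sqrt(d)   # (n_latents, n_inputs): linear in input size
    return softmax(scores) @ V      # (n_latents, d): output stays latent-sized

# An 8-slot latent array summarizes 1000 input tokens of width 16.
out = cross_attention(np.ones((8, 16)), np.ones((1000, 16)))
print(out.shape)  # (8, 16)
```

Because the output keeps the latent array's small shape regardless of how many input tokens there are, subsequent self-attention layers operate on the latents cheaply. Positional embeddings (Fourier features in the original paper) are concatenated to the inputs beforehand so the model knows where each pixel or frame came from.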

About the instructor
Siraj Raval

I'm a technologist on a mission to spread data literacy. I simplify topics like artificial intelligence, mathematics, science, and technology to help you understand how they work. Using this knowledge, you can build wealth and live a happier, more meaningful life. I live to serve this community. We are the fastest-growing AI community in the world!
