OpenCV AI Competition 2021 Highlights and Team Profiles Part 1

Phil Nelson | April 21, 2021 | 0 Comments | Competition | Tags: accessibility, autonomous vehicles, competition, oak-d, oak2021, pose estimation, robotics, spatial AI, unity


If social media is any indication, OpenCV AI Competition 2021 participants have really hit the ground running. The competition, sponsored by Microsoft Azure and Intel, features over 1000 developers on over 200 Phase 1 winning teams worldwide, and our feeds are already buzzing with awesome videos and photos as teams take their first steps toward the Global Grand Prize award of $20,000.

In this post we’re featuring some of the teams posting cool stuff online using the #OAK2021 hashtag, with a short Q&A with each. Today you’ll meet Charlie the super Lego robot, learn how to control an airplane with your face, and more. Thanks for taking the time to chat with us, Egypt Iris, Kauda, and Cortic Tigers!

Want your team to be featured here? Post content with the #OAK2021 hashtag! We’ll definitely see it and so will thousands of competition watchers all over the world.

Some Highlights From The #OAK2021 Hashtag


Shout-out to everyone who has posted recently. Here are some of our favorites from the beginning of the competition. Your updates have been an inspiration to your fellow competitors and to those of us running the competition too!

This is just a short list of our faves. We encourage everybody to follow the hashtag on Twitter, LinkedIn, YouTube, Facebook, and Instagram.

Team Profile: Cortic Tigers


Ye, Jane, and Michael from Cortic Tigers have been posting all sorts of great videos to Twitter, Insta, and LinkedIn, showing their smart robot Charlie learning new skills with a lot of personality. On their social media feeds you can find videos showing the little bot learning how to navigate, recognize faces, understand American Sign Language, and more.

What is your project?

AI is powering the world.  As a result, there is a huge demand for AI literate talents in the workforce.   However, learning AI can be intimidating and expensive for beginners.  At Cortic Technology, we want to leverage our expertise in edge computing to reduce the cost and effort required to learn AI.

We think the most effective way to learn AI is to build an AI system yourself to solve problems you care about.  Our goal is to facilitate this type of hands-on learning by enabling a large variety of state-of-the-art AI algorithms to run efficiently on inexpensive edge devices like the Raspberry Pi or the OAK-D.  Our software supports computer vision, speech recognition, natural language processing, speech generation, LEGO motor control, and smart home control in a single programming environment.  We built a rapid prototyping environment that uses visual block-based programming to encourage beginners to experiment with AI from day one.  Our system can also translate visual programs into Jupyter Notebook format or native Python code when people acquire more experience and want to take full advantage of Python’s power and flexibility.  Our project is fully open-sourced, and we encourage everyone to give it a try. You can access it at https://github.com/cortictechnology/cait.
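To make the idea of translating visual programs into native Python more concrete, here is a minimal toy sketch of how a list of visual blocks might be turned into Python source. The block names, parameters, and templates below are hypothetical illustrations for this post, not CAIT’s actual internal format; see the project repository for the real implementation.

```python
# Toy sketch: translating a visual block program into Python source.
# Block types and templates here are hypothetical, not CAIT's real format.

BLOCK_TEMPLATES = {
    "detect_faces": "faces = camera.detect_faces()",
    "say": 'speaker.say("{text}")',
    "move_forward": "robot.move_forward(distance={distance})",
}

def translate(blocks):
    """Turn a list of visual blocks (dicts) into lines of Python source."""
    lines = []
    for block in blocks:
        template = BLOCK_TEMPLATES[block["type"]]
        # Fill the template with the block's parameters, if any.
        lines.append(template.format(**block.get("params", {})))
    return "\n".join(lines)

# A small example "visual program": three blocks snapped together.
program = [
    {"type": "detect_faces"},
    {"type": "say", "params": {"text": "Hello!"}},
    {"type": "move_forward", "params": {"distance": 0.5}},
]

print(translate(program))
```

A real system would also handle nested blocks (loops, conditionals) and could emit the same lines into Jupyter Notebook cells instead of a plain script, which matches the learning path described above: start with blocks, then graduate to the generated Python.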

What is your team “origin story?”

All cofounders of Cortic Technology (Ye, Jane, and Michael) met while working for a previous AI startup in Vancouver. Ye and Jane both have teenage children. Because of the increasing popularity of AI, our kids became very curious about the things we do at work and wanted to know more about the technology. The funny thing is that when we started to answer their questions and talk about basic algorithmic concepts, they became bored right away. We quickly found out that they would rather spend the time building AI projects with us than discussing the theory behind how things work. Another big surprise for us was that even though we spent almost no time teaching them the basic theory, they gained a great understanding of it just by doing these hands-on projects. They were also very motivated to look things up on the Internet when they needed help with anything. We began focusing much more on hands-on learning after observing this behaviour. We had tons of fun building all kinds of projects with our kids. Cortic Technology was born with the intent of giving other users the freedom to build on our work and rapidly prototype using this remarkable technology to solve interesting problems.

How did you decide what problem to solve?

We noticed that there are lots of people interested in learning AI, but there is just too much friction (cost, time investment, steep learning curve, etc.) for many of them to even get started.  We see a lot of value in removing this friction so that more people can learn and contribute to this burgeoning field.  

What is the most exciting part of #OAK2021 to you?

The most exciting part about #OAK2021 is the possibility that the OAK-D device brings. The #OAK2021 competition brought together over 250 teams worldwide to use the OAK-D device to solve different problems. We are already starting to see many creative uses of this device, such as using it to build a plugin for the Unreal game engine for character modelling, literally building a cow tracker, not to mention all the projects that use it to help those among us who are physically challenged.

What do you think / feel upon learning you were selected for Phase 2?

We were definitely humbled by the fact that we made it to Phase 2.  There are so many exciting ideas and capable teams from over 1400 submissions in Phase 1.  We are very fortunate to be able to move into the next phase of the competition and have a chance to showcase how we will make the OAK-D device a core part of our solution.

What, if anything, has surprised you so far about the competition?

We are constantly surprised by the creativity and technical prowess of everyone involved in this competition.  The DepthAI discord channel is our go-to place when we have complex technical issues that we need to solve.  The speed at which Brandon and his team at Luxonis answer these questions is just mind boggling, considering the number of new questions that show up every day.  We are very grateful for all the help we received from Luxonis and our fellow competitors.

Do you have any words for your fellow competitors?

We want to tell our fellow competitors to keep their creative juices flowing.  We are very eager to see all the final submissions in July.  We look forward to learning from each and every one of our fellow competitors.  

Where should readers follow you, to best keep up with your progress?

The best way to follow our progress through the competition is definitely on either Twitter (https://twitter.com/CorticTechnolo1) or LinkedIn (https://www.linkedin.com/company/cortic/).

Team Profile: Kauda


Teammates Giovanni Lerda and Gerard Espona Fiedler met due to Giovanni’s work on the Kauda robotic arm last year. Gerard brings the software power and Giovanni is the team’s hardware expert. We loved seeing their LinkedIn post showing a DepthAI plugin for the Unity game engine that lets you control a plane with your face.

What is your project? 

Our project is about enabling computer vision tasks on the Kauda robotic arm. Kauda is an open-source, low-cost, 3D-printable, desktop-size 5-axis robot arm designed and developed by Giovanni Lerda. It’s the perfect platform for anyone interested in robotics, especially in the era of Industry 4.0, where computer vision on anthropomorphic robots is increasingly in demand, and it’s ideal for research and development using computer vision, 3D depth, and AI in general. Our goal is to mount an OAK-D camera and enable complex robotic tasks using Spatial AI.

What is your team “origin story?”

Everything started when Giovanni released the Kauda robotic arm last year on Instructables (https://www.instructables.com/KAUDA-Robotic-Arm/) and on his own website (https://lerdagiovanni.wixsite.com/kauda). When I saw the project I quickly knew it would be the perfect match for researching and developing robotics using computer vision, deep learning, and AI in general. I contacted him and we teamed up to improve Kauda. Giovanni, a talented industrial developer, is focused on improving Kauda’s mechanical and electronic design, while I’m focusing on software development, especially the digital twin inside Unity that allows us to program more complex tasks using computer vision and AI. So we’re different but complementary profiles.

How did you decide what problem to solve?

The Kauda robotic arm is a great platform, but some more advanced tasks are only possible by enabling spatial AI, e.g. detecting and picking objects dynamically, or being aware of the surrounding environment to enable collaborative tasks.

What is the most exciting part of #OAK2021 to you?

We’re thrilled to compete with such high-quality teams, and for us the contest is a perfect opportunity to take Kauda to the next level.

What do you think / feel upon learning you were selected for Phase 2?

Well, honestly, we were aware of the high number and high quality of entries, so we invested quite a lot of time preparing the submission, and it was a shock to learn in the end that we were selected. We’re really proud of that.

What, if anything, has surprised you so far about the competition?

Seeing teams from around the globe with such good ideas for solving real-world problems thanks to Spatial AI and edge AI devices like the OAK-D.

Do you have any words for your fellow competitors?

We wish the best to all the finalists. We know each team is working very hard to complete their own proposal, and we know the quality bar is very high. Of course, we’re keeping some surprises for our submission in July.

Where should readers follow you, to best keep up with your progress? (Twitter, LinkedIn, etc)

Right now LinkedIn is probably the best place; there you’ll also find my personal website, Twitter, and Instagram.

https://www.linkedin.com/in/gerardespona/

Team Profile: Egypt Iris


Rowan Hisham, Safynaz Tarek, Mahmoud Elkarargy, and Aya Elghannam of Egypt Iris met at Alexandria University. Together they have worked on many projects in the past, including experience with Azure Kinect, IBM Cloud Services, and remotely operated vehicles. Safynaz’s “first steps” post is my personal favorite. Like last year’s Grand Prize winner, Jagadish Mahendran, their project seeks to improve the lives of those with visual impairments.

What is your project? Briefly describe your problem statement and proposed solution.

Our project, Egypt Iris, aims to utilize the power of assistive technology so that it can be used by the visually impaired to help promote independence and autonomy, both for the person and those around them. We aim to create an assistive device that can act as a second eye for the user. The device leverages the power of edge AI to offer multiple features the user needs in their everyday life.

What is your team “origin story?”

Our shared passion for robotics was the reason our team members met back in 2019 after joining our university’s robotics team. We joined multiple competitions where we were challenged to use image processing techniques, which introduced us to the OpenCV library that sparked our interest in computer vision and AI. That’s why getting this chance to compete in a competition organized by OpenCV itself was incredibly gratifying. 

How did you decide what problem to solve?

Being undergraduate students ourselves, we see the struggle of our visually impaired colleagues in their everyday life, from taking notes and answering exams to picking the color of what they want to wear, so we decided to focus on a case study that follows a visually impaired student and how our device can be a second eye to them throughout the day.

What is the most exciting part of OAK2021 to you?

The most exciting part was receiving an OAK-D for each member of our team. We are grateful for the chance to get our hands on a piece of tech we wouldn’t normally have access to.

What do you think/feel upon learning you were selected for phase 2?

It felt incredible. Especially in such a large-scale competition, it’s very exciting that we got a chance to develop our idea. It seemed impossible at the beginning, but then… it’s not.

What, if anything, has surprised you so far about the competition?

The diversity of teams and ideas. It’s great to get a chance to compete with 250+ teams from all around the globe. We are looking forward to learning more about each team’s project and what problem they’re aiming to solve.

Do you have any words for your fellow competitors?

Focus on learning and enjoy the process. Competitions like this one are a great catalyst for innovation; it’s not just about winning, but about experimenting with new technologies and learning.

Where should readers follow you to best keep up with your progress?

We plan to post our progress regularly on our LinkedIn profiles.

More To Come

Thanks for reading this first entry in our series of team profiles. These are just a few of the over 200 teams competing in this huge competition, and we wish them all the very best of luck! If you’re an AI creator who wants to join in on the cool stuff, why not buy yourself an OAK-D from The OpenCV Store, or from Mouser?

Stay tuned for more profiles, and a steady stream of awesome stuff from these amazing teams. Don’t forget to sign up for the OpenCV Newsletter to be notified when new posts go live, and get exclusive discounts and offers from our partners.
