Last week our co-founder Ulrik Stig Hansen pitched on the main stage at The GovTech Summit 2022 in The Hague about data-centric #computervision in front of government leaders and some of the world's most innovative corporations! 🚀 It was an honor to be selected as one of the finalists alongside five other groundbreaking #startups (👋🏽 Tucuvi, Citibeats, FACIL'iti, NeuralSpace and Topolytics). Thank you PUBLIC and The GovTech Summit for organizing a great event!
-
💥 Vision-Language Models are here! ⬇️ Join the team to learn how VLMs like Google's Gemini are being used by AI teams to turbocharge their data pipelines.
-
⚡️ Produce more accurate models with multi-layered ontologies! 👇 Nest up to 7 layers in your ontology to ensure your data's full complexity is captured. 💬 Watch the video below to see how the "person" object is nested with multiple options conditioned upon the person's position and movements.
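For a rough idea of what conditional nesting looks like under the hood, here's a minimal sketch of a nested ontology as a Python dict. The field names and options are illustrative, not Encord's actual schema:

```python
# Hypothetical nested ontology: each selected option can expose its own
# child attributes, so annotators only see questions relevant to their
# earlier answers. Nesting like this can go up to 7 layers deep.
ontology = {
    "object": "person",
    "attributes": [
        {
            "name": "position",          # layer 1
            "options": {
                "standing": {
                    "attributes": [      # layer 2, shown only if "standing"
                        {"name": "movement", "options": ["walking", "running", "stationary"]},
                    ],
                },
                "sitting": {
                    "attributes": [      # layer 2, shown only if "sitting"
                        {"name": "surface", "options": ["chair", "ground", "vehicle"]},
                    ],
                },
            },
        },
    ],
}
```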
-
⚡️ Label dozens of video frames in seconds! 👇 Watch below to see polygon labels applied using SAM, and then Encord's polygon Auto-segmentation Tracking tool automating those labels across 45 frames. ➡️ To learn more, see the docs in the comments.
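The tracking itself is built into the platform, but for a sense of the first step — turning a single click into a polygon label with SAM — here's a minimal sketch using Meta's segment-anything package (checkpoint path and simplification epsilon are placeholders):

```python
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM checkpoint (variant and path are placeholders).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

def click_to_polygon(image_rgb: np.ndarray, x: int, y: int) -> np.ndarray:
    """Turn a single foreground click into polygon vertices of shape (N, 2)."""
    predictor.set_image(image_rgb)  # HxWx3 uint8 RGB array
    masks, _, _ = predictor.predict(
        point_coords=np.array([[x, y]]),
        point_labels=np.array([1]),  # 1 marks a foreground click
        multimask_output=False,
    )
    mask = masks[0].astype(np.uint8)
    # Trace the mask boundary and simplify it into polygon vertices.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.empty((0, 2), dtype=np.int32)
    largest = max(contours, key=cv2.contourArea)
    return cv2.approxPolyDP(largest, epsilon=2.0, closed=True).squeeze(1)
```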
-
💥 Big month for us at Encord! 🤝 We’re very excited to welcome Simon Barnett, Parvathi Bimal, Thanh Nguyen, Mustakeem Malik, Fred E., Clinton Wee and Kevin McKeever to the team 🚀
-
❌ Struggling to find failure modes in your models? ✅ Use Encord's model-testing tooling to detect the metrics adversely impacting your model's performance. ⬇️ Watch below for an example of detecting 'border proximity' as a failure mode and adding the relevant frames to a collection.
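As a concrete illustration of what a metric like 'border proximity' measures — our own sketch; the threshold and data layout are hypothetical, not Encord's implementation:

```python
def border_proximity(box, img_w, img_h):
    """Normalized distance from a box (x1, y1, x2, y2) to the nearest image edge."""
    x1, y1, x2, y2 = box
    return min(x1, y1, img_w - x2, img_h - y2) / min(img_w, img_h)

# Toy predictions: frame_id -> (bbox, image_width, image_height).
predictions = {
    "frame_001": ((2, 40, 110, 200), 640, 480),    # hugs the left edge
    "frame_002": ((300, 150, 420, 310), 640, 480),
}

# Flag frames whose boxes sit close to the border (threshold is hypothetical);
# if the model consistently underperforms on these, that's a failure mode.
flagged = [fid for fid, (box, w, h) in predictions.items()
           if border_proximity(box, w, h) < 0.02]
print(flagged)  # -> ['frame_001']
```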
-
📹 Check out Frederik Hvilshøj’s (our ML Lead) video on how LLMs and vision interact to create Vision-Language Models. 👀 Watch this space for many more to come!
Excited to share my first video in a series about Vision Foundation Models (VFMs) like GPT Vision, Gemini, and open-source models like LLaVa and Monkey-Chat (see many more here https://lnkd.in/dT7SePFY). Each week I'll post a short explainer on topics around computer vision, deep learning, and all the exciting stuff happening in the multi-modal space. This week I explain how LLMs and vision interact to create VLMs. If you have any topics you'd like covered, let me know!
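For the curious, the common recipe here (popularized by LLaVA) is a frozen vision encoder whose patch features are projected into the LLM's embedding space, so images become "tokens" the LLM can attend to. A schematic PyTorch sketch — dimensions and module wiring are illustrative, assuming an HF-style decoder that accepts inputs_embeds:

```python
import torch
import torch.nn as nn

class TinyVLM(nn.Module):
    """Schematic of the LLaVA-style recipe: vision encoder -> projector -> LLM."""

    def __init__(self, vision_encoder: nn.Module, llm: nn.Module,
                 vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.vision_encoder = vision_encoder  # e.g. a CLIP ViT, usually frozen
        self.projector = nn.Linear(vision_dim, llm_dim)  # maps patches into token space
        self.llm = llm  # a decoder-only language model

    def forward(self, pixel_values: torch.Tensor, text_embeds: torch.Tensor):
        patches = self.vision_encoder(pixel_values)  # (B, num_patches, vision_dim)
        image_tokens = self.projector(patches)       # (B, num_patches, llm_dim)
        # Prepend the projected image tokens to the text tokens and let the
        # LLM process the combined sequence exactly as it would plain text.
        inputs = torch.cat([image_tokens, text_embeds], dim=1)
        return self.llm(inputs_embeds=inputs)
```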
-
⚡️ Results from fine-tuning a Vision-Language Model on geospatial data. ⬇️ See the difference between the 2D embeddings from the general-purpose CLIP model and the model fine-tuned by Encord. ❓Which one would you rather use for finding the data you need? 💬 Find the full explainer in the comments below.
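To reproduce this kind of comparison on your own data: embed the images with CLIP and project to 2D. A minimal sketch — the model name, UMAP choice, and file paths are ours, not necessarily what Encord used:

```python
import torch
import umap
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_images(paths):
    """Return unit-normalized CLIP image embeddings, shape (N, D)."""
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

image_paths = ["img_001.jpg", "img_002.jpg"]  # placeholder dataset
# Project to 2D to eyeball cluster structure; a fine-tuned model should
# separate domain-specific classes more cleanly than the general-purpose one.
xy = umap.UMAP(n_components=2, metric="cosine").fit_transform(
    embed_images(image_paths).numpy()
)
```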
-
🤝 “We now have an integrated, one-stop solution where we can manage our data and also understand our model performance to create feedback mechanisms to improve data and models,” said Prajwal Kotamraju (Co-founder). 💡 Learn how Automotus increased mAP by 20% by reducing their dataset size by 35%. 👇 See the full case study in the comments below.
-
⏰ Save time by bulk-classifying hundreds of images in just a few clicks:
👉 Use 'Natural Language Search' to find the correct images
👉 Multi-select the relevant images
👉 Add to a collection
👉 Select the appropriate classification for the entire collection
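Under the hood, natural-language search over images is typically CLIP-style text-to-image similarity. A sketch of that ranking step, which takes unit-normalized image embeddings like the ones from the earlier sketch (model choice and k are ours):

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def search(query: str, image_feats: torch.Tensor, k: int = 100) -> torch.Tensor:
    """Rank unit-normalized image embeddings (N, D) against a text query."""
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    with torch.no_grad():
        text_feat = model.get_text_features(**inputs)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    scores = image_feats @ text_feat.T        # cosine similarity, shape (N, 1)
    k = min(k, image_feats.shape[0])
    # Indices of the best matches: multi-select these, add them to a
    # collection, and classify the whole collection in one action.
    return scores.squeeze(1).topk(k).indices
```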