Miaomiao Liu on LinkedIn: #eccv2024 (2024)

Miaomiao Liu

Senior Lecturer at The Australian National University


We are very excited to announce that we will be organising the 1st Workshop on Scalable 3D Scene Generation and Geometric Scene Understanding in conjunction with #ECCV2024 in Milano, Italy!

We invite paper submissions. Topics may include, but are not limited to:
- Scalable large-scale 3D scene generation
- Efficient 3D representation learning for large-scale 3D scene reconstruction
- Learning compositional structure of 3D scenes; scalable 3D object-centric learning
- 3D reconstruction and generation for dynamic scenes (with humans and/or rigid objects such as cars)
- Online learning for scalable 3D scene reconstruction
- Foundation models for 3D geometric scene understanding
- 3D reconstruction and generation for AR/VR/robotics, etc.
- Datasets for large-scale scene reconstruction and generation (with moving objects)

The deadline is July 7th, 2024. We have a great lineup of speakers; please check our workshop website (https://s3dsgr.github.io/) for more details!

Hope to see you in Milano! José M. Álvarez, Hongdong Li, Richard Hartley, Mathieu Salzmann


Weijian Deng

PhD in Computer Science | Research Fellow@ANU | Predicting Model Generalization | 3D content Modeling & Generation | Building a Machine That Can See and Generalize

3mo


Worth attending


Junlin Han

Researcher, computer vision & machine learning

3mo


Wow super cool and timely topic! We really need to focus a bit more on 3D scene generation!


More Relevant Posts

  • Soumyadeep Sahu

    CPO, Matterize | Generalist


This is useful, finally.


  • Common Sense Machines



Topology and adaptive level-of-detail (LOD) have been unsolved problems and critical roadblocks to full-scale studio/enterprise adoption of 3D GenAI. A new era in 3D generative modeling is taking shape, beyond SDS-based NeRFs/Gaussian splats, SDFs, DMTet, Flexicubes, etc. Here is an early sneak peek. We want to go deep with a small group of partners on this. You know what to do: hello@csm.ai


  • es/iode



📃 Scientific paper: Planner3D: LLM-enhanced graph prior meets 3D indoor scene explicit regularization

Abstract: Compositional 3D scene synthesis has diverse applications across a spectrum of industries such as robotics, films, and video games, as it closely mirrors the complexity of real-world multi-object environments. Conventional works typically employ shape-retrieval-based frameworks, which naturally suffer from limited shape diversity. Recent progress has been made in object shape generation with generative models such as diffusion models, which increases shape fidelity. However, these approaches treat 3D shape generation and layout generation separately. The synthesized scenes are usually hampered by layout collision, which suggests that scene-level fidelity is still under-explored. In this paper, we aim at generating realistic and reasonable 3D indoor scenes from scene graphs. To enrich the priors of the given scene graph inputs, a large language model is utilized to aggregate global features with local node-wise and edge-wise features. With a unified graph encoder, graph features are extracted to guide joint layout-shape generation. Additional regularization is introduced to explicitly constrain the produced 3D layouts. Benchmarked on the SG-FRONT dataset, our method achieves better 3D scene synthesis, especially in terms of scene-level fidelity. The source code will be released after publication.

Comment: 16 pages, 10 figures

Continued on ES/IODE ➡️ https://etcse.fr/k9G

If you find this interesting, feel free to follow, comment, and share. We need your help to enhance our visibility so that our platform continues to serve you.
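As a rough, hypothetical illustration of the two ideas the abstract names (a graph encoder that mixes local node/edge features with a global prior, and an explicit regularizer on decoded layouts), here is a toy numpy sketch. All feature sizes, weights, and the single-round message-passing scheme are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene graph: features are random stand-ins for learned embeddings.
node_feat = rng.normal(size=(3, 8))      # per-object features
edges = [(0, 1), (1, 2)]                 # directed relations
edge_feat = rng.normal(size=(2, 8))      # relation embeddings
global_feat = rng.normal(size=8)         # e.g. LLM-derived scene context

W = 0.1 * rng.normal(size=(24, 8))       # shared message weights

def encode(node_feat, edges, edge_feat, global_feat):
    """One round of message passing: each target node aggregates a
    message built from (sender, edge, global) features, mixing local
    node/edge features with the global prior."""
    h = node_feat.copy()
    for (s, t), e in zip(edges, edge_feat):
        msg = np.concatenate([node_feat[s], e, global_feat]) @ W
        h[t] = h[t] + np.tanh(msg)
    return h

def overlap_penalty(boxes):
    """Explicit layout regularizer: total pairwise overlap area of
    axis-aligned 2D boxes (cx, cy, w, h); zero when no two overlap."""
    loss = 0.0
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            (cxi, cyi, wi, hi), (cxj, cyj, wj, hj) = boxes[i], boxes[j]
            ox = max(0.0, (wi + wj) / 2 - abs(cxi - cxj))
            oy = max(0.0, (hi + hj) / 2 - abs(cyi - cyj))
            loss += ox * oy
    return loss
```

Minimizing such a penalty during training is one simple way a model could be pushed away from the layout collisions the abstract describes.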


  • Martin K.

    Creative Technologist pushing the boundaries of generative AI, 3D, VFX, LLMOps, and spatial design


I have been pushing polygons for decades... and in recent years I've been doing extensive work with Gaussian splats. Now we are seeing a new evolution for 3D graphics with Exact Volumetric Ellipsoid Rendering (EVER). Google has just announced a groundbreaking advancement in real-time 3D rendering. This new method takes us beyond the limitations of current techniques like 3D Gaussian Splatting (3DGS).

Key points about EVER:
- Uses ray tracing and an ellipsoid-based scene representation
- Eliminates the "popping" artifacts common in 3DGS
- Provides true volumetric consistency similar to NeRF
- Supports complex optical effects like defocus blur and reflections
- Balances performance and quality through adaptive density control

While training times are slightly longer (1-2 hours), the improved fidelity in reconstructions makes it a game-changer for industries relying on high-quality 3D rendering. This development signals an exciting trend in refining radiance field representations. As we continue to push the boundaries of 3D reconstruction, EVER stands out as a significant leap forward.

🔗 Check out the project page for side-by-side comparisons of EVER against Gaussian splats: https://lnkd.in/gQu3fx5b

#3DRendering #ComputerGraphics #AI #TechInnovation #EVER
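To give a feel for why "exact" ellipsoid rendering avoids splatting's sort-order approximations, here is a minimal numpy sketch of the underlying idea: intersect a ray with each ellipsoid in closed form, then integrate a constant density per primitive analytically. This is a toy under simplifying assumptions (non-overlapping primitives, one ray), not Google's actual implementation:

```python
import numpy as np

def ray_ellipsoid_interval(o, d, center, axes, R):
    """Entry/exit distances t of the ray o + t*d through an ellipsoid
    given by center, semi-axis lengths `axes`, and rotation R. The ray
    is mapped into the frame where the ellipsoid is a unit sphere and
    the quadratic |o_l + t*d_l|^2 = 1 is solved exactly."""
    o_l = (R.T @ (o - center)) / axes
    d_l = (R.T @ d) / axes
    a = d_l @ d_l
    b = 2.0 * (o_l @ d_l)
    c = o_l @ o_l - 1.0
    disc = b * b - 4.0 * a * c
    if disc <= 0.0:
        return None  # ray misses (or grazes) the ellipsoid
    s = np.sqrt(disc)
    return ((-b - s) / (2 * a), (-b + s) / (2 * a))

def render_ray(o, d, primitives):
    """Front-to-back volume rendering along one ray. With a constant
    density inside each (assumed non-overlapping) ellipsoid, the
    transmittance over a segment has a closed form, so there is no
    per-ray sampling and no popping from approximate sorting."""
    events = []
    for p in primitives:
        hit = ray_ellipsoid_interval(o, d, p["center"], p["axes"], p["R"])
        if hit is not None:
            events.append((*hit, p))
    events.sort(key=lambda ev: ev[0])
    color, T = np.zeros(3), 1.0
    for t_in, t_out, p in events:
        length = max(t_out, 0.0) - max(t_in, 0.0)
        if length <= 0.0:
            continue
        alpha = 1.0 - np.exp(-p["density"] * length)  # exact integral
        color += T * alpha * p["color"]
        T *= 1.0 - alpha
    return color, T
```

For example, a ray fired straight through a dense red unit sphere comes back almost pure red with near-zero residual transmittance, while a ray that misses returns transmittance 1.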


  • Papers2Date



✨ 3D Visualization: Compressed Gaussian Splatting for Compact & High-Quality Scene Representations ✨

💡 Introduction: In the realm of 3D scene modeling, a groundbreaking approach dubbed Compressed Gaussian Splatting (CompGS) has emerged, offering a significant reduction in data size without sacrificing accuracy or rendering quality. This innovative method relies on compact Gaussian primitives and a hybrid structure to capture predictive relationships, optimizing the balance between storage efficiency and visual fidelity.

⚙️ Main Features: CompGS introduces a hybrid primitive structure that establishes predictive relationships among primitives, significantly enhancing compression efficacy. The rate-constrained optimization scheme is a game-changer, minimizing redundancies and offering advanced compact representations. CompGS not only achieves remarkable compactness but also maintains high-quality rendering, outperforming existing methods in benchmark comparisons.

📖 Case Study or Example: The efficacy of CompGS was put to the test on diverse datasets such as Tanks&Temples, Deep Blending, and Mip-NeRF 360. It consistently outperformed traditional methods, delivering strong PSNR, SSIM, and LPIPS scores while achieving impressive compression ratios, proving its potential for real-world applications in industries where 3D model storage and bandwidth are critical factors.

❤️ Importance and Benefits: CompGS is a significant leap forward in 3D scene representation. It addresses the pressing need for storage and bandwidth efficiency, unlocking the potential for more practical and widespread use of high-quality 3D models in various domains, from virtual reality to scientific visualization.

🚀 Future Directions: While CompGS has set a new standard for 3D scene compression, future research will focus on further optimizing the balance between bitrate and rendering quality, exploring integration with emerging technologies such as neural rendering, and expanding its application to a broader range of complex scenes.

📢 Call to Action: Dive deeper into the intricacies of Compressed Gaussian Splatting and join the conversation on the future of 3D scene representation. Explore the full research and engage with the community on GitHub, where the CompGS code is available for further innovation. Read the full paper here: https://lnkd.in/gjkSBXvC

#CompressedGaussianSplatting #3DModeling #DataCompression #RenderingQuality #InnovationIn3D #TechTrends
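The intuition behind "predictive relationships among primitives" can be shown with a toy numpy example: if many Gaussians cluster around an anchor, coding their residuals from the anchor costs far fewer bits than coding absolute coordinates. The data, quantization step, and the naive sign+magnitude code below are all illustrative assumptions, not CompGS's actual entropy model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cluster: one anchor primitive and 31 coupled primitives whose
# positions (hypothetical data) sit close to it.
anchor = rng.uniform(1.0, 2.0, size=3)
coupled = anchor + 0.01 * rng.normal(size=(31, 3))

step = 1e-3  # quantization step

# Direct coding: quantize absolute coordinates.
q_direct = np.round(coupled / step).astype(np.int64)
# Predictive coding: quantize only the residual from the anchor.
q_resid = np.round((coupled - anchor) / step).astype(np.int64)

def code_bits(symbols):
    """Cost of a naive sign+magnitude variable-length code, in bits."""
    return sum(1 + max(int(abs(v)).bit_length(), 1)
               for v in np.ravel(symbols))

# Residuals are small integers, so they cost far fewer bits, while the
# decoder still recovers each position to within half a step:
decoded = anchor + q_resid * step
```

A rate-constrained optimizer can then trade this bit cost against rendering quality, which is the balance the post describes.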


  • Yogesh Jadhav

    📈 10M+ Views | 🚀 Turning Data into Actionable Insights | 🤖 AI, ML & Analytics Expert | 🎥 Content Creator & YouTuber | 💻 Power Apps Innovator | 🖼️ NFTs Advocate | 💡 Tech & Innovation Visionary | 🔔 Follow for More


    🌟 Exciting news in the world of AI and 3D scene generation! A new diffusion model, IB-planes, has been introduced to perform fast, detailed reconstruction and generation of real-world 3D scenes using only 2D images. This innovative approach allows for efficient and accurate representation of large 3D scenes, supporting 3D reconstruction and generation in a unified architecture. The model shows superior results on generation, novel view synthesis, and 3D reconstruction, marking a significant advancement in the field of AI and computer vision. #AI #MachineLearning #ComputerVision #3DSceneGeneration #NeuralNetworks #Innovation


  • DopikAI



Introducing Meta 3D Gen, new text-to-3D research from AI researchers at Meta that enables text-to-3D generation with high-quality geometry and textures.

Research paper: https://go.fb.me/5qh895

Meta 3D Gen delivers text-to-mesh generation with high-quality geometry, texture, and PBR materials. It can generate high-quality 3D assets, with both high-resolution textures and material maps, end-to-end, producing results that are superior to previous state-of-the-art solutions, all at 3-10x the speed of previous work.


  • Rob Sloan

    Creative Technologist & CEO | ICVFX × NeRF × Digital Twins • Husband, Father, & Grad School Professor • @RobMakesMeta 🐦


👨🎨 Paint-it, an innovative venture from the University of Tuebingen, Max Planck Institute, POSTECH, and others, dabbles in the realm of 3D texturing. This tool synthesizes high-fidelity physically-based rendering (PBR) texture maps from just a text description, employing a novel deep convolutional texture map optimization. Paint-it's unique DC-PBR parameterization and Score-Distillation Sampling (SDS) approach make it a first in achieving remarkable-quality PBR texture maps within 15 minutes, streamlining the 3D texturing workflow and opening new doors for creativity and efficiency in digital content creation.

🔗 Discover their project page: https://lnkd.in/e399E3vp
📚 Dive into their research: https://lnkd.in/eHfVqU8c
💻 Explore the code on GitHub: https://lnkd.in/eXVX_fZf (coming soon)

For more cutting-edge AI and computer graphics insights ⤵
👉 Follow Orbis Tabula

#TextToTexture #PBR #3DTexturing


  • Saad Salman

    Let's talk AI...


Flex3D is a new framework developed by researchers from Meta and the University of Oxford. This new approach addresses the longstanding challenge of generating high-quality 3D content from text, single images, or sparse input views. Flex3D combines the best of multi-view diffusion models and a flexible reconstruction model to produce superior 3D objects with unprecedented precision.

Key Highlights:
- Flexible Input: Unlike previous methods, Flex3D can leverage an arbitrary number of high-quality input views for more diverse 3D reconstructions.
- Innovative Two-Stage Framework: Generates a large pool of candidate views and uses a quality-driven selection pipeline for the best 3D reconstructions.
- State-of-the-Art Results: Flex3D outperforms leading 3D generative models with a user-study win rate of over 92%.
- Real-Time Efficiency: FlexRM, the flexible reconstruction model, processes 3D Gaussians in under 0.5 seconds, making it incredibly fast.

Explore more and see the results: https://lnkd.in/dmZGBpyA

Team: Junlin Han, Jianyuan W., Andrea Vedaldi, Philip Torr, Filippos Kokkinos

#AIResearch #3DGeneration #DeepLearning #MetaAI #Flex3D #ComputerVision #MachineLearning
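The quality-driven selection step of such a two-stage pipeline can be sketched in a few lines. Everything here is a stand-in (random scores instead of a learned view assessor, arbitrary threshold and k); it only illustrates the filter-then-rank pattern the highlights describe:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 stand-in: a pool of generated candidate views, each with a
# quality score that would come from a learned assessor (random here).
num_candidates = 32
quality = rng.uniform(size=num_candidates)

def select_views(quality, k=8, min_quality=0.5):
    """Quality-driven selection: discard weak candidates, then keep
    the top-k indices (best first), giving the reconstructor a
    flexible number of good views instead of a fixed camera grid."""
    keep = np.flatnonzero(quality >= min_quality)
    order = keep[np.argsort(quality[keep])[::-1]]
    return order[:k]

chosen = select_views(quality)
```

The selected indices would then drive which candidate views are fed to the reconstruction model.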


  • Mentesenot Kibebew

    Software Engineer | Junior Data Scientist


Meta's new text-to-3D research is out, enabling high-quality 3D generation from text. It produces 3D models with detailed geometry, textures, and materials at 3-10 times the speed of previous work. Boundaries are being pushed, and I can't imagine how this will impact industries that involve 3D modeling.

Research paper ➡️ https://go.fb.me/c9g4x6

AI at Meta

