In this post, I want to provide a quick behind-the-scenes look at how the Lens works and how we leveraged state-of-the-art AI models with Snap Lens Studio and SnapML to create the cloning effect.

The first step of any AI/ML problem is to define the task. In this case, we wanted to look at an image, or a section of an image, and separate out the foreground object from the background. This task is generally known as saliency detection and is closely related to image segmentation (with two classes, foreground and background). Using our own expertise designing small, efficient neural networks, along with some inspiration from the impressive U²-Net model, we created a saliency model that produces high-quality segmentations while fitting comfortably under Lens Studio's 10 MB asset limit.

With a model in hand, we needed to build the rest of the cloning experience around it in Lens Studio. We started with a list of user experience requirements:

- Manipulate a cropping box with just one finger.
- Move the cloned object in either 2D or 3D space.

To achieve this experience, we used a fairly complicated render pipeline in Lens Studio. A perspective camera with device tracking builds a 3D map of the world, identifies horizontal planes, and renders what it sees to a render target. An orthographic camera starts with the image from the world camera and overlays the UI onto it before rendering to a separate render target used for Live views; this way, the UI won't be seen when watching a recorded Snap. When the box is tapped, a screen crop texture crops the capture target and copies the frame, saving it to another texture that is used to create our cloned object.
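To make the saliency piece more concrete, here is a minimal sketch of a compact U-Net-style segmentation network in PyTorch. This is not Snap's actual architecture; the layer choices, channel widths, and input resolution are assumptions chosen only to show how a small encoder-decoder can stay well under a ~10 MB weight budget.

```python
# Illustrative only: a tiny encoder-decoder that predicts a per-pixel
# foreground (saliency) mask. Not Snap's production model.
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions with BatchNorm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class TinySaliencyNet(nn.Module):
    """Small U-Net-style network: encoder, decoder, skip connections."""

    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 16)
        self.enc2 = conv_block(16, 32)
        self.enc3 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec2 = conv_block(64 + 32, 32)
        self.dec1 = conv_block(32 + 16, 16)
        self.head = nn.Conv2d(16, 1, 1)  # single-channel saliency logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)                  # full resolution
        e2 = self.enc2(self.pool(e1))      # 1/2 resolution
        e3 = self.enc3(self.pool(e2))      # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))  # foreground probability mask


if __name__ == "__main__":
    net = TinySaliencyNet()
    n_params = sum(p.numel() for p in net.parameters())
    print(f"parameters: {n_params:,} (~{n_params * 4 / 1e6:.2f} MB as float32)")
    mask = net(torch.randn(1, 3, 256, 256))
    print("mask shape:", mask.shape)  # (1, 1, 256, 256)
```

A network this small lands in the hundreds of kilobytes as float32 weights; the real model trades capacity for quality while keeping the exported asset under the limit.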
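Getting a trained model into Lens Studio typically goes through a standard interchange format; ONNX is one format SnapML's ML Component can import. The snippet below continues from the sketch above; the file name, input size, and opset are assumptions for illustration.

```python
# Illustrative export of the sketch model to ONNX for import into Lens Studio.
import torch

net = TinySaliencyNet().eval()
dummy = torch.randn(1, 3, 256, 256)  # assumed input resolution
torch.onnx.export(
    net,
    dummy,
    "tiny_saliency.onnx",       # hypothetical file name
    input_names=["image"],
    output_names=["mask"],
    opset_version=11,
)
```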
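Finally, as a rough conceptual parallel to the tap-to-clone step (the real work happens inside Lens Studio's render pipeline, not in Python), the sketch below shows the idea: crop the frame to the box, use the saliency mask as alpha, and keep the result as the "clone" texture. Function names and shapes here are hypothetical.

```python
# Conceptual illustration only: crop a frame to the tapped box and cut out
# the foreground with the saliency mask to form an RGBA clone texture.
import numpy as np


def make_clone(frame: np.ndarray, box: tuple, mask: np.ndarray) -> np.ndarray:
    """frame: HxWx3 uint8; box: (x0, y0, x1, y1); mask: crop-sized floats in [0, 1]."""
    x0, y0, x1, y1 = box
    crop = frame[y0:y1, x0:x1]                     # copy of the framed region
    alpha = (mask * 255).astype(np.uint8)[..., None]
    return np.concatenate([crop, alpha], axis=-1)  # RGBA clone texture


if __name__ == "__main__":
    frame = np.zeros((720, 1280, 3), dtype=np.uint8)
    mask = np.ones((200, 200), dtype=np.float32)   # would come from the model
    clone = make_clone(frame, (100, 100, 300, 300), mask)
    print(clone.shape)  # (200, 200, 4)
```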