AutoFlip: Google's New AI Can Intelligently Crop Videos For Any Screen Size
How many times have you seen a video that was cropped badly? It is annoying, and even more irritating when there is nothing you can do about it. Now, you no longer have to put up with it.
To deal with this issue, Google's Artificial Intelligence team has created an open-source tool called AutoFlip. It reframes a video so that it fits the target device and dimension (landscape, square, portrait, and others).
What is AutoFlip AI?
AutoFlip is an open-source framework for intelligent video reframing. It is built on top of the MediaPipe framework, which supports the development of pipelines for processing time-series multimodal data. AutoFlip reframes video content so that it suits the target device or dimension (square, landscape, portrait, etc.).
Given an input video and a desired aspect ratio, AutoFlip produces an output video of the same duration in the expected aspect ratio.
AutoFlip works in three stages: shot (scene) detection, video content analysis, and reframing. Each stage is described below.
How does it work?
AutoFlip performs what Google calls 'video dimension conversion' in three primary stages:
- Shot (scene) detection
- Video content analysis
- Reframing
Shot (Scene) Detection
The initial stage is scene detection. Here, the machine learning model needs to detect the point before a cut or a jump from one scene to another. It does this by comparing each frame with the previous one to detect changes in colors and components.
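The frame-to-frame comparison described above can be sketched with a simple color-histogram difference. This is only an illustration: the histogram scheme, bin count, and cut threshold below are assumptions, not AutoFlip's actual detector.

```python
# Illustrative sketch of shot (scene) detection via color-histogram
# differencing. Frames are modeled as flat lists of pixel intensities
# (0-255); the threshold and bin count are assumed values.

def color_histogram(frame, bins=8):
    """Bucket each pixel intensity into a coarse normalized histogram."""
    hist = [0] * bins
    for pixel in frame:
        hist[pixel * bins // 256] += 1
    total = len(frame)
    return [count / total for count in hist]

def detect_cuts(frames, threshold=0.5):
    """Flag frame indices where the histogram changes sharply vs. the previous frame."""
    cuts = []
    prev = color_histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = color_histogram(frame)
        # L1 distance between consecutive normalized histograms
        diff = sum(abs(a - b) for a, b in zip(prev, cur))
        if diff > threshold:
            cuts.append(i)
        prev = cur
    return cuts

# Two dark frames followed by two bright frames -> one cut at index 2
dark = [10] * 100
bright = [240] * 100
print(detect_cuts([dark, dark, bright, bright]))  # [2]
```

A production detector would compare per-channel color histograms of real decoded frames, but the principle is the same: a large jump in the color distribution marks a likely cut.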
Video content analysis
In this phase, AutoFlip uses deep-learning-based object detection models to recognize interesting, salient content in each frame. This content includes people and animals, but other elements may be recognized depending on the application, including text overlays and logos for commercials, or motion and ball detection for sports.
Face and object detection models are integrated into AutoFlip via MediaPipe, a platform that supports the development of pipelines for processing multimodal data, and they run using Google's TensorFlow Lite ML framework. According to Google, this structure keeps AutoFlip extensible, so developers can add detection algorithms for new use cases and types of video content.
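As a rough sketch of this stage, the snippet below takes hypothetical detection results for one frame and computes the region a crop must keep. The signal names, priority values, and thresholds are assumptions for illustration, not AutoFlip's real configuration.

```python
# Illustrative sketch of the content-analysis stage: filter detections
# by an assumed per-signal priority and confidence, then compute the
# bounding region the crop must cover.

from dataclasses import dataclass

@dataclass
class Detection:
    kind: str      # e.g. "face", "pet", "logo" (assumed signal names)
    box: tuple     # (x_min, y_min, x_max, y_max) in pixels
    score: float   # detector confidence

# Assumed priorities: higher means "must stay in frame"
PRIORITY = {"face": 1.0, "pet": 0.8, "text_overlay": 0.5, "logo": 0.3}

def required_region(detections, min_priority=0.6, min_score=0.5):
    """Union of boxes whose signal priority and confidence are high enough."""
    keep = [d.box for d in detections
            if PRIORITY.get(d.kind, 0.0) >= min_priority and d.score >= min_score]
    if not keep:
        return None
    return (min(b[0] for b in keep), min(b[1] for b in keep),
            max(b[2] for b in keep), max(b[3] for b in keep))

dets = [Detection("face", (400, 100, 600, 350), 0.95),
        Detection("logo", (0, 0, 120, 60), 0.90),
        Detection("pet", (650, 300, 900, 500), 0.70)]
print(required_region(dets))  # (400, 100, 900, 500) — logo is below priority
```

The reframing stage can then place its crop window so this required region stays in view.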
Reframing
Reframing is the last phase. Here, the model decides whether to use a stationary mode, for scenes where the subject stays in a single place, or a tracking mode, for objects of interest that are continuously moving. Based on that choice and the target dimensions in which the video needs to appear, AutoFlip crops each frame, reducing jitter while keeping the interesting content in view.
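The stationary-versus-tracking decision and the jitter reduction can be sketched as follows. The motion threshold and smoothing factor are illustrative assumptions, not AutoFlip's actual parameters.

```python
# Illustrative sketch of the reframing stage: pick stationary vs.
# tracking mode from how much the salient content moves, then smooth
# the crop position to reduce jitter.

def choose_mode(centers, motion_threshold=30):
    """Stationary if the salient center barely moves across the shot."""
    xs = [c[0] for c in centers]
    return "stationary" if max(xs) - min(xs) <= motion_threshold else "tracking"

def crop_positions(centers, crop_width, frame_width, alpha=0.3):
    """Per-frame left edge of the crop window, exponentially smoothed."""
    if choose_mode(centers) == "stationary":
        # One fixed crop centered on the average salient position.
        avg_x = sum(c[0] for c in centers) / len(centers)
        left = min(max(avg_x - crop_width / 2, 0), frame_width - crop_width)
        return [left] * len(centers)
    positions, left = [], centers[0][0] - crop_width / 2
    for cx, _ in centers:
        target = min(max(cx - crop_width / 2, 0), frame_width - crop_width)
        left += alpha * (target - left)   # smooth toward the target position
        positions.append(left)
    return positions

# A subject drifting right across a 1920-wide frame -> tracking mode
centers = [(400, 540), (600, 540), (800, 540), (1000, 540)]
print(choose_mode(centers))  # tracking
```

The smoothing step is what keeps the crop from jittering: instead of snapping to the subject every frame, the window moves only a fraction of the way toward it.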
What Google Researchers Said in a Blog Post
Google says it is excited to release this tool directly to developers and filmmakers. Like any AI algorithm, AutoFlip can benefit from an improved ability to detect objects relevant to the intent of the video, such as speaker detection for interviews or animated face detection in cartoons.
Furthermore, a common issue arises when the input video has important overlays on the edges of the screen (for example, text or logos), as they will often be cropped out of view. By combining text/logo detection with image inpainting technology, Google hopes that future versions of AutoFlip can reposition foreground objects to better fit the new aspect ratios. Deep uncrop technology could also provide the ability to extend beyond the originally viewable area.