If you own a Nintendo Switch and an Android smartphone, did you know that the two can be paired for photography? It turns out the Joy-Con controller can be used as a remote shutter release for triggering photos without having to touch your phone.
A Reddit user named Byotan recently shared this neat trick in a video showing the Joy-Con shutter release in action.
To do this yourself, you’ll first need to pair your Joy-Con with your Android phone over Bluetooth. Press and hold the Joy-Con’s “Sync” button until the light indicators on the side turn on. Next, open your phone’s Bluetooth menu and you should see a new Joy-Con entry. Select this entry to pair your phone with the Joy-Con.
Once the controller is paired, how you use it as a shutter release will vary depending on what device you have, and you may need to fiddle around to see what works for you (and if the left Joy-Con doesn’t work, try the right one, and vice versa).
9to5Google notes that on Google Pixel phones, you take a photo by tapping the “A” button, though whether or not this works may depend on what app you’re using. On Samsung smartphones, you can use the “X” and “Y” buttons to zoom in and out (in increments of 0.1x per press), and the “B” button is used to snap a photo.
Outside of camera apps, the “A” button should also act as a home button and the “Y” button should allow you to select the upper-left app on your home screen.
From what others are reporting online, whether or not this system works for you may be hit and miss. But if you’re in a jam and need a quick way to trigger some photos remotely (like if you’re taking a group photo with your phone on a tripod), you may want to try giving the Joy-Con a shot.
Image credits: Header illustration: phone stock photo licensed from Depositphotos and Joy-Con photo by Nintendo.
The Content Authenticity Initiative (CAI), launched in 2019 as a collaboration between Adobe, Twitter and the New York Times, looks like it might finally be making some visible progress. The initial goals to help provide authenticity in images shared online, particularly when it comes to social media, appear to be coming to some kind of […]
From using Cinema 4D to tell cinematic stories to a project around a mythological femme fatale, the May edition of the 3D & Motion Design Show covers a variety of work by 3D and motion graphics artists.
Maxon’s free virtual event for 3D visual artists is packed with industry luminaries sharing insights, workflows, and techniques for the Maxon suite of creative tools. The speaker lineup for the May 3D & Motion Design Show is one good reason to attend: on May 19, 2021, Maxon will host a series of artist-focused presentations from professionals across different segments of the industry.
On the heels of the three-day April 3D & Motion Design Show, which launched Cinema 4D S24, attendees of the free-to-attend virtual series will get a closer look at how some of the most dynamic 3D artists are leveraging the Maxon suite of creative tools.
Featured speakers for the May session include:
Brandon Parvini – a freelance motion designer and director based in Los Angeles. With over a decade of experience, Parvini moves between various lead roles, including creative direction, art direction, design, senior 3D artist, technical direction, look development, and consultation, excelling in the development of nontraditional pipelines and workflows.
Chris Bjerre – a San Francisco-based multidisciplinary artist with over a decade of experience in the motion graphics industry. His diverse and vast body of work includes feature films, commercials, title sequences, music videos, games, VR, and experiential design.
Frederic Colin – a director and designer based in Paris. His recent CG passion project, “Medusa: The Fallen Goddess” used Maxon tools to recreate the story of this mythological femme fatale.
Martin Vanners – after doing photography for over 15 years, Martin Vanners now uses Cinema 4D to tell cinematic stories. Martin will be demonstrating how he uses his photography skills to art direct his 3D renderings.
Presentations for all Maxon 3D and Motion Design Shows are streamed live and available on demand shortly after on 3DMotionShow.com, as well as the Maxon YouTube channel. Viewers will be able to interact and send in questions via chat for the live Q&A sessions with artists.
Vazen has unveiled the 50mm T2.1 1.8X anamorphic lens, adding an ultra wide-angle option to its full frame EF/PL 1.8x anamorphic lens line-up.
Known for its Micro Four Thirds lens set, which comprises 28mm, 40mm, and 65mm focal lengths, the Chinese company Vazen is expanding its full frame lineup with a new lens, the 50mm T2.1 1.8X anamorphic, which joins the Vazen 85mm T2.8 1.8X Anamorphic, the first of a promised three-lens set. The 85mm was announced in August 2020, with the two other lenses promised, at the time, for late 2020 or early 2021. That time has apparently come: Vazen has announced the new 50mm T2.1 to join the available 85mm, and according to the company, a 135mm lens is expected to be ready to ship in one to two months to complete the phase-one three-lens set (50, 85, and 135mm).
We’ve covered the Micro Four Thirds lenses here at ProVideo Coalition, and when announcing the Vazen 65mm T2 1.8x Anamorphic lens we noted, following the information available from Vazen, that the lens was “the world’s first 1.8x anamorphic prime designed for Micro Four Thirds cameras with 4:3 sensors. Characterized by its front anamorphic design, the VZ Anamorphic prime delivers a buttery smooth oval bokeh, signature blue horizontal flare and the widescreen cinematic look.”
One more note that may be of interest if you’re a user of Canon EOS R mirrorless cameras: the kit introduced for Micro Four Thirds is also available, according to Vazen, with Canon RF mount, if you want to explore this anamorphic solution with a full frame sensor.
For large format cinema cameras
The new Vazen 50mm T2.1 is designed to fully cover large format cinema cameras like the RED Monstro, Alexa LF, Kinefinity Mavo LF, and Z-CAM E2-F8. The lens features a super compact and lightweight design: weighing merely 3.42 pounds (1.55kg) and measuring 5.24 inches (13.3cm) long, it is one of the world’s smallest 1.8x anamorphic lenses for full frame cameras. Its compactness allows it to be balanced on gimbals and rigs very easily, says Vazen.
The 50mm has the same focus and aperture ring positions as the 85mm for easy lens swapping on set, and a consistent 86mm front filter thread is handy for installing ND filters or diopters. The front diameter is a standard 95mm for matte box mounting. The entire lens is built of aluminum, and the independent aperture and focus rings are fitted with 0.8 MOD cine gears. The lens is available with interchangeable PL and EF mounts; both mounts and shims are included with the lens in a Vazen hard case.
All Vazen 1.8x anamorphic lenses feature a front anamorphic design, which delivers a buttery smooth oval bokeh, a signature blue (but not oversaturated) horizontal flare, and the widescreen cinematic look. The lens also features an ultra-wide 65° horizontal field of view (similar to a 28mm spherical lens), and the closest focusing distance is 3.6” from the sensor.
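The 28mm comparison checks out against the standard horizontal field-of-view formula. A quick Python sanity check, assuming only a full frame sensor 36mm wide (the standard figure; nothing Vazen-specific is used):

```python
import math

# Horizontal field of view of a rectilinear lens on a sensor of given width:
# FOV = 2 * atan(sensor_width / (2 * focal_length))
def horizontal_fov_deg(focal_mm, sensor_width_mm=36.0):
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

fov_28mm = horizontal_fov_deg(28)  # roughly 65 degrees, matching Vazen's claim
```

A 28mm spherical lens on full frame works out to about 65.5°, which lines up with the stated 65° horizontal field of view.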
A $2,000 discount for the pair
The lens, Vazen claims, delivers “outstanding sharpness, even wide open, which is unparalleled by other PL/EF anamorphic lenses with a similar squeeze ratio.” Vazen chose a 1.8x squeeze design to balance anamorphic character against image resolution. The 1.8x squeeze produces a cinematic widescreen 2.39:1 aspect ratio when paired with 4:3 sensors. When paired with 16:9 sensors, much less data (than with a 2X anamorphic lens) needs to be cropped away to create the desired 2.39:1 ratio.
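The squeeze arithmetic behind those claims is easy to verify: the desqueezed aspect ratio is simply the sensor aspect ratio multiplied by the squeeze factor, and anything wider than 2.39:1 has to be cropped away.

```python
# Desqueezed aspect ratio = sensor aspect ratio * anamorphic squeeze factor.
def desqueezed_aspect(sensor_w, sensor_h, squeeze):
    return (sensor_w / sensor_h) * squeeze

four_thirds = desqueezed_aspect(4, 3, 1.8)       # 2.4:1, very close to 2.39:1
sixteen_nine_18 = desqueezed_aspect(16, 9, 1.8)  # 3.2:1
sixteen_nine_20 = desqueezed_aspect(16, 9, 2.0)  # ~3.56:1

# Fraction of the desqueezed width kept when cropping down to 2.39:1:
keep_18 = 2.39 / sixteen_nine_18  # ~75% kept with a 1.8x lens
keep_20 = 2.39 / sixteen_nine_20  # ~67% kept with a 2x lens
```

On a 16:9 sensor, the 1.8x lens keeps roughly 75% of the desqueezed frame versus about 67% for a 2x lens, which is the data saving Vazen is pointing at.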
The lens is currently available to order from authorized resellers and from Vazen’s website, and it will ship in late August with free priority shipping included. The retail price in the US is $8,000 per lens, and a $2,000 discount will be offered for a two-lens (50mm + 85mm) purchase.
Manchester City recently topped rival Manchester United to take the Premier League title, and to celebrate, the club’s internal content team has published this 3-minute and 40-second single-shot first-person-view (FPV) drone video that tours the entire Etihad Stadium.
Single-take drone videos are ballooning in popularity thanks to the smash success of first-person drone pilot Jay Christensen, who stunned the cinematography community with his viral single-take bowling alley video in early March. Since then, he and his team have produced two more videos, one featuring the iconic Los Angeles diner Mel’s Drive-In and the other, sponsored by the Mall of America, paying homage to The Mighty Ducks.
“This footage is 100% genuine, no camera tricks, no hidden edits, no CGI – a single take drone shot!” the club writes in a description of the video.
Manchester City’s single-take FPV drone video is pretty close to the level of Christensen’s work and is incredibly impressive in its own right, given the amount of space the drone covers and how deftly it moves through both wide-open and tight spaces. The drone covers a huge amount of ground both inside and out, which likely pushed the signal strength of the controller and drone to their limits. Similar to Christensen’s videos, the Manchester City content team dubbed audio over the original footage to give viewers something to listen to other than the loud whir of the drone’s propellers, though no people are visible anywhere in the video, which again separates it from the ones that likely inspired it.
Manchester City’s Etihad Stadium was first opened to the public in July of 2002 as the home of the Commonwealth Games before it was converted into Manchester City’s home stadium in 2003. The stadium cost £112 million to build and seats over 55,000 fans.
Manchester City took the Premier League title in 2021 for the fifth time in nine years after rivals Manchester United lost to Leicester City earlier this week. Manchester City’s rise from forgettable outsider to one of the world’s most elite teams over the course of the last 20-plus years is one of the more impressive turnaround examples in the sport.
Digital Trends notes that the soccer club has not yet disclosed the identity of the drone pilot who captured the impressive celebratory footage, though it apparently has promised to post a behind-the-scenes video revealing as much and more in the coming days.
The Fujifilm X-E4, announced in January, is the last Fujifilm camera to use the company’s X-Trans IV sensor, according to a new report. The sensor was first introduced on the X-T3 and later found its way into several other cameras over the course of the last three years.
According to FujiRumors, Fujifilm will no longer use the X-Trans IV sensor that is the heart of the Fujifilm X-T3, X-T30, X-Pro3, X100V, X-T4, X-S10, and most recently the X-E4. The X-Trans IV is Fujifilm’s fourth-generation backside-illuminated 26.1-megapixel CMOS sensor that the company says integrates the unique X-Trans color filter array to reduce moire and false colors without the need for an optical low pass filter. This combines with its backside-illuminated structure to reduce noise levels and increase image quality.
When the X-Trans IV was first announced as part of the X-T3 release, Fujifilm touted its ability to expand its standard ISO range to ISO 160, which was previously only available via extended ISO. The native ISO range of 160 to 12,800 could then be further expanded to ISO 80 at the new low and ISO 51,200 as the new high.
While the sensor has served as the core of several beloved Fujifilm cameras since its 2018 introduction, this new report alleges that the company is set to leave it behind, meaning the X-E4 will be the last Fujifilm camera to use it.
In the interview above, published last month, Fujifilm product manager Takashi Ueno notes that Fujifilm’s focus with the XF18mm f/1.4 release was “resolution,” which FujiRumors reads as a hint that higher-megapixel cameras will come in the future. Building on the report that Fujifilm is set to leave the 26.1-megapixel sensor behind, the hope is that an equally capable but higher-resolution sensor will come soon.
A new report alleges that Apple is not only planning to shrink the size of its FaceID sensor chip but also scale down the size of the large front-facing “notch” at the top of the display thanks mostly to a redesigned front-facing camera.
As spotted by MacRumors, DigiTimes reports that Apple plans to scale down the die size of the Vertical-Cavity Surface-Emitting Laser (VCSEL) chips used in the FaceID scanner. The move is reportedly being made to help the tech giant reduce production costs, as more chips can be produced on a single wafer, which in turn reduces the total number of wafers that have to be made.
DigiTimes also notes that redesigning the VCSEL chip may allow Apple to slide in additional features, but stopped short of speculating on what those features might be.
The new chip will most likely be used in new iPhone and iPad devices that start releasing in late 2021, which likely means the forthcoming iPhone 13 and iPhone 13 Pro smartphones. It is also likely that the next generation of iPad and iPad Pro will feature the new chip.
In a previous report, DigiTimes stated that Apple will reduce the size of the iPhone’s now-iconic notch thanks to a redesigned front-facing camera module, and while it is possible that the size of the notch is shrinking also due in part to the new VCSEL chip, it is unclear if that is actually the case.
Well-known industry analyst Ming-Chi Kuo reports that the notch will shrink due mostly to a new front-facing camera module. Per MacRumors:
For the coming iPhone 13 cycle in 2H21, we foresee a more tightly integrated version of the existing structured light system, which will enable the long awaited reduction in the notch. On the rear, we do not anticipate Apple to broaden the adoption of the Lidar 3D sensor beyond the Pro models.
For the 2H22 product cycle, we anticipate an architectural shift from structured light to time-of-flight, allowing for an even smaller footprint. Based on our industry conversations, we do not think structured light beneath the screen is likely to be ready for mass deployment in 2H22. We also view the adoption of fingerprint-under-glass, that likely is added in the 2H21 iPhones, as a structural headwind for additional 3D sensing content at Apple and could be the security feature of the future.
With this news, it appears that while no major image capture changes to the smartphone line are expected until 2022, the iPhone 13 will still see plenty to differentiate it from the current iPhone 12. Earlier this week, a report alleged that the iPhone 13 and 13 Pro would have notably larger rear camera modules, though, while larger, they would not protrude as far from the rear of the device. This report bolsters a previous rumor stating that changes to the rear camera arrays are likely on Apple’s next-generation device.
In a story from January, DigiTimes also reported that the entire iPhone 13 lineup would feature sensor-shift stabilization technology, a feature currently only available in the iPhone 12 Pro Max.
Researchers have designed a new, dual camera platform with the aim of making up for the poor resolution output that comes with most 360-degree cameras.
360-degree field of view cameras of varying types and price ranges have been available in consumer and specialist security markets for some time now. They are often used for virtual business tours, real estate, security, sports and action, travel, and other purposes.
The study explains that, with the use of a fisheye lens that collects light across a wider range, a 360-degree camera is a cost-effective surveillance solution that leaves few to no blind spots, compared to using several general-purpose cameras to collectively cover the same field of view.
One solution to the low-resolution problem would be to increase video resolution to at least 4K, as proposed in a 2015 study by Budagavi et al., but this brings an additional set of complexities, such as the need for efficient compression technologies that can handle the resulting higher bitrates.
Premachandra and Tamaki also agree on the need to increase video resolution to at least 4K “in order to mitigate the problem of resolution degradation due to wide fields of view when using omnidirectional cameras for monitoring,” but the practicalities of detecting and tracking objects, combined with converting such video into 2D images, make for a complex, costly, and lengthy process.
For example, PetaPixel recently reported on the (sphere) Pro1 360-degree lens, which can capture video content with no stitching, but this type of technology comes at a cost that puts it out of reach for most.
That is why the two researchers designed a system that takes images from a conventional omnidirectional camera while a separate camera simultaneously captures high-resolution images of objects farther away; combined, the system enables better identification of moving objects while remaining affordable.
In the study, the duo created a prototype hybrid camera platform that consists of one omnidirectional camera and two pan-tilt (PT) cameras with a 180-degree field-of-view on either side. When an indistinct target region is detected using the 360-degree camera, the PT camera is then used to capture a high-resolution image of the target.
The two used Raspberry Pi cameras mounted on pan-tilt modules and connected to the system through a Raspberry Pi 3 Model B. All of the parts of the setup were then connected to a personal computer for overall control.
“The researchers first processed an omnidirectional image to extract a target region, following which its coordinate information was converted into angle information (pan and tilt angles) and subsequently transferred to the Raspberry Pi,” Science Daily explains.
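The coordinate-to-angle conversion described above can be sketched in a few lines. The study’s exact projection model isn’t given in the article, so the mapping below is illustrative and assumes a standard equirectangular 360-degree frame, where the x axis spans 360° of pan and the y axis spans 180° of tilt:

```python
# Hypothetical sketch: convert a target's pixel coordinates in an
# equirectangular 360-degree frame into pan/tilt angles for the PT camera.
def pixel_to_pan_tilt(x, y, width, height):
    pan = (x / width) * 360.0 - 180.0   # -180..+180 degrees, 0 = straight ahead
    tilt = 90.0 - (y / height) * 180.0  # +90 = straight up, -90 = straight down
    return pan, tilt

# A target centred in a 1920x960 equirectangular frame sits straight ahead.
pan, tilt = pixel_to_pan_tilt(960, 480, 1920, 960)
```

The resulting pan and tilt angles are what would be sent over to the Raspberry Pi to aim the appropriate PT camera.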
Following the experiments, the study concluded that this type of system did indeed deliver higher-resolution images compared to those that were generated from a single 360-camera. However, one issue that arose was a possible time delay in the process. For example, when a moving object is determined as a target to be captured with a high-resolution image, there is a potential shift because it takes a moment for the appropriate PT camera to capture it.
As a potential countermeasure, the duo proposes the Kalman filtering technique, an algorithm that estimates unknown variables (in this case, the future coordinates of the moving object) to counteract the shift.
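The article doesn’t detail the paper’s exact formulation, but the idea can be sketched with a minimal, hypothetical constant-velocity Kalman filter tracking one image axis: instead of aiming the PT camera at the target’s last observed position, the system aims at the filter’s prediction of where the target will be next.

```python
# Minimal 1-D constant-velocity Kalman filter (illustrative sketch only).
# State is [position, velocity]; only position is measured.

def mat_mul(a, b):
    """Multiply a 2x2 matrix by a 2xN matrix."""
    cols = len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(cols)]
            for i in range(2)]

def mat_add(a, b):
    return [[a[i][j] + b[i][j] for j in range(len(a[0]))] for i in range(len(a))]

def transpose(a):
    return [[a[j][i] for j in range(2)] for i in range(2)]

class Kalman1D:
    def __init__(self, dt=1.0, q=1e-3, r=1.0):
        self.F = [[1.0, dt], [0.0, 1.0]]   # constant-velocity motion model
        self.Q = [[q, 0.0], [0.0, q]]      # process noise covariance
        self.r = r                         # measurement noise (position only)
        self.x = [[0.0], [0.0]]            # state estimate: [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]  # state covariance

    def predict(self):
        self.x = mat_mul(self.F, self.x)
        self.P = mat_add(mat_mul(mat_mul(self.F, self.P), transpose(self.F)),
                         self.Q)
        return self.x[0][0]                # predicted position

    def update(self, z):
        # Measurement matrix H = [1, 0]: we observe position only.
        y = z - self.x[0][0]                          # innovation
        s = self.P[0][0] + self.r                     # innovation covariance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s   # Kalman gain
        self.x[0][0] += k0 * y
        self.x[1][0] += k1 * y
        p00, p01 = self.P[0][0], self.P[0][1]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],   # P = (I - K H) P
                  [self.P[1][0] - k1 * p00, self.P[1][1] - k1 * p01]]

# Track an object moving +10 pixels per frame, then predict its next position.
kf = Kalman1D()
for x_pix in [0, 10, 20, 30, 40]:
    kf.predict()
    kf.update(x_pix)
predicted = kf.predict()  # aim the PT camera here, not at the last seen spot
```

After a handful of frames the velocity estimate converges toward the true 10 pixels per frame, so the prediction lands ahead of the last measurement, which is exactly the shift-compensation the researchers describe.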
Science Daily reports that Premachandra is confident that their proposed camera system “will create positive impacts on future applications employing omnidirectional imaging such as robotics, security systems, and monitoring systems.”
The full published study, including details of all experiments, can be read on the IEEE Xplore website.
Image credits: Header image by Chinthaka Premachandra and Masaya Tamaki, used under Creative Commons.
Here comes the very first image of the new Sony a7R4a (Source: Weibo). The only difference I can find is the missing Sony logo under the LCD screen. And yes, the screen looks better as it uses a new higher…