A newly published patent suggests that Canon might be trying to bring a catadioptric optical system back to its camera lens lineup. If the “mirror lens” designs do materialize, we would likely see super telephoto lenses that are much smaller and cheaper than equivalent Canon lenses currently on the market.
“An optical system includes a first optical element having a first reflective surface concave toward an object side, a second optical element having a second reflective surface convex toward an image side, and a lens unit disposed between the first optical element and the second optical element,” Canon writes in the patent’s abstract. “Light from an object travels to an image plane through the first reflective surface and the second reflective surface in this order. A movable unit configured to move during image stabilizing includes at least one of the second optical element and the lens unit.”
The patent goes on to describe and show the designs of at least five mirror lenses (also known as “cat” or reflex lenses): a 400mm f/3.6, 800mm f/5, 1200mm f/8, 1200mm f/10.5, and 2000mm f/15. What’s unusual is that they all have image stabilization built in.
“Has Canon decided it’s time for some catadioptric long lenses for the RF system?” Northlight Images writes. “Expect a chorus of disapproval from those who’ve never owned a cat lens.”
The mirrors that bounce light forward and backward in a catadioptric lens allow it to be much shorter than more traditional lens designs, in which light only travels straight through the length of the lens. The second, convex mirror multiplies the focal length up to 4 or 5 times, allowing for super telephoto lenses that are relatively compact.
“In a nutshell, a mirror lens is a compact telescope,” B&H writes. “Mirror lenses contain a series of angled circular mirrors that gather the light and, rather than transmit a focused image directly to the camera sensor (or film plane), reflect the incoming light back and forth, each time reflecting a narrower portion of the image until a highly magnified portion of the original image reaches the camera’s imaging sensor.”
Drawbacks of mirror lenses have historically included fixed apertures (due to the center of the lens being obstructed), low contrast, and donut bokeh (caused by the way light enters the lens through a ring along the outside).
It’s possible that Canon has invented clever ways to overcome one or more of these historical weaknesses.
If these lenses are being designed for the Canon RF ecosystem, the mirrorless cameras would have one advantage: their electronic viewfinders would not be darkened by the small fixed apertures the way the optical viewfinders on DSLRs would be.
Canon Rumors writes that, based on the Canon roadmap it has obtained, these mirror lenses may actually make it into the hands of photographers.
“Interestingly, a Canon RF 1200mm f/8 appears on my Canon RF lens roadmap,” Canon Rumors states. “This patent may actually be part of future consumer products. However, I do have it reported as an L lens, so we’ll have to wait and see on that one.”
The release of mirror lenses could allow the photography masses to try out ultra-long focal lengths — albeit with significantly more limitations — without breaking the bank.
As with any patent, there’s no guarantee that the designs described will ever show up in the real world, but this is definitely an interesting development from Canon that some photographers will be watching and hoping for.
Researchers have designed a new dual-camera platform that aims to make up for the poor resolution output of most 360-degree cameras.
360-degree field of view cameras of varying types and price ranges have been available in consumer and specialist security markets for some time now. They are often used for virtual business tours, real estate, security, sports and action, travel, and other purposes.
The study explains that, with the use of a fisheye lens that collects light across a much wider range, a 360-degree camera is a cost-effective surveillance solution that leaves few to no blind spots, compared to using several general-purpose cameras to collectively cover the same field of view.
One solution to the low-resolution problem would be to increase video resolution to at least 4K, as proposed in a 2015 study by Budagavi et al., but that brings an additional set of complexities, such as the need for efficient compression technologies that can handle the resulting higher bitrates.
Premachandra and Tamaki also agree on the need to increase video resolutions to at least 4K “in order to mitigate the problem of resolution degradation due to wide fields of view when using omnidirectional cameras for monitoring,” but the practicalities of detecting and tracking objects, combined with converting such video into 2D images, make for a complex, costly, and lengthy process.
For example, PetaPixel recently reported on the (sphere) Pro1 360-degree lens, which can capture video content with no stitching, but this type of technology comes at a cost that puts it out of reach for most.
This is why the two researchers came up with the idea of a system that takes images from a conventional omnidirectional camera while a separate camera simultaneously captures high-resolution images of objects farther away; combined, the system would enable better identification of moving objects while remaining affordable.
In the study, the duo created a prototype hybrid camera platform that consists of one omnidirectional camera and two pan-tilt (PT) cameras with a 180-degree field-of-view on either side. When an indistinct target region is detected using the 360-degree camera, the PT camera is then used to capture a high-resolution image of the target.
The two used Raspberry Pi cameras, each mounted on a pan-tilt module and connected to the system through a Raspberry Pi 3 Model B. All of the parts of the setup were then connected to a personal computer for overall control.
“The researchers first processed an omnidirectional image to extract a target region, following which its coordinate information was converted into angle information (pan and tilt angles) and subsequently transferred to the Raspberry Pi,” Science Daily explains.
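The paper’s exact conversion isn’t reproduced here, but under the common assumption of an equidistant fisheye projection, the mapping from a target’s pixel coordinates to pan and tilt angles can be sketched roughly as follows (the function name, projection model, and calibration values are illustrative, not the study’s code):

```python
import math

def target_to_pan_tilt(x, y, cx, cy, max_radius, max_elevation_deg=90.0):
    """Convert a target's pixel position in a circular fisheye image
    into pan/tilt angles for aiming a pan-tilt camera.

    Assumes an equidistant fisheye projection centered at (cx, cy);
    a real system would use the lens's measured calibration instead.
    """
    dx, dy = x - cx, y - cy
    pan_deg = math.degrees(math.atan2(dy, dx))       # azimuth around the lens axis
    r = math.hypot(dx, dy)                           # radial distance from center
    tilt_deg = (r / max_radius) * max_elevation_deg  # radius maps linearly to elevation
    return pan_deg, tilt_deg
```

A target straight to the “right” of the image center at the edge of the image circle would then aim the PT camera at pan 0 degrees, tilt 90 degrees.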
Following the experiments, the study concluded that this type of system did indeed deliver higher-resolution images compared to those that were generated from a single 360-camera. However, one issue that arose was a possible time delay in the process. For example, when a moving object is determined as a target to be captured with a high-resolution image, there is a potential shift because it takes a moment for the appropriate PT camera to capture it.
As a potential countermeasure, the duo proposes the Kalman filtering technique, an algorithm that produces estimates of unknown variables (in this case, the future coordinates of the moving object) and would thereby counteract the shift.
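The study’s actual state model and noise parameters aren’t published here, but a minimal constant-velocity Kalman filter for a single image coordinate, the kind of predictor the duo describes, might look like this sketch (all parameter values are illustrative placeholders):

```python
class Kalman1D:
    """Minimal constant-velocity Kalman filter for one image coordinate.

    Illustrative only: q (process noise) and r (measurement noise) are
    placeholders, not values from the study.
    """
    def __init__(self, pos, vel=0.0, q=1e-3, r=1.0):
        self.x = [pos, vel]                       # state: position, velocity
        self.P = [[1.0, 0.0], [0.0, 1.0]]         # state covariance
        self.q, self.r = q, r

    def predict(self, dt=1.0):
        # x' = F x with F = [[1, dt], [0, 1]]; P' = F P F^T + qI
        self.x = [self.x[0] + dt * self.x[1], self.x[1]]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        self.P = [
            [p00 + dt * (p10 + p01) + dt * dt * p11 + self.q, p01 + dt * p11],
            [p10 + dt * p11, p11 + self.q],
        ]
        return self.x[0]          # predicted coordinate for aiming the PT camera

    def update(self, z):
        # Measurement of position only: H = [1, 0]
        s = self.P[0][0] + self.r                 # innovation covariance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s
        y = z - self.x[0]                         # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p11 * 0 - k1 * p01 + p11 - p11 + p11 - k1 * p01]]
        # (simplified below for clarity)
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
```

Running one such filter per coordinate, the `predict()` output can be handed to the pan-tilt conversion one frame early, so the PT camera is already aimed where the target is about to be.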
Science Daily reports that Premachandra is confident that their proposed camera system “will create positive impacts on future applications employing omnidirectional imaging such as robotics, security systems, and monitoring systems.”
The full published study, including details of all experiments, can be read on the IEEE Xplore website.
Image credits: Header image by Chinthaka Premachandra and Masaya Tamaki, used under Creative Commons.
In early March, a report alleged that Facebook was working on a version of Instagram designed specifically for children. In the two months since, the company has faced repeated pressure to abandon the program, the latest of which comes from a swath of State Attorneys General (AGs).
As noted by Engadget, the AGs allege that social media in general is harmful to the emotional and mental well-being of children and that building a platform that specifically targets them would worsen the cyberbullying problems that already plague youths.
“Without a doubt, this is a dangerous idea that risks the safety of our children and puts them directly in harm’s way,” said Attorney General Letitia James of New York. “Not only is social media an influential tool that can be detrimental to children who are not of appropriate age, but this plan could place children directly in the paths of predators. There are too many concerns to let Facebook move forward with this ill-conceived idea, which is why we are calling on the company to abandon its launch of Instagram Kids. We must continue to ensure the health and wellness of our next generation and beyond.”
The letter is signed by the AGs of Massachusetts, Nebraska, Vermont, Tennessee, Alaska, California, Connecticut, Delaware, District of Columbia, Guam, Hawaii, Idaho, Illinois, Iowa, Kansas, Kentucky, Louisiana, Maine, Maryland, Michigan, Minnesota, Mississippi, Missouri, Montana, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Carolina, Northern Mariana Islands, Ohio, Oklahoma, Oregon, Puerto Rico, Rhode Island, South Carolina, South Dakota, Texas, Utah, Virginia, Washington, Wisconsin, and Wyoming.
“The attorneys general have an interest in protecting our youngest citizens, and Facebook’s plans to create a platform where kids under the age of 13 are encouraged to share content online is contrary to that interest. Use of social media can be detrimental to the health and well-being of children, who are not equipped to navigate the challenges of having a social media account,” the letter reads. “The attorneys general urge Facebook to abandon these plans.”
The AGs express various other concerns over Facebook’s Instagram for Kids proposal, including that the platform could be used by predators to target children and that children lack the capacity to navigate the complexities of what they encounter online, such as advertising, inappropriate content, and relationships with strangers.
“It appears that Facebook is not responding to a need, but instead creating one, as this platform appeals primarily to children who otherwise do not or would not have an Instagram account. In short, an Instagram platform for young children is harmful for myriad reasons. The attorneys general urge Facebook to abandon its plans to launch this new platform,” the letter concludes.
With this letter, a total of 83 public figures and organizations, including four U.S. Senators, have come out against Facebook’s plan to make a version of Instagram for kids.
“We’re early in thinking through how this service would work,” Zuckerberg said in a congressional hearing on social media disinformation in March and noted by Mashable. “There is clearly a large number of people under the age of 13 who would want to use a service like Instagram… Helping people stay connected with friends and learn about different content online is broadly positive.”
When asked by Rep. Kathy Castor (D-FL) about the concerns parents and groups have with how Facebook and Instagram handle social media addiction, bullying, and effects on mental health, Zuckerberg simply responded, “Congresswoman, I’m aware of the issues.”
And then finally watch Facebook, Instagram and WhatsApp CEO Mark Zuckerberg’s answer to @USRepKCastor’s question about revenues from <13. How could this not make parents irate? It’s not a dodge Congress game, it’s their kids.
“The problem is that you know it,” Castor said in response. “You know that the brain and social development of our kids is still evolving at a young age. There are reasons in the law that we set that [13-year-old age limit] because these platforms have ignored it. They’ve profited off of it. We’re going to strengthen the law.”
A designer duo has created a first-of-its-kind lens that can record 360-degree spherical video content that doesn’t need to be stitched in post-processing and can be used with any conventional camera.
Rob Englert and Meyer Giordano are two experienced industrial and interaction designers with a particular interest in augmented (AR) and virtual reality (VR) and have worked with brands such as Bose, Chobani, KODAK, RIDGID, and others. Together, they founded the (sphere) optics brand under which they developed the unique (sphere) Pro1 lens, which can capture everything in full 360-degree view, and creates shooting opportunities that otherwise would not be possible.
What makes it unique is that this lens eliminates the stitching process normally found in spherical or VR content production that combines the perspective of multiple cameras and lenses together. Additionally, creators can also use their existing camera and workflow with the Pro1 as opposed to needing wholly separate equipment.
The idea for the lens was born out of personal experience after Englert lost his younger brother. This life-changing moment happened long before everyone had a smartphone in their pocket, which left Englert with almost no memories of his brother captured in video format.
This fueled his drive to use all of his skills as an industrial designer “to explore different ways to capture moments in time” that can later be revisited over and over again. He says that he would “give almost anything for just a couple more minutes” with his brother, and although that is no longer possible, he hopes it can be made possible for others in the future.
The novel lens was originally developed as part of the duo’s ongoing work on AR and VR technologies, with non-fungible tokens (NFTs) included in the project’s funding process. Both designers were producing 360-degree videos using multi-camera arrays in a housing that they had designed and 3D-printed. The process was arduous because the content was recorded across several cameras and the final output needed stitching. They separated each video into frames, combined each set of frames into a panorama, and then recompiled it as a video. Even then, the final product would still have visible knit lines where the views were merged.
Both designers spent a lot of time reviewing how this process could be improved, which is where the idea of a single lens and single-camera setup originated. Instead of using optical design software, they created potential shapes as 3D models and then simulated the result using regular 3D rendering software, which eventually, after some trial and error, gave them a potential physical prototype. They still had to test it in a real-life scenario, which meant the prototype had to be made by hand. Once finessed, they began working with optics professionals to further refine the design and get one step closer to a finished lens.
To cover 100 percent of the environment with the lens, they started with a regular circular fisheye lens with a field of view of approximately 180 degrees and mounted a mirror in front of it that reflects the image downward like a periscope. They then revolved the cross-section of this setup around the axis of the mirror, ending up with several donut-shaped lens elements surrounding a cone-shaped mirror. This preserves the 180-degree vertical field of view of the original fisheye design while adding a 360-degree horizontal view from being revolved around the center.
The current design of the lens has 12 elements: one reflective — the mirror — and 11 refractive, including two torus-shaped elements that surround the mirror. Most of the elements of this lens are just as unique as the design itself and cannot be found in any other existing lenses.
The parts are made from specialist engineering plastics — using a family of materials called cyclic olefin copolymers — in a process called single-point diamond turning, which is the only way to generate the complex aspheric forms that make the lens design possible. These plastics have similar optical properties to glass, but are much lighter and easier to shape, and are used in top-end scientific applications, like space telescopes.
The duo chose the Nikon F-mount because it is easily adaptable to most other common standards, making it easier for content creators to use the equipment they already have instead of investing in a completely new system, all while maintaining full control over the image. The single-lens construction also allows capturing content that a standard VR setup couldn’t due to space limitations.
The lens has a fixed f/8 aperture and a 1mm focal length. It is 150 millimeters (5.9 inches) wide, 198 millimeters (7.8 inches) long, and weighs 1.8 kilograms (4 pounds).
Currently, these lenses are produced extremely slowly. They are fabricated one unit at a time and assembled by hand, which makes the individual cost quite high. However, with a large order, the duo says that costs could be significantly reduced by molding most of the elements. The two believe that this lens can benefit a wide range of industries from film, documentary, gaming, and entertainment, all the way to engineering, military, surveying, and more.
Photo and Video Footage
For stills, the lens produces a circular image with a black void in the center. The area near the center of the image corresponds to the forward end of the lens, and the outer periphery of the image circle is the direction facing the camera body. In their words, “It looks a lot like the black hole from ‘Interstellar.’”
The circular image can be used as-is if desired, but most 360-degree video players use the standard equirectangular format. For this reason, (sphere) provides an ST map, which is an image that acts as a positional lookup table to tell the computer how to rearrange the pixels. This is also commonly used to remove lens distortion and internally is similar to what Adobe Camera Raw’s distortion correction does. The duo says that Adobe could add specific support for this if they desired.
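(sphere) has not published its remapping code, but the principle of an ST map is simple: for each output pixel, the map stores the normalized coordinates of the source pixel to copy from. A minimal nearest-neighbor version is sketched below in plain Python for clarity (real tools interpolate bilinearly and run far faster; the function name and data layout are illustrative):

```python
def apply_st_map(src, st_map):
    """Remap an image using an ST map.

    src:    2D list of pixel values (the circular source image).
    st_map: 2D list of (s, t) pairs in 0..1 giving, for each output
            pixel, the normalized source coordinates to sample.
    Nearest-neighbor sketch; production tools interpolate bilinearly.
    """
    h_src, w_src = len(src), len(src[0])
    out = []
    for row in st_map:
        out_row = []
        for s, t in row:
            # Convert normalized coordinates to clamped integer indices.
            x = min(w_src - 1, max(0, round(s * (w_src - 1))))
            y = min(h_src - 1, max(0, round(t * (h_src - 1))))
            out_row.append(src[y][x])
        out.append(out_row)
    return out
```

An identity map reproduces the source unchanged; (sphere)’s map instead rearranges the circular projection into the equirectangular layout that standard 360-degree players expect.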
The designers have also created 3D meshes that can be inserted into game engines, such as Unity or Unreal, to unwrap the image in real-time, “which is very useful either to act as a monitor for recording or to facilitate live streaming to VR headsets.” Overall, the process of converting (sphere) videos to VR experiences is simple and can be done on iOS and Android devices, the designers claim.
To fund the project and the ongoing development, the team offers high resolution, fully immersive VR moments as limited edition NFTs on Mintable. These moments were filmed using the lens and buyers can use their VR headset to virtually enter the place and time captured. The purchase includes the video in standard equirectangular format at 8K resolution for viewing via VR headset, along with the native circular projection of the (sphere) Pro 1 lens at the native 4,048 x 4,048 resolution.
Photographer and YouTuber Kevin Raposo uses historical trends and academic research to break down why he believes the Canon EOS R3 could transform photography in the coming years as video and photo capabilities converge.
Raposo argues that he barely needs to speculate in this video, because just about everything his argument rests on is either in the confirmed specifications provided by Canon or visible in previous market trends.
He says that this will mark the first time that Canon is releasing a professional digital camera that will shoot still photography at a burst rate that is faster than the framerate of theatrical video.
“Now just to be clear, I understand that Canon is not the first camera manufacturer to pull this off,” Raposo says. “The Sony A1 and the Fujifilm X-T30 are just a couple of the cameras that can shoot at a 30 frames per second burst speed. The difference here is that Canon leads the industry in market share. So when they decide to incorporate a new feature, adoption rates tend to be a lot higher.”
In essence, Raposo contends that Canon is basically the Apple of the camera world. While it isn’t usually the first to do something, when it actually does act on new technologies, that technology tends to skyrocket in widespread availability and user acceptance.
Theatrical video is shot at 24 frames per second, and until only very recently, no stills camera could match that speed with its burst rate.
“But if we look at the resolution and the burst rate of Canon and Nikon flagship digital cameras from 2000 to 2012, we can clearly see a significant — and what some might describe as an exponential — improvement over time,” he says. “In the time frame between 2000 and 2012, something else very important happened: the Canon 5D Mark II came out in 2008 and it included a built-in video feature.”
The inclusion of video in a Canon camera had a dramatic effect on the industry to the degree that it arguably changed it forever.
“It completely reshaped the market because it allowed content creators to shoot 1080p video with a shallow depth of field,” he says. “This meant that any consumer could create a very cinematic look which used to cost a lot of money to produce.”
Even though Nikon and Panasonic had already offered video in an SLR form factor before Canon did, it didn’t matter. Raposo contends that because Canon was the market leader, it was Canon’s decision to integrate the feature that disrupted the entire industry, and it was not long before every new stills camera came packaged with video functionality.
Raposo argues this is exactly what he expects to happen with the R3. Despite the fact that Sony and Fujifilm already have cameras that do what Canon is finally promising the R3 will accomplish, Canon’s decision to offer it has the same potential to reshape the photo industry, just as its introduction of video in the 5D Mark II did.
He believes that it is entirely possible that the Canon R3 will once again reshape the media industry and what is expected of professionals. Raposo says that to this point in his career, he has always had to decide whether to capture a scene as a photo or as video.
“Photo and video have been and continue to be separate,” he says. “What I’ve never had to do is decide whether a picture or a video is a better choice: there has always been some type of compromise whether that be quality, resolution, or framerate.”
But what if photo and video stop being so separate? While Canon hasn’t stated if the R3 will shoot RAW video, Raposo is assuming it will, which changes things quite a bit.
“If there is, that will make the R3 the first camera developed by Canon that can shoot pictures and videos that look identical at the exact same speeds,” he says.
With that in mind, he asks if you are given the choice of shooting 30 photos in a second or 30 video frames in a second with the same quality and resolution, is there really a difference?
“Why don’t we all take a bunch of video clips and extract the frames we need?” he asks.
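That workflow is already possible today with any video file, for example by driving the ffmpeg command-line tool. The sketch below only builds the command; the filenames are placeholders and ffmpeg must be installed to actually run it (for instance via `subprocess.run`):

```python
def build_frame_dump_cmd(video_path, out_pattern="frames/frame_%05d.jpg"):
    """Build an ffmpeg command that dumps every frame of a clip as
    high-quality JPEG stills.

    video_path and out_pattern are placeholders for illustration;
    -qscale:v 1 requests the highest JPEG quality.
    """
    return ["ffmpeg", "-i", video_path, "-qscale:v", "1", out_pattern]
```

From there, picking the decisive moment is a matter of browsing the extracted stills rather than timing the shutter press.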
Raposo brings up an interesting point. Very quickly the differences between photo and video capture are converging and it is resulting in a camera landscape that might end up making both features virtually the same. It’s a situation that photographers don’t like to think about as it removes a lot of what makes photography a skill: preparing and being able to capture a decisive moment. The idea of a camera that can just record and not need a photographer to tell it when exactly to hit the shutter creates an uncomfortable situation for many photographers: what are we needed for?
What do you get when you take a bunch of former Wacom employees, start a new company, and give them carte blanche to develop a brand new pen tablet? What you get is Xencelabs, a new player in graphics tablets that is bringing some much-needed innovation to a stale market. This is no cheap knock-off we’re talking about: Xencelabs’ new Pen Tablet Medium has just put Wacom on notice.
For those of you who haven’t been following this space, it’s not that Wacom has been short of competition lately. XP-PEN and Huion in particular have been releasing high-quality pen tablets and pen displays at an alarming clip, while also charging a fraction of Wacom prices for a similar combination of core specs. We’ve reviewed a few of these products and have been duly impressed by what we found.
But both XP-PEN and Huion are very clearly Wacom knock-offs. They are high-quality knock-offs that offer similar performance for a lot less money, but knock-offs all the same. You can’t shake the feeling that you’re using a product designed to undercut Wacom, which usually means cutting a few corners when it comes to build quality, software, customer support, and extraneous features like wireless connectivity.
That’s where the Xencelabs Pen Tablet sets itself apart. It’s a true-blue competitor that meets or exceeds the most stringent build standards, adds some refreshing design elements, and checks all the professional-grade boxes.
Design and Build Quality
The Xencelabs Pen Tablet Medium is available in two different configurations: a standard kit that includes the tablet and two pens ($280), and a “bundle” that includes the tablet, two pens, and the Quick Keys express key remote ($360). Whichever configuration you choose, everything in the box simply oozes “premium” quality.
The tablet itself is built like a tank, with a 16:9 aspect ratio, a 10.33 x 5.8-inch active area, and a few really neat little design cues that make it very comfortable to use.
The active area is marked off on the corners by lighted insets that can be customized to a color of your choice, the bottom tapers to a smooth edge so you can comfortably rest your drawing hand on the tablet without a sharp edge digging into your palm, and the three built-in express keys at the top allow you to quickly access the tablet settings, adjust pen pressure, or switch displays if you’re using the tablet with multiple monitors.
That last feature is particularly useful to me, as I’m frequently drawing on a laptop hooked up to a secondary display. At the touch of a button I can now toggle the tablet mapping between laptop only, main display only, or both.
The lights around the active area are also incredibly convenient, as they can be set to different colors for different apps, giving you a quick reference to ensure the right app/shortcuts are active.
Finally, the surface of the tablet itself was tooled to give you just the right amount of “bite.” It is enough so that it feels like you’re drawing on a natural surface instead of slick plastic, but not so much that you notice the resistance fighting you. The surface texture is very similar in feel to my Intuos Pro, and definitely superior to the other third-party tablets I’ve tested.
The fact that Xencelabs includes not one but two different pens in the box is a brilliant move that further sets them apart from their main competition. The thick, traditional style pen includes three buttons while the thinner version has only two, but both include EMR erasers on the other end and they can be configured independently.
I mostly stuck to the thick three-button pen because it felt better in my hand and I like the extra customization, but I can imagine many users who will set up the pressure curves and shortcut keys of their two pens differently, and switching between them for different tasks. One pen for pen tool selections and another for brushwork, for example.
And since they both come in the same (very sturdy) pen case, it’s easy to keep everything together when you throw the tablet in your bag.
The Quick Keys Remote (Sold Separately)
If you decide to spend the additional $80 on the Pen Tablet Medium Bundle — and I suggest that you do — you’ll get all of the above plus the excellent Xencelabs’ Quick Keys remote.
The lack of traditional express keys on the Xencelabs Tablet is one of its few downsides, since the three customizable buttons at the top are not really meant to be used for common shortcuts. But for $360 — which is still $20 less expensive than the Wacom Intuos Pro Medium — you can get the tablet, both pens, and the Quick Keys Remote.
The remote features eight shortcut buttons, a multi-function adjustment dial with a light ring around it, and an OLED display that tells you what each button will do. The dial can be programmed to four different settings, each with its own light color, which you cycle through by pressing the button in the center. The OLED display, meanwhile, allows you to program up to 40 different shortcuts, cycling through a maximum of 5 sets of 8 shortcuts by pressing the button at the top of the remote.
Here, again, you see Xencelabs’ attention to every little detail: the customizable light color, the way the remote takes full advantage of its screen, and the ability to select from four different orientations depending on how you prefer to work.
As with the pens and tablet itself, the remote can be programmed differently for each app, with a different set of shortcuts, a different set of dial settings, and a different color scheme for each of those settings.
Everything about the design and build quality of this tablet and its accessories impressed me. I’ve used high-quality Wacom competitors before, but no product, not a single one, felt like Wacom’s equal until now. The materials that Xencelabs chose, the attention to every design detail, and the usability of all of the above set a new bar for graphics tablet design.
Usability and Performance
Xencelabs’ attention to detail didn’t stop at build and design, as the company put a lot of thought and effort into usability and performance as well.
The guided setup is really simple. It automatically detects all connected devices and loads them into a beautiful interface that lets you customize everything about the tablet, pens, and Quick Keys remote to your heart’s content.
However you choose to set things up, you’ll have the option of using the tablet plugged in or wirelessly via the included dongle. I’ll be honest: having to plug in a Logitech-like dongle to use the tablet wirelessly, when my computer already has Bluetooth built right in, is a bit of a drag, but Xencelabs insists that this allows it to cut down on latency and ensure a stable connection.
I can buy that… and I can attest that I never had any connection issues when using the tablet wirelessly, which I did almost exclusively after the initial setup.
You will need to plug the tablet back in when it runs short on battery, but many hours of use over the course of one month have only drained the battery of my tablet and Quick Keys by about 50%, so battery life is really not an issue. In many ways, the connectivity, charging, and usability of the devices remind me of my Logitech MX Master series keyboard and mouse. To borrow an overused phrase from Apple: it just works.
Performance was stellar. The tablet/pens boast an exacting pressure response that is extremely sensitive on the low end of the curve, and every built-in feature functioned as advertised. I even tested features I never use, like Mouse Mode, and nothing ever let me down.
In fact, from setup, through customization, through actually using the Xencelabs Pen Tablet as my main graphics tablet, I experienced only one major hiccup: in its current form, the tablet driver WILL NOT WORK if you have a Wacom tablet driver installed at the same time.
I’ve never run into this problem with any other tablet maker, but whatever the reason, you MUST delete your Wacom drivers before installing and using the Xencelabs tablet. Since many people are likely to be switching brands from Wacom if/when they buy this tablet, this is a very important point.
Xencelabs tells us they’re working on a proper fix, but before working with them to figure out my issues, the tablet was practically unusable. The cursor would jump between points, pressure sensitivity would fail, and some features would sometimes stop working outright. Hopefully by the time you receive your unit, this will be a moot point; until then, if you plan to use both Xencelabs and Wacom tablets on the same computer — even if you’re not using them at the same time — you’re going to have a bad time.
The only other “issue” I spotted is the lack of multi-touch functionality, something that Wacom does include in their Intuos Pro line. Honestly, I actually prefer not having touch functionality, since palm rejection fails as often as it succeeds on my Intuos, but your mileage may vary. If using multi-touch gestures to zoom or move along your canvas is important, you’re out of luck.
King of the Hill
As a reviewer, one of my jobs is to find the quirks and issues. I test features I don’t use, put the tablet through some frankly ridiculous tests, and exchange countless emails with Product Managers to make sure I’m not missing something. It makes me a bit of a pain as a reviewer, but it’s a good way to tease out the issues.
Usually, a first-generation product that tries to compete with the biggest player in the industry would fail in a few obvious ways, especially if it’s cheaper. Build quality, performance, customer support… something usually has to suffer. But that’s simply not the case here.
In every way that matters, the Xencelabs Pen Tablet Medium meets or exceeds my expectations and shows that there is still room for innovation in the graphics tablet space.
Fantastic build quality
Creative new ergonomic design
Ships with two different pens and sturdy pen case
Easy-to-use software with lots of customization options
Fantastic quick-keys remote with built-in screen
Tablet malfunctions if Wacom driver is installed
Quick-Keys remote sold separately
Only three built-in express keys
Wireless functionality requires separate dongle (included)
No touch/gesture functionality
Are There Alternatives?
Other than the elephant in the room, the main alternatives are the same tried and true names that come up in every graphics tablet review: XP-PEN and Huion. They’re not the only affordable third-party alternatives in the game, but they are the best, and the XP-PEN Deco Pro and Huion Inspiroy Dial tablets offer similar core features to the Xencelabs tablet and cost between $120 and $180 less.
You’ll get the same 8,000+ levels of pressure sensitivity from a battery-free pen, built-in dials and express keys, and software that has never given this writer trouble. In exchange, you give up some build quality; customer service is hit-or-miss; the included pens simply aren’t on the same level as those from Xencelabs or Wacom; and the XP-PEN Deco Pro does not feature any kind of wireless connectivity.
Should You Buy It?
There’s no other way to put it: as I write this, the Xencelabs Pen Tablet Medium is the best medium-sized pen tablet money can buy. They’ve leapfrogged Wacom on their first try, leaving me very excited to see what they’ll do next.
Xencelabs already told us they have a pen display in the pipeline. In the meantime, I will be trading in my Intuos Pro, and keeping a very close eye on the updates from this company.
Over the last several years, smartphone cameras have started to meet and exceed expectations when it comes to photo quality. For example, it could be argued that the current iPhone camera can capture better photos than the Nikon D100 from nearly two decades ago, while being much smaller and easier to use.
“Today’s smartphone cameras can make a better image than cameras I paid NZ$10,000 (~$7,110) for only 20 years ago,” says Tom Ang, who’s written over 30 books on photography and digital cameras.
The cameras in smartphones have gotten so powerful that many people opt not to carry a separate camera when traveling anymore, hence the collapse of the fixed-lens camera segment. But while modern smartphones are incredibly good from the perspective of their advancement over time, there is still room for them to improve before they can meet or exceed the performance of modern DSLR and mirrorless systems.
Those improvements may not be far off, either: according to a report from the BBC, smaller, sleeker, and more powerful smartphone cameras are closer than you think.
One Canadian company, Scope Photonics, aims to create lossless zoom for all kinds of photos, allowing photographers to capture zoomed-in close-ups that remain consistently sharp and free of the artifacts that typically plague current smartphone photos at maximum zoom. The company has been working on a technology that harnesses liquid crystals, much like those found in LCD TVs, allowing them to “spin like tops” and reorganize themselves based on how light moves through them. The effect mimics a zoom lens system: instead of relying on a series of stacked lenses, Scope’s system can zoom in and out with just a single lens.
This technology is being initially prototyped for medical devices, but the company aims to bring the lenses to smartphone systems within three years.
“I’m comfortable in predicting we can achieve 10 times zoom with our liquid crystals, but this innovation offers a lot of opportunity for growth so you never know where we’ll be at in a few years’ time,” Scope’s CEO Holden Beggs says.
Another start-up from Cambridge, Massachusetts — Metalenz — is looking at removing the “camera bump” that has become the norm over the last few years, even to the degree that it has grown to gigantic sizes in the latest phones like the Xiaomi Mi 11 Ultra.
The design uses a single lens built on a glass wafer between just one and three square millimeters in size; its silicon nanostructures manipulate light rays in a way that allows for brighter and sharper images compared to a standard lens element. Metalenz also aims to fine-tune focus for photo and video imaging on smartphone devices to ensure the camera is picking out the right object to put into focus.
A group of researchers in Utah have developed a lens a hundred times lighter and a thousand times thinner than the iPhone 11’s lenses. The new design is made of thousands of microstructures instead of one large curved element, which reduces the size while still correcting for color aberrations, simplifying the capture process. The reduction in weight, even by just a fraction of a gram, can have a significant impact on delicate technologies like satellites and drones as well.
This technology is also being developed first for the Department of Defense, but the team hopes to adapt it for smartphones within three to five years. With all of these technological advances, the key for the teams is to keep things simple once they reach smartphones.
“If you offer, say, true optical zoom on a smartphone that rivals proper cameras, you also raise the barrier to use. If people need an instruction manual for their smartphone camera, you’ve stuffed up,” Ang says.
AirDrop is a popular Apple feature that allows devices to share data, typically only between people who already know each other. By default, AirDrop only shows receiving devices that appear in the sender’s address book contacts. Functionally, AirDrop uses a mutual authentication mechanism that compares a user’s phone number and email address with entries in the other user’s address book.
Unfortunately, the researchers were able to find a way to learn those phone numbers and email addresses, even if the device attempting to make the connection is not known to the target device. As initially reported by Mashable, all that is required to perform the exploit is a Wi-Fi-capable device and physical proximity to a target, which initiates the discovery process by opening the sharing pane on an iOS or macOS device.
The researchers claim that the problems they discovered are rooted in Apple’s use of hash functions to “obfuscate” the exchanged phone numbers and email addresses during the discovery process.
“Researchers from TU Darmstadt already showed that hashing fails to provide privacy-preserving contact discovery as so-called hash values can be quickly reversed using simple techniques such as brute-force attacks,” a press release on the discovery reads.
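To see why hashing offers so little protection here, consider a minimal sketch (the phone numbers are hypothetical, and SHA-256 stands in for whatever hash the real protocol uses): because the space of valid phone numbers is tiny by cryptographic standards, an attacker can simply hash every candidate until one matches an intercepted value.

```python
import hashlib

def hash_contact(phone):
    # SHA-256 is a stand-in here for whatever hash the actual protocol uses.
    return hashlib.sha256(phone.encode()).hexdigest()

def brute_force(target_hash, prefix, digits):
    # Enumerate every number with the given prefix until a hash matches.
    for n in range(10 ** digits):
        candidate = prefix + str(n).zfill(digits)
        if hash_contact(candidate) == target_hash:
            return candidate
    return None

# An intercepted hash of a hypothetical number with a known area code...
leaked = hash_contact("+15550123456")
# ...is recovered almost instantly, because phone numbers have very low entropy.
recovered = brute_force(leaked, "+1555012", 4)  # → "+15550123456"
```

Even a full ten-digit number space is only 10 billion candidates, well within reach of commodity hardware, which is why hashing alone cannot make contact discovery private.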
The researchers say that they have informed Apple about the privacy vulnerability in May of 2019 via responsible disclosure. Tom’s Guide published a story in July of that year that summarizes the underlying issue.
According to the researchers, Apple has neither acknowledged the problem nor indicated that they are working on a solution.
“This means that the users of more than 1.5 billion Apple devices are still vulnerable to the outlined privacy attacks. Users can only protect themselves by disabling AirDrop discovery in the system settings and by refraining from opening the sharing menu,” the researchers say.
As Apple has not indicated that a solution is in the works, the researchers say they have developed what they call “PrivateDrop” to replace the “flawed original AirDrop design.”
“PrivateDrop is based on optimized cryptographic private set intersection protocols that can securely perform the contact discovery process between two users without exchanging vulnerable hash values. The researchers’ iOS/macOS implementation of PrivateDrop shows that it is efficient enough to preserve AirDrop’s exemplary user experience with an authentication delay well below one second,” the press release says.
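The idea behind private set intersection can be illustrated with a toy Diffie-Hellman-style protocol. This is only a sketch with a small modulus and a simplistic hash-to-group step, not PrivateDrop’s actual construction: each party blinds its hashed contacts with a secret exponent, the parties exchange and re-blind each other’s values, and only shared contacts produce matching double-blinded values.

```python
import hashlib
import secrets

P = 2**127 - 1  # toy Mersenne prime; real protocols use proper elliptic-curve groups

def h(x):
    # Map a contact identifier to a group element (toy hash-to-group).
    return int(hashlib.sha256(x.encode()).hexdigest(), 16) % P

def private_intersection(contacts_a, contacts_b):
    # Each party holds a secret exponent it never reveals.
    a = secrets.randbelow(P - 2) + 1
    b = secrets.randbelow(P - 2) + 1
    # Each party blinds its own contacts before sending them.
    blinded_a = [pow(h(x), a, P) for x in contacts_a]
    blinded_b = [pow(h(x), b, P) for x in contacts_b]
    # Each side applies its own exponent to the other's blinded values.
    double_a = [pow(v, b, P) for v in blinded_a]   # h(x)^(a*b) for A's contacts
    double_b = {pow(v, a, P) for v in blinded_b}   # h(x)^(b*a) for B's contacts
    # Exponentiation commutes, so only shared contacts line up; A learns the
    # intersection without ever seeing B's raw phone numbers or hashes of them.
    return [x for x, v in zip(contacts_a, double_a) if v in double_b]
```

Unlike the brute-forceable plain hashes, the exchanged values here are blinded by secret exponents, so an eavesdropper cannot enumerate candidates and check them offline.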
Both Tom’s Guide and Mashable — in 2019 and 2021, respectively — have advocated turning off AirDrop in order to protect devices from this exploit. PetaPixel contacted Apple for comment but did not immediately receive a response.
Waldo Photos has launched a new AI-powered mobile sales platform that works together with a technology called FaceBlocker. Together, they allow photographers to copyright-protect proofs, prevent theft, and maximize sales opportunities.
When combined with the company’s mobile sales platform, the new FaceBlocker technology addresses several key protection issues professionals often run into.
Waldo Photos is a platform for photographers that allows them to easily share photos with their online community via automated proof delivery, facial recognition, jersey recognition, and AI-powered sorting. The launch of WaldoPro adds some interesting additional features to the platform:
Photomanager – an AI-powered SaaS platform for hosting, managing and publishing photos with advanced analytics
Sell Photos – a mobile sales platform leveraging Faceblocker copyright protection, text-based proof delivery, mobile app ordering process, and drop ship print delivery.
Share Photos – automated mobile delivery platform for event photography
Member Connect – tools for marketing and remarketing, including personalized direct mail and a text-based communication platform.
The AI-driven system drives better engagement and discovery by potential clients, as their proofs are delivered via SMS alerts, giving the photographer a 100 percent contactless sales model and options to sell additional images even months after the initial shoot. Add in the FaceBlocker service, which lets the photographer deliver easily accessible, good-quality proofs to the client’s smartphone while making them difficult to screenshot or copy instead of purchasing, and the company has a pretty robust automated and protected sales system.
Waldo Photos says that over the last year, the FaceBlocker technology has been used by professional photographers at national dance and cheer competitions, Miss America, Miss USA, national gymnastics competitions, and more. Additionally, photographers who have adopted this system have reported an increase in after-event sales of over 100%. Those are lofty claims, but the system seems designed to create a powerful sales funnel, so they’re not particularly surprising.
FaceBlocker takes the images uploaded to Waldo Photos and places the Waldo logo over the facially recognized face of the potential purchaser. The FaceBlocked proofs are then delivered to the client’s mobile device, where the client can tap their face on the screen to remove the logo. However, when the client’s face is revealed, the rest of the photo is blurred. This makes it easy for the client to get an idea of what the whole image will look like, while at the same time making it nearly impossible to save or screenshot without placing a legitimate purchase order.
Once the client has purchased an image, the photos can be delivered through the Waldo platform without any obstruction or blurs, and if they order prints, those can be shipped directly to the purchaser which Waldo says saves the photographer from dealing with hours of back and forth with the print labs and shipping companies.
The FaceBlocker tech alone is an unusual and likely highly effective way of monetizing every photo, and combined with Waldo’s interface, the whole platform seems tailor-made to help photographers make the most of their work by addressing real problems they face daily.
Waldo offers a demo and a 30-day trial via its website for those interested in seeing if the platform is a fit for their business.