Facial Recognition Software: An Update on Quickly Developing Tech


Our previous post discussed the advances in facial recognition for travel and security applications.  Another eagerly anticipated application of facial recognition technology is its rumored inclusion in an upcoming iPhone model and other contemporary phones.  While Apple is notoriously tight-lipped about future products, it is widely expected that the next iPhone will be announced on September 12, 2017, and will include facial recognition technology.  Much of the current media excitement was fueled by a recent Korean news report stating, “The new facial recognition scanner with 3-D sensors can deeply sense a user’s face in the millionths of a second.”  Many analysts are predicting that facial recognition software will supplant fingerprint sensing as the primary security biometric on most new smartphones.

Can the Tech Live Up to Expectations?

For this type of new feature to be successful, facial recognition technology has to tackle three important issues: accuracy, speed, and low-light performance.  Of course, most importantly, the recognition has to work well. What good would facial recognition be if the algorithm never matched anyone (or matched everyone)?

It is envisioned that facial recognition may be used to unlock a user’s phone and even to replace passwords and passcodes for financial transactions such as Apple Pay, Google Wallet, and mobile banking.  With bank accounts and personal information protected behind phone security, a failure in transitioning from authentication via “Touch ID” to relying on facial imaging could be very costly.

For instance, previous facial recognition systems have had various limitations and have even been shown to be fooled by printed photographs.  Newer systems take advantage of 3D sensors using depth-sensing technology or “structured light.”  These systems follow a step-by-step facial recognition process:

  1. A structured light source (typically in the infrared spectrum) projects a known pattern of illumination onto an object
  2. The object, such as a face, distorts this projected light pattern
  3. An imaging sensor, or camera, captures a 2D image of the reflected infrared light, which remains invisible to the naked eye
  4. Sophisticated algorithms compute 3D information from this received data for comparison
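As a rough sketch of the final step, assuming a simple pinhole-triangulation model and hypothetical calibration constants (the baseline and focal length below are illustrative, not from any real sensor), depth can be recovered from how far each projected infrared feature shifts in the captured image:

```python
import numpy as np

# Hypothetical calibration constants for a structured-light module
BASELINE_MM = 20.0   # distance between IR projector and sensor
FOCAL_PX = 600.0     # sensor focal length expressed in pixels

def depth_from_disparity(disparity_px):
    """Triangulate depth (in mm) from the pixel shift of a projected dot.

    A dot reflected by a nearer surface appears shifted further from its
    reference position, so depth is inversely proportional to disparity.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    return BASELINE_MM * FOCAL_PX / disparity_px

# Dots shifted 30 px and 40 px map to surfaces at 400 mm and 300 mm
print(depth_from_disparity([30.0, 40.0]))  # [400. 300.]
```

Real devices repeat this computation for thousands of projected features per frame to build a dense depth map of the face.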

Unlike searching through a large database of criminals or terrorists, the received data will only have to be compared with the facial profile for the device’s owner(s).

The correlation threshold between the baseline facial scan data and a subsequent scan may determine how secure the technology is. For instance, what if the facial recognition finds a match between the baseline scan and your twin brother, but refuses to recognize you after you grow a beard?  The upcoming products—and consumer reaction—will demonstrate if these new 3D sensors and algorithms can provide the necessary level of security.
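The threshold trade-off can be illustrated with a toy matcher. In this sketch (the vectors and the 0.90 threshold are entirely hypothetical), a face scan is reduced to a feature vector, and a probe is accepted only if its cosine similarity to the enrolled profile clears the threshold:

```python
import numpy as np

def is_match(enrolled, probe, threshold=0.90):
    """Accept the probe scan only if its cosine similarity to the
    enrolled profile meets the (hypothetical) threshold."""
    cos = np.dot(enrolled, probe) / (
        np.linalg.norm(enrolled) * np.linalg.norm(probe))
    return cos >= threshold

enrolled = np.array([0.9, 0.1, 0.4])
same_person = enrolled + np.array([0.02, -0.01, 0.03])  # small change, e.g. lighting
twin = np.array([0.1, 0.9, 0.4])                        # different facial geometry

print(is_match(enrolled, same_person))  # True
print(is_match(enrolled, twin))         # False
```

Raising the threshold rejects look-alikes more reliably but also rejects the true owner more often after appearance changes; tuning that balance is exactly the security question the upcoming products must answer.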

Facial Recognition Preparing for Widespread Commercial Use

Since these 3D sensors utilize infrared energy, the devices should be able to perform facial recognition even in the dark, allowing the user to unlock their phone in complete darkness.  While cinema-goers may be disappointed, relying on visible light or even a flash is simply not a workable solution.  Finally, recognition speed is an additional important factor.  Bloomberg reports that users may be able to unlock their new iPhones “within a few hundred milliseconds,” which may make the feature appealing and encourage acceptance.

Qualcomm was the first to publicly announce products with this technology, expanding its Spectra camera platform for Android devices to enable improved biometric authentication and high-resolution depth sensing, “utilizing active sensing for superior biometric authentication, and structured light for a variety of computer vision applications that require real-time, dense depth map generation and segmentation.”

Structured-light systems, as well as related patents, have been around for a while.  One patent that may appear relevant, dated around 1998, discloses a “system for determining a three-dimensional profile of an object” using a light-source to project a structured light pattern on the object and an image-detecting device to detect a sequence of images.

Apple’s ’177 patent


Much of the recent iPhone discussion has been a result of patents granted to Apple this year.  In one Apple patent, the specification describes a “method for face detection” using a depth sensor to capture three-dimensional data and a camera to capture a two-dimensional image of the scene.  Of course, patents do not guarantee that the technology will be implemented in any upcoming devices.

But if all the rumors are correct, we may only have to wait until September 12th to see which facial recognition features make it into the next iPhone. An earlier Apple patent, filed in 2011, discloses unlocking a mobile device using facial recognition that “capture[s] a subsequent image in response to determining that the device moved to a use position, analyze[s] the subsequent image to detect a user’s face, and unlock[s] the device in response to detecting the user’s face.”

With facial recognition technology on the verge of widespread use in smartphones, it will have a significant impact on intellectual property as more companies seek to incorporate facial recognition into their own products. TechPats’ technical expertise and patent knowledge can help companies protect their intellectual property through our licensing support, patent mining, and patent monetization services. Learn more today!


The Future of Artificial Intelligence – Will Robots/Machines Outsmart Humans?


Recent news in technology related to Artificial Intelligence (AI) has yet again raised one of the most frightening questions from science fiction: Will the technology progress to a point where a machine, computer or robot, will be in a position to control society, in part or as a whole?

The follow-up questions, of course, are how far off is that time and can we prevent it? Well, right now, even the experts and those who work in the AI field certainly do not agree.  At one extreme we have those like Elon Musk and Stephen Hawking, who worry that AI will bring an end to humanity, while at the other end of the spectrum we have those like Mark Zuckerberg, who believe AI will improve humanity and don’t foresee any significant risks.  While the possibilities for AI applications may well be endless, if even a few of our time’s top minds and great inventors disagree about the potential for danger, perhaps it’s time to consider this question as something more than just a recycled Hollywood plot.

Interesting Demonstrations of AI Technology

One peculiar instance was when Facebook reportedly abandoned an experiment in which two chatbots were instructed to negotiate and barter to swap hats, balls, and books between themselves.  In the experiment, the chatbots developed a language of their own for conducting the negotiations, a sort of “shorthand” that was understandable only to them.  According to Forbes’ Tony Bradley, researchers from Facebook’s AI Research Labs (FAIR) found that the chatbots had “deviated from the script and were communicating in a new language developed without human input.” Facebook stressed that the experiment was shut down because it wants the bots to be able to communicate with humans, not because of the strange results.

Recently, a bot from OpenAI, an Elon Musk-backed company, beat professionals in the popular real-time strategy battle computer game Dota 2.  Other bots have been able to beat champions at chess, poker, Go, and even Jeopardy, but what is interesting about the OpenAI bot is that it initially did not know how to play the game. That is, OpenAI’s bot learned how to play and win from scratch by playing against itself in the cloud.

The bot did not always beat the pros, and the game was limited to one-on-one play, but the important takeaway is that the bot learned how to play without being purposely programmed for the game. Dota 2 generates a lot of viewership and revenue in the e-sports realm, and similar reports about AI beating professional poker players at Texas Hold ‘Em have people worried about the legitimacy of the competitions.

AI in Everyday Life

Today, machines using AI are in our everyday life; some examples include:

  • Virtual Assistants – Siri, Alexa, IBM’s Watson
  • Self-Driving Vehicles – Domino’s Pizza delivery vehicle, Google’s Waymo, Tesla’s Autopilot
  • Customer Service – Chatbots such as Slack’s Growthbot
  • Warehouse – Amazon automates many picking and packing processes
  • Financial – Machine learning at financial institutions, including monitoring spending for fraud detection
  • Medical – Autonomous and assisted surgeries

AI has demonstrated major improvements in cancer and disease diagnoses. In each of these instances, the respective AI algorithm is designed for a limited, specific function and thus only poses a threat to the related job market, not to all of humanity.  It is not these directly beneficial uses of AI that Elon Musk is asking us to worry about. Sure, some jobs may ‘disappear’ (e.g., a transition similar to industrial automation), but it is the misdirection of the technology in areas such as autonomous weapons and financial applications that has actual destructive potential. It’s hardly a far stretch when considering that many of the newer bots boast an ability to self-teach via self-play or experimentation. How do we balance the advancements that Artificial Intelligence has given us with the fear that it might overtake us?

Artificial Intelligence Future

Certainly, the pace of Artificial Intelligence development is not slowing down, and the field is rich for development and investment. Perhaps, for now, the more powerful (and potentially scary) AI programs might be limited by expensive hardware and access, but even that is changing rapidly. In the meantime, it never hurts to stay vigilant and cognizant to make sure humanity always benefits. With regard to intellectual property in the artificial intelligence realm, TechPats will continue to rely on our multi-disciplinary technical and patent expertise while monitoring the landscape of this rapidly expanding technology. Learn more today.

Augmented Reality Update


When most people think of VR (virtual reality) or AR (augmented reality), they think of silly glasses or cumbersome headsets used by gamers.  However, with the rumored release of the new iPhone and other recent advances in smartphone hardware and software platforms, we may soon see a “killer app” that could finally bring this technology to the mainstream.  We know from our prior post that Virtual Reality is a computer-generated simulation of a 3D environment allowing users to interact in a seemingly real way.  Augmented Reality is simply the superimposition of computer-generated images onto a user’s view, giving them an enhanced view of their surroundings.  While much of this is geared towards gaming or other entertainment applications, recent AR developments are hoped to lead to many new practical applications, guiding users through tasks ranging from GPS navigation to brain surgery.

There have been huge strides in the development of Augmented Reality systems.  Google Glass, a highly anticipated breakthrough product, was placed on hold in early 2015 after a lukewarm reception that included concerns over the high price tag and privacy.  Google announced its latest attempt at AR with the Google Lens platform at this year’s developer conference.  This technology works with an ordinary smartphone and takes advantage of Google’s vast machine learning experience.  Some things Google Lens can do without any extra hardware include identifying a type of flower you are viewing through your phone’s camera or accessing restaurant reviews and information by pointing your phone at the storefront.

Many analysts believe this functionality is a priority at Google, as they likely look to progress beyond web pages and text to images and videos. Google CEO Sundar Pichai stated at the conference that “the fact that computers can understand images and videos has profound implications for our core mission.”

Meanwhile, Apple is moving full speed ahead with AR features as well, and AR may likely be prominent in the upcoming iOS 11 and rumored new iPhone models.  Apple has publicly demonstrated and released ARKit, the development platform for bringing AR apps to iOS 11.  ARKit combines the iPhone hardware – i.e., the camera, motion sensors, and graphics processors – with algorithms for depth sensing and artificial light rendering, including scale estimation, motion tracking, and ambient light estimation.  At the Apple Worldwide Developers Conference in June 2017, one of the standard AR demos featured placing virtual objects such as a lamp, a vase, and a cup of coffee on a table — and moving them around.  According to Apple, “ARKit is the largest AR platform in the world.” Reports suggest potential hardware in a future iPhone could include additional components targeted specifically at AR features, including a custom rear-facing 3D laser system to enable better depth detection, as well as a more accurate type of autofocus for photography. As more examples and proof-of-concept demos roll out this summer, many analysts (and iPhone users) expect application developers to soon fill the App Store with games and utilities that leverage ARKit.

The patent landscape is certainly busy in the field of AR.  A recent patent application from Facebook shows a waveguide display with a two-dimensional scanner that can display images to a user in, e.g., a pair of glasses.  Another patent application from Snapchat earlier this year discloses a different approach, i.e., using a database of images and, upon detecting the location of a user, inserting an appropriate image into their view while minimizing local processing.  Of course, published applications are much different than granted patents, and products based on the specifications may never even reach prototyping, let alone public sale, but it’s evident that leaders in Silicon Valley have set their augmented sights on securing IP in the field.

This should continue to be a busy year in the field of Augmented Reality.  After some significant product announcements, the predicted pre-holiday season smartphone roll-out should provide an optimal hardware platform for evolving AR features.  While previous implementations of AR in glasses did not thrive, many consumers have tasted simple AR on their phones through real-time video stickers and filters, as well as popular games like Pokemon GO. With faster processors, more memory, better graphics, and improved wireless connections, smartphones have become ground zero for the AR revolution. With the imminent transition from goofy gadgets to globally used devices and real-world applications, the path to widespread acceptance may be just one “killer app” away.

Augmented Reality is just around the corner. Learn more about how you can protect your augmented reality patents by contacting us today.

Blockchain Technology Basics


Blockchain technology is gaining momentum and has the potential to make a far greater impact than the web browser technologies of the 1990s. The term ‘Blockchain Technology’ covers a range of technologies that support systems for clearing and executing transactions between two parties without the need for a central authority or trusted party.

Blockchain technology is based on the concept of a public ledger that exists on a distributed database and is maintained collaboratively by a network of computers.  For any given update, however, only one computer in the network gets to append to the ledger.

Think of transferring money from your account to another party’s account directly and instantly, without the typical 3-5 business day delay for clearing house (trusted party) processing.  This is accomplished by simply sending out an encrypted email containing the transaction details, such as amount and account numbers, to the recipient.  Public key encryption is used to authenticate the source of the email transaction.  The email is also received by the network of computers that maintain the public distributed ledger.  One and only one computer in the network gets to update the ledger, based on the outcome of specific algorithms. Each transaction is time-stamped and linked to the previous transaction (chained) using encryption.
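The time-stamping and chaining described above can be sketched in a few lines. This is a deliberately minimal illustration (the transactions and field names are hypothetical, and real blockchains add signatures and consensus on top): each block stores a timestamp, a transaction, and the hash of the previous block, so editing any earlier entry breaks every later link.

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash a block's contents, including the previous block's hash."""
    body = {k: block[k] for k in ("timestamp", "transaction", "prev_hash")}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(transaction, prev_hash):
    """Create a time-stamped block chained to its predecessor."""
    block = {"timestamp": time.time(),
             "transaction": transaction,  # e.g. amount and account numbers
             "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain):
    """Recompute every hash; any edit to an earlier block breaks the chain."""
    return all(
        block["hash"] == block_hash(block)
        and (i == 0 or block["prev_hash"] == chain[i - 1]["hash"])
        for i, block in enumerate(chain)
    )

# A tiny ledger: a genesis block plus one follow-on transfer
ledger = [make_block({"from": "alice", "to": "bob", "amount": 25}, "0" * 64)]
ledger.append(make_block({"from": "bob", "to": "carol", "amount": 10},
                         ledger[-1]["hash"]))
print(chain_is_valid(ledger))  # True
```

Because each hash covers the previous hash, tampering with an old transaction would force an attacker to recompute every subsequent block, which the rest of the network would reject.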

The concept of blockchain was developed by Satoshi Nakamoto in 2008 and later implemented as a core component of the digital currency Bitcoin.  The concept of the public ledger is now being extended to encompass any type of data, such as contracts, process automation, titles, and corporate identity.

NBCUniversal, in collaboration with Disney, Altice USA, Channel 4 (UK), Cox Communications, Mediaset Italia, and TF1 Group (France), will work on a new and improved advertising approach that would facilitate the secure exchange of non-personal audience insights for addressable advertising.

The use of Blockchain technology is now legal in Delaware effective August 1, 2017.  The law allows state corporations to “use networks of electronic databases (examples of which are described currently as ‘distributed ledgers’ or a ‘blockchain’) for the creation and maintenance of corporate records, including the corporation’s stock ledger.”

Corporations rely on intermediaries like clearinghouses, custodians, exchanges, fiduciaries, or banks to settle transactions. Each intermediary has to verify transactions with their own ledgers, which adds time and cost to each transaction.

With blockchain technology, peer companies can collectively record all transactions digitally and validate them without a third party anywhere in the world and without intermediary fees. Transactions that would take days or even weeks with traditional ledgers can be settled in minutes.

The three main characteristics of the enormous disruptive power of Blockchain Technology are decentralization, transparency, and speed.  The prospects for improving society with a new wave of innovations are intriguing and exciting.  Stay tuned!

Take Off and Land with Facial Recognition


One of the greatest catalysts to the adoption of a new technology is the degree to which it helps address problems in society.  One current problem that certainly could use some help is improving the safety and convenience of commercial air travel.  The technology currently getting attention uses facial recognition to enhance travel security and to streamline airline check-in and boarding.



Early examples of facial recognition technology began in the 1980s. Kohonen demonstrated that a simple neural net could perform face recognition by computing a face description using the eigenvectors of the face’s autocorrelation matrix.  These eigenvectors are now known as “eigenfaces.”  Although Kohonen’s system was not practical, Kirby and Sirovich soon introduced algorithms that could more easily calculate these eigenfaces, sparking vast research into the field.
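The eigenface idea can be sketched with toy data standing in for real face images (the random matrix below is purely illustrative): stack flattened images as rows, center them, and take the top right-singular vectors of the data as the “eigenfaces” onto which any new face is projected to get a compact descriptor.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for a training set: 20 "face images" of 8x8 pixels,
# each flattened into a row of a 20x64 matrix
faces = rng.normal(size=(20, 64))

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# The right singular vectors of the centered data are the eigenvectors
# of its covariance matrix: the "eigenfaces"
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:5]  # keep only the top 5 components

def describe(face):
    """Project a face onto the eigenfaces to get a compact descriptor."""
    return eigenfaces @ (face - mean_face)

print(describe(faces[0]).shape)  # (5,)
```

Recognition then reduces to comparing these short descriptor vectors instead of full images, which is what made the approach computationally attractive.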



Many patents have been granted since then.  For example, Samsung was granted a patent in 2003 that described dividing a facial image into components (e.g., eyes, nose, mouth) and processing each component image instead of the entire facial image.  Apple has also recently been active in this field, considering various ways facial recognition can be used on portable devices.

Returning to air travel, JetBlue recently announced it was working with US Customs and Border Protection (CBP) to test new self-boarding procedures, becoming the first airline to integrate facial recognition to verify customers at the gate during boarding.  The program will start trials in Boston using a custom-designed camera station that connects to CBP, matching travelers against the CBP database of passport, visa, or immigration photos and verifying flight details before allowing boarding.

Delta also recently began using biometric technology, launching a program in Minneapolis that gives travelers a self-service bag drop, using facial recognition to safely and securely check their own bags, potentially processing twice as many customers per hour as standard methods.

The National Institute of Standards and Technology (NIST) has issued a number of performance reports on facial recognition.  Its 2013 results showed that accuracy had improved by up to 30 percent since its 2010 report, based on an evaluation of over 75 different algorithms from 16 providers. In February 2017, NIST began a new evaluation method and will publish ongoing results to provide vendors with the most up-to-date data.

Of course, there are many privacy concerns with using this sort of technology for airline travel.  The Biometric Exit program has been debated by politicians since 9/11. This system, using technology similar to the boarding and bag-check systems, allows Homeland Security to verify that visitors to the US are scanned upon exit and do not overstay their visit.  The program has been accelerated by the current administration’s new immigration procedures. Enhanced security, and the desire by airlines to improve service while saving costs, will likely prevail and allow for further adoption and advancement of facial recognition technology.

Self-Driving or Autonomous – What is the Difference?


More car and technology companies have been teaming up to develop technology that makes a self-driving or autonomous car available to the consumer.  With the increased awareness of these vehicles, a natural question arises: are the terms self-driving and autonomous, in reference to a vehicle, the same?

The current NHTSA Federal Automated Vehicle Policy, which sets the testing guidelines for US DOT, has adopted the SAE International definitions for the levels of automation for vehicles.  These definitions divide vehicle automation into six levels (0 through 5), each with an increasing amount of automation and a decreasing amount of driver involvement.  The following outlines the specifics of each level.

  • In SAE Level 0, the human driver does all tasks related to operating the vehicle.
  • In SAE Level 1, an automated system on the vehicle can sometimes assist the human driver.  These exist today in vehicle warning systems, such as blind spot detection, back-up detection, and lane departure detection.
  • In SAE Level 2, an automated system on the vehicle can conduct some parts of the driving task while the human driver monitors the environment and performs the rest of the required driving tasks.  These exist today in systems such as advanced cruise control, parking assist, lane keep assist, and automatic braking.
  • In SAE Level 3, an automated system can both conduct some parts of the driving task and monitor the driving environment in some instances, but the human driver must be ready to take back control when the system requests.  These exist today in vehicles from Tesla and Mercedes for use in highway environments where the lanes of the road are clearly marked.  GM will debut a similar system this fall in the Cadillac CT6 sedan.
  • In SAE Level 4, an automated system can both conduct the driving task and monitor the environment, without the need for a human driver to take back control.  Operation is limited to certain environments and conditions.  These systems are currently being tested by Google, Uber, Apple, and Samsung, and have been tested in trucks by Volvo, Otto (Uber-owned), and Daimler (Mercedes-Benz).
  • In SAE Level 5, the automated system can perform all driving tasks, under all conditions that a human driver could handle.

Based on these policy definitions, an autonomous vehicle at Levels 4 and 5 certainly is self-driving, but a self-driving vehicle at Level 3 is not autonomous: it is limited in its operating environment and requires a human driver who can take control when needed.  Cars with self-driving capability are available today, with manufacturers continuing to add the feature to more models each year.  Analysts predict that by 2020, cars with self-driving capability will take off and be widely available.  Making autonomous cars available to the public will require innovations in sensor size, manufacturability, and cost, an expanded detailed map database, and public acceptance.  Analysts predict that by 2030, autonomous vehicles will be in use in cities and urban areas.
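The self-driving vs. autonomous distinction drawn above can be captured in a small lookup (the one-line summaries are paraphrases of the SAE level descriptions, not official wording):

```python
# Condensed paraphrase of the SAE automation levels discussed above
SAE_LEVELS = {
    0: "human does everything",
    1: "system assists (warnings only)",
    2: "system conducts some tasks; human monitors",
    3: "system drives and monitors sometimes; human must take over on request",
    4: "system drives and monitors in limited conditions; no takeover needed",
    5: "system drives under all conditions a human could",
}

def is_self_driving(level):
    """A vehicle that can conduct the driving task (Level 3 and up)
    counts as self-driving under this reading of the policy."""
    return level >= 3

def is_autonomous(level):
    """Only Levels 4 and 5, which never require a human fallback,
    count as autonomous."""
    return level >= 4

print(is_self_driving(3), is_autonomous(3))  # True False
```

The helper functions make the article's point concrete: every autonomous level is self-driving, but Level 3 is self-driving without being autonomous.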

3D Printing Technologies: An Overview


3D printing is sometimes referred to as Additive Manufacturing (AM). In 3D printing, one creates a design of an object using software, and the 3D printer creates the object by adding layer upon layer of material until the shape of the object is formed.  The object can be made using a number of printing materials, including plastics, powders, filaments and paper.

There are a number of 3D printing technologies, and this article provides an overview of those technologies.

Stereolithography (SLA)

Stereolithography makes use of a liquid plastic as the source material, and this liquid plastic is transformed into a 3D object layer by layer.  Liquid resin is placed in a vat that has a transparent bottom.  A UV (ultraviolet) laser traces a pattern on the liquid resin from the bottom of the vat to cure and solidify a layer of the resin.  The solidified structure is progressively dragged up by a lifting platform while the laser forms a different pattern for each layer to create the desired shape of the 3D object.

Schematic representation of Stereolithography: a light-emitting device a) (a laser or DLP) selectively illuminates the transparent bottom c) of a tank b) filled with a liquid photo-polymerizing resin. The solidified resin d) is progressively dragged up by a lifting platform e)

Digital Light Processing (DLP)

3D printing DLP technology is very similar to stereolithography but differs in its light source and its use of a liquid crystal display panel.  This technology makes use of more conventional light sources, and the light is controlled using micro mirrors to direct the light incident on the surface of the object being printed.  The liquid crystal display panel works as a photomask.  This mechanism allows a large amount of light to be projected onto the surface to be cured, thereby allowing the resin to harden quickly.

Fused Deposition Modeling (FDM)

With this technology, objects can be built with production-grade thermoplastics.  Objects are built by heating a thermoplastic filament to its melting point and extruding the thermoplastic layer by layer.  Special techniques can be used to create complex structures.  For example, the printer can extrude a second material to serve as support material for the object being formed during the printing process.  This support material can later be removed or dissolved.

Fused deposition modelling: 1-Nozzle ejecting molten material, 2-Deposited material (modeled part), 3-Controlled movable table

Selective Laser Sintering (SLS)

SLS has some similarities with stereolithography.  However, SLS makes use of powdered material that is placed in a vat. For each layer, a layer of powdered material is placed on top of the previous layer using a roller, and then the powdered material is laser sintered according to a certain pattern to build up the object being created.  Interestingly, the portion of the powdered material that is not sintered provides the support structure, and this material can be removed after the object is formed and re-used.

Selective Laser Sintering Process

Selective Laser Melting (SLM)

The SLM process is very similar to the SLS process.  However, unlike the SLS process, where the powdered material is only sintered, the SLM process fully melts the powdered material.

Electron Beam Melting (EBM)

This technology is also much like SLM, but it makes use of an electron beam instead of a high-powered laser.  The electron beam fully melts a metal powder to form the desired object.  The process is slower and more expensive than SLM, with greater limitations on available materials.

Laminated Object Manufacturing (LOM)

This is a rapid prototyping system. In this process, layers of material coated with adhesive are fused together with heat and pressure and then cut into shape using a laser cutter or knife.  More specifically, a foil coated with adhesive is overlaid on the previous layer, and a heated roller activates the adhesive to bond the two layers.  Layers can be made of paper, plastic, or metal laminates. The process can include post-processing steps such as machining and drilling.  This is a fast and inexpensive method of 3D printing.  Because it relies on adhesion, no chemical process is necessary, and relatively large parts can be made.

Laminated Object Manufacturing


Artificial Intelligence and its Potential Implications on Patents


Artificial Intelligence (AI) is a technology that has seen a profound rise in attention within the last few years.  The increase in the technology’s media coverage has been the result of consistent progress in improving its capabilities. The expansion of AI’s capabilities has fostered its adoption in areas such as business, medicine, and automotive. The concept of AI was once the subject of imaginative thinking found in the realm of literature: stories ranging from Mary Shelley’s Frankenstein to David Mitchell’s 2004 novel Cloud Atlas address the concept of and questions surrounding AI. A machine’s performance of cognitive functions associated with the human mind, such as understanding language, problem-solving, and learning, is what classifies it as AI.

The origin of Artificial Intelligence (AI) can be traced to the works of Leibniz, Boole, and Turing to name a few. The field of modern AI research was born at a famous 1956 conference at Dartmouth College from the work of five scientists from Carnegie Mellon, MIT, and IBM: Newell, Simon, McCarthy, Minsky and Samuel. They predicted that machines would be able to perform any work that a human can perform within a generation.  The field of AI has grown dramatically in the last 60 years producing many commercial products and services along the way.

Some basic technologies that comprise AI include:

Boolean Search

These are algorithms that provide a type of search allowing users to combine keywords with operators such as AND, NOT, and OR to produce more relevant results. For example, a Boolean search could be “receiver” AND “cable box”, which would limit the search results to only those documents containing both keywords.
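The operators above can be sketched in a few lines of Python. This is a toy illustration: the documents and the word-level matching are invented for the example, and the article's phrase query "cable box" is simplified to the single keyword "cable".

```python
# Toy Boolean search over a tiny document collection (real search engines
# use inverted indexes for scale, but the operator logic is the same).
docs = {
    1: "the receiver connects to the cable box",
    2: "the cable box streams video",
    3: "the receiver tunes radio stations",
}

def matches(text, term):
    # A document matches a term if the term appears as a word in its text.
    return term in text.split()

# "receiver" AND "cable": only documents containing both keywords.
and_hits = [d for d, t in docs.items() if matches(t, "receiver") and matches(t, "cable")]
print(and_hits)  # [1]

# "receiver" OR "cable": documents containing either keyword.
or_hits = [d for d, t in docs.items() if matches(t, "receiver") or matches(t, "cable")]
print(or_hits)  # [1, 2, 3]

# "cable" NOT "receiver": documents with "cable" but without "receiver".
not_hits = [d for d, t in docs.items() if matches(t, "cable") and not matches(t, "receiver")]
print(not_hits)  # [2]
```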

Natural Language Processing (NLP)

NLP comprises AI algorithms that allow computers to process and understand human languages.

Natural Language Search (NLS)

NLS comprises algorithms that perform searches by identifying content that matches a topic described by a user in plain language.

Machine Learning

Machine learning is a method of data analysis that automates analytical model building. Using algorithms that iteratively learn from data, machine learning allows computers to find hidden insights without being explicitly programmed where to look.
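As a minimal sketch of that idea, the snippet below "learns" the slope and intercept of a line from example data by iteratively reducing its prediction error, without those values ever being programmed in. The data, learning rate, and iteration count are illustrative assumptions.

```python
# Minimal machine learning example: fit y = w*x + b to example data by
# iteratively reducing the squared prediction error (gradient descent).
# The true rule (y = 2x + 1) is never given to the program; it is inferred.
data = [(x, 2.0 * x + 1.0) for x in range(10)]

w, b, lr = 0.0, 0.0, 0.01  # start with a wrong model and a small learning rate
for _ in range(5000):
    dw = db = 0.0
    for x, y in data:
        err = (w * x + b) - y  # how far off the current model is on this example
        dw += err * x          # gradient of the squared error with respect to w
        db += err              # ... and with respect to b
    n = len(data)
    w -= lr * dw / n           # nudge the parameters against the gradient
    b -= lr * db / n

print(round(w, 2), round(b, 2))  # converges to roughly 2.0 and 1.0
```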

A representative commercial AI platform is IBM Watson, a cloud-based AI product that provides Application Program Interfaces (APIs) that “can understand all forms of data to reveal business-critical insights” harnessing “the power of cognitive computing.” The APIs are organized into products for building cognitive search and content analytics engines; configuring virtual agents with company information, using pre-built content, and engaging customers in a conversational, personalized manner; and building applications that discover meaningful insights in unstructured text without writing any code.

We see examples of AI in our daily lives, from Apple’s Siri engaging in interactive dialogue with iPhone users to Amazon’s Alexa placing an order for toilet paper at our beckoning. Customers paying bills by phone or making a service inquiry online come face to face with computer algorithms designed to address their very needs. Voice recognition and text-analysis software has allowed service providers to pinpoint their customers’ exact needs, and AI applications are now being programmed with emotional intelligence that lets them tailor responses to a customer’s behavior.

AI is consistently being programmed to become smarter, sometimes becoming more efficient than human beings at their own jobs. In medicine, AI has been used to identify skin cancer in patients, using a Google-developed algorithm trained to classify 130,000 high-resolution images of skin lesions representing over 2,000 different diseases. The algorithm was able to match the performance of twenty-one dermatologists in correctly identifying benign lesions. An AI system designed to imitate the human brain’s capacity for vision was trained to diagnose congenital cataracts using 410 images of children with the disease and 476 images of children without it. The system and three ophthalmologists then examined 50 cases involving a variety of medical situations designed by a panel of experts to be challenging; the AI system correctly diagnosed all of the cases, while each of the three ophthalmologists missed one.

With machines seeming to have surpassed human performance in fields such as medicine, the future for AI technology promises to impact the ways in which human beings work. Still, the debate remains, can AI surpass human capability, or is it best used as a tool to aid humanity in work? With these questions in mind, the implications for the rise of AI on patents and Intellectual Property are also subject to debate.

With AI’s advanced functionality in fields such as medicine, the technology may well be on its way to creating its own technology and applications. At the end of 2016, Google’s Neural Machine Translation system was reported to have developed its own internal language to represent the concepts it uses to translate other languages. While this may only be the beginning of AI’s capacity to create, the evidence suggests that the technology may one day function with its own independent mind. AI having independent thought and the capacity to create has major implications for patents and Intellectual Property. With the World Intellectual Property Organization (WIPO) defining Intellectual Property as “creations of the mind, such as inventions”, the definition of “mind” in this context is left open to debate: a human mind, or a machine mind? Still, AI can, at most, create potentially patentable inventions; with this in mind, a human creator of AI technology that in turn creates its own patentable inventions would logically own those patent rights.




Mesh Networking for Wi-Fi Applications



Wi-Fi home networking has become universal in recent years. With the growth of cord-cutting and streaming media services, users have demanded higher performance from their home networks. The Internet of Things (IoT), which promises ever more connected devices, will only add to the need for better networks. Wi-Fi routers and networking devices have evolved greatly over the years but have now reached commodity status. Users, however, still seem to have the same complaints: spotty coverage in the home, poor throughput, and intermittent performance. New technology and products now promise to address many of these performance issues. This technology is based on “mesh networking.”

Standard Wi-Fi networks usually consist of an access point or base station, placed in a central location in the home, that communicates with each of the client devices. Mesh networks, which offer peer-to-peer support, instead place a device, or node, at each of several locations throughout the home. These nodes communicate with one another, forming a “mesh” that provides a stronger signal over a wider range. Client devices then connect to the closest node, rather than to a central base station, for a more reliable network connection.
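The two ideas above, a client joining the nearest node and the nodes relaying traffic toward the node with the internet uplink, can be sketched as follows. The node names, signal strengths, and link topology are hypothetical, and real mesh routing metrics weigh link quality rather than just hop count.

```python
from collections import deque

# Signal strength (dBm, higher is better) a client measures from each node.
signal = {"living_room": -40, "upstairs": -70, "garage": -85}
nearest = max(signal, key=signal.get)
print(nearest)  # living_room

# Node-to-node links in the mesh, with one node acting as the internet gateway.
links = {
    "garage": ["upstairs"],
    "upstairs": ["garage", "living_room"],
    "living_room": ["upstairs", "gateway"],
    "gateway": ["living_room"],
}

def hop_path(start, goal):
    # Breadth-first search: fewest hops wins, a simple stand-in for a routing metric.
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(hop_path("garage", "gateway"))  # ['garage', 'upstairs', 'living_room', 'gateway']
```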

Mesh Network Topology

Mesh networks have been around for a long time, primarily in military and commercial applications. In 2005, the MIT Media Lab proposed the One Laptop per Child (OLPC) project to offer low-cost computers to the world’s poorest children; those devices were to use mesh networking, based on the 802.11s standard, to offer Internet connectivity in locations with severely limited network access. Recently, mesh networking products have entered the home networking market. A number of new companies, such as Eero, have begun offering products, alongside more established players such as Netgear with its Orbi. Google is making a play in this space as well with its Google WiFi product. These products all claim to offer improved coverage and performance, replacing traditional boxy home routers with more contemporary, small white appliances.


Mesh Networking Home Networking Products

In addition to improved performance, these products claim to offer easy installation and maintenance through smartphone apps or web portals, addressing the complex administration issues that plagued earlier products. The downside to such new products is the cost: kits including multiple nodes can range from $300 to $500 for a typical installation.

Mesh networking has been around for some time. New products, using standard as well as proprietary technology, are becoming more popular in the home market and will offer improved performance as home users’ networking needs become more demanding. Like most new products, they command a price premium today but will become more affordable over time.

Google Pixel and the Daydream VR Design



The global smartphone market is estimated at over $400 billion, with the two largest market shares going, unsurprisingly, to Samsung and Apple at 22 percent and 12 percent, respectively. Outside of Apple’s 12 percent, most other devices, around 85 percent of the market, run a version of Android as the operating system. Google has slowly streamlined Android to incorporate a user’s personal information and create a more individualized experience. With such wide adoption of Android, Google has recently announced its first line of hardware designed to take advantage of all the information Google has collected to make each user’s experience even more unique. The new Google hardware is headlined by the Pixel and Pixel XL smartphones, which run Android “as it was meant to be” along with a few Pixel-only features, most notably Assistant, Google’s version of an AI agent such as Siri or Cortana. The Pixel is a typical smartphone design, very similar to Apple’s iPhone and Samsung’s Galaxy phones. Knowing how hotly contested the smartphone IP landscape is, with the Samsung/Apple case recently going to the Supreme Court, Google must have a strong strategy for defending its IP considering how late it is entering the mobile hardware market. One example could be the choice not to put a home button on the front side of the Pixel: both design patents involved in the Supreme Court case, each describing the ornamental design of an electronic device, showed a round button on the lower portion of the screen. Perhaps by placing this button on the back side of the phone, Google was designing around smartphone patents, hoping to avoid a similar, potentially long and expensive suit.


Figure 1 – Google Pixel (left) and iPhone 7 (right)

Other superficial similarities to competitors’ products include the Daydream VR headset, similar to Samsung’s Gear VR; Google Home, which has been compared to Amazon Echo; and Chromecast, similar to streaming devices from Roku and Amazon. Again, however, there may be signs of Google designing around existing products, whether for improved function or for IP purposes. For example, the Samsung Gear VR and many other VR headsets have motion sensors built into the headset and do not rely on the sensors already present in the device used as the display. For its VR solution, Google has omitted sensors on the headset itself and instead relies on the onboard sensors of Daydream-ready devices like the Pixel. This could simply be a way to reduce the cost of the headset, or it could be another example of designing around competitors’ existing products. One interesting absence from the portable VR market is Apple, although a recently granted patent, U.S. Patent No. 9,429,759, suggests that Apple has not ignored the VR market entirely.


Figure 2 – Example figure from US Patent 9,429,759

Each major player in the computing and mobile markets now has, or is developing, its own AI personality to enable voice interaction with the user. With Google’s incorporation of Assistant, an apparent evolution of Google Now, into the Pixel devices, other Android-based smartphones may be unable to use Assistant or may be limited in the features available. Samsung is set to release its own AI, called Viv, on the next Galaxy smartphone, the S8. Viv was developed by a newly acquired company of the same name, run by a member of the team that developed Siri. Microsoft has Cortana, and Amazon uses Alexa.

The Internet of Things (IoT) is another growing market in which many of these companies are trying to get a foothold, and devices that can tap into AIs like Assistant and Alexa can increase consumers’ willingness to adopt a certain platform. Google’s Home and Amazon’s Echo can react to voice commands to control home automation products like Hue lightbulbs and thermostats, as well as control multimedia, responding to commands to play music or even queue up videos from Netflix and YouTube. Samsung and Apple also both have platforms, Smart Home and HomeKit respectively, that tie into household appliances like laundry machines and refrigerators. Samsung may have an advantage in this area since it already sells many of these appliances, although adoption has been slow due to security and privacy concerns.

In fragmented, evolving markets such as home automation and IoT devices, it is interesting to watch companies with large captive markets, like the smartphone makers, develop product lines that leverage existing technology and customer bases to lower the risk of entering these markets.


Football Helmet IP Goes Head to Head




Football helmets have simultaneously become symbols of both the toughness of the competition and the immense dangers of the sport. They are also the subject of some recent patent challenges involving several familiar names among helmet manufacturers.

A few short weeks into the football season, viewers have likely already witnessed more injuries than they care to see. With the announcer reminding the audience of the certified athletic trainers monitoring the game after every head-jarring tackle or helmet-to-helmet collision, the risk of concussion is never far from fans’ minds.

In Patent Litigation This Season, the Football Helmet is Also Making an Appearance

On August 19, 2016, the Kranos Corporation—better known as helmet and equipment manufacturer Schutt Sports—filed three Inter Partes Review petitions against competitor Riddell, Inc. Undoubtedly the petitions were a response to pending litigation in the Northern District of Illinois between the two parties (Case 1-16-cv-04496) filed by Riddell last April. In that case, Riddell alleges infringement by Schutt of U.S. Patent Nos. 8,938,818 and 8,528,118, each entitled “Sports Helmet,” as well as Riddell’s U.S. Patent No. 8,813,269, entitled “Sports helmet with quick-release faceguard connector and adjustable internal pad element.” One report has Riddell commenting that they are “enforcing [their] intellectual property portfolio when competitors unfairly use [Riddell’s] patented technology,” while Schutt contends that “the suit has no merit and appears to be a desperate attempt by a struggling competitor to attack the market while it faces product liability and other challenges throughout its business.”

The ’118 and ’818 patents were issued in January 2015 and September 2013, respectively, but both claim a priority date of May 2002. The two patents are also the subject of another lawsuit between Riddell and another competitor, Xenith, LLC (Case 1-16-cv-04498), which is petitioning for joinder of the cases and is likely watching the IPRs closely. The ’118 patent was involved in a 2015 infringement suit against Rawlings Sporting Goods Company, Inc. (Case 1-15-cv-00071), which appears to have been settled by the parties. The ’269 patent, whose claims focus on a “quick release” faceguard connector, shares a couple of inventors with the other patents but was filed in April 2008 and claims priority to the previous year. Riddell alleges Schutt is using the claimed technology on all helmets featuring its quarter-turn facemask release system.

Along with the filing of the IPRs, Schutt has asked the district court for a stay pending the decision by the Patent Trial and Appeal Board. Two of the IPRs, IPR2016-01646 and -01650, challenge the ’118 and ’818 patents, respectively, while IPR2016-01649 challenges the ’269 patent.

Looking at the Helmet Technology

On first impression, the patents’ claims do not necessarily conjure mental images of anything beyond a typical football helmet. For instance, one of the alleged prior art references Schutt points to as disclosing the ’118 and ’818 patents is a photograph from Sears’s “Wish Book For The 1971 Christmas Season” catalog featuring a pair of football helmets, each allegedly with a front, a rear, vent openings, face guard connectors, and a raised central band, apparently similar to what is claimed. While the 1971 reference might be used partly as hyperbole to over-simplify the features of the patents, it certainly highlights the relative mundaneness of the helmets at issue. That stands in contrast to football helmet technology itself, which is in the midst of a boom.

Because of the persistent talk of concussions and greater awareness of brain injuries like chronic traumatic encephalopathy (CTE), most people in equipment technology are focused on ways to reduce the rattling and impact of the brain against the inside of the skull. While laymen might demand harder, stronger helmets, engineers are developing ways to cushion and absorb the impact. Interior padding and points of contact are the areas with the most advancement.

At the essence of helmet collisions is Newton’s second law: force equals mass times acceleration. The force on the helmet and head can be reduced by extending the time over which the helmet decelerates. Vehicles, for instance, use “crumple zones” that absorb an impact by slowing the collision. Crumple zones present practical problems for football helmets, but the concept of impact absorption is certainly applied.
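A back-of-the-envelope calculation illustrates the point. The mass and speed figures below are rough assumptions for illustration, not measurements from any helmet test; what matters is that the same change in velocity spread over a longer stop time yields a proportionally smaller average force.

```python
# F = m * a, with average deceleration a = delta_v / delta_t: doubling the
# stop time halves the average force, which is the goal of padding and
# hinged panels.
head_mass_kg = 5.0       # rough mass of a human head (assumption)
impact_speed_mps = 5.0   # closing speed of the collision (assumption)

for stop_time_s in (0.005, 0.010, 0.020):
    force_n = head_mass_kg * impact_speed_mps / stop_time_s
    print(f"stop in {stop_time_s * 1000:.0f} ms -> average force {force_n:.0f} N")
# stop in 5 ms  -> average force 5000 N
# stop in 10 ms -> average force 2500 N
# stop in 20 ms -> average force 1250 N
```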

For instance, one popular helmet, Riddell’s SpeedFlex, utilizes a polycarbonate shell and features a “built-in hinged rubber-padded panel located on the front near the top” that can give up to 6 millimeters to help absorb an impact. The U-shaped front panel, which acts as a cantilever, is immediately recognizable when game broadcasts zoom in on the helmets of running backs, safeties, and other players who find themselves leading with their heads too often. It is not quite a crumple zone, but the concept is that the hinged front panel will move upon collision and extend the time of deceleration during impact, thereby reducing the force felt and, hopefully, any resulting brain movement. The SpeedFlex was awarded 5 stars by the Virginia Tech Helmet Ratings.

Another helmet highly rated by that research is the Schutt Air XP Pro VTD II model, which boasts “TPU Cushioning in a classic, traditional helmet shell.” With TPU, thermoplastic urethane is used instead of (or in addition to) traditional foam padding inside the helmet. TPU is considered to handle temperature swings, hot and cold, much better than typical foam padding and is reportedly adapted from military helmet technology.

Both the EPIC and X2E helmet models from Xenith are accused in the infringement suit and are highly rated. The X2E boasts TPU comfort pads featuring vinyl nitrile (VN) foam, which many football (and hockey) players find more comfortable. The helmets’ padding utilizes “shock absorbers” that release air upon linear impact, along with a “shock bonnet suspension” system that moves independently of the shell to help reduce the effects of rotational forces.

One product in development gaining attention is a neck collar developed by Q30 Innovations which hopes to “facilitate the body’s own physiology to create a bubble-wrap effect for the brain.” The Q-Collar, worn in addition to a helmet, compresses the jugular vein to “mildly increase blood volume in the cranium, creating a ‘tighter fit’ of the brain in the cranium” and reduce rattling or “slosh.”

The Role of Pro Football

Safety innovation is important, but the recent patent litigation initiated by Riddell may stem from growing competition in the professional market. Riddell was the official helmet of the NFL from 1989 until 2014, when the league grew worried about the implications of selling exclusive branding rights at a time of intense focus on brain injuries. During that period Riddell’s was the only logo permitted on helmets, and it was estimated to hold 90% of the market share; beginning in 2014, however, Schutt believed it outfitted 36% of NFL players and 50% of skill-position players, who evidently prefer some of the smaller, lighter helmets it offers.

Of course, Riddell and Schutt Sports have previously been involved in litigation over “concussion reduction technology,” which resulted in a jury awarding Riddell $29 million in August 2010, Schutt filing for Chapter 11 bankruptcy, and an eventual settlement between the two of a mere $1 million.

The high-stakes world of brain injuries in football will continue to make news, with former NFL players petitioning the U.S. Supreme Court to reject the $1 billion settlement of the concussion class action lawsuits on the grounds that future CTE diagnoses are apparently not compensated and that subgroup of the class is not treated fairly by the settlement. Several helmet manufacturers are involved in litigation regarding brain injuries at various levels of the game.

As football is not going anywhere, the technology clearly and urgently needs to improve. It is important that these rivals keep competing, as a de facto monopoly could discourage innovation in helmet and equipment safety. New competitors, more studies, and increased funding from the NFL should lead to safer helmets for the next generation of football players. For now, the only thing all manufacturers appear to agree on is the necessity of large warning labels and disclaimer statements that the only sure-fire way to avoid brain injury is to avoid playing the sport.


Future Worlds of Virtual Reality



Is the virtual world about to become reality? Analysts are predicting that 2016 is the year that virtual and augmented reality finally take hold and develop into viable industries. In virtual reality (VR), the real world is blocked out and the user is immersed in a simulated world of computer-generated objects. In augmented reality (AR), the user still sees the real world, but with overlays of virtual objects added. Finally, there is mixed reality (MR), which combines elements of VR and AR, allowing the user to interact with virtual solid objects in the real world.

Virtual reality is the most advanced of the three technologies, with headsets such as the Oculus Rift, Samsung Gear VR, HTC Vive, and the imminent PlayStation VR already available. Beyond gaming, VR headsets are finding applications in entertainment, healthcare, tourism, education, manufacturing, and training. Imagine watching a movie in your own personal theater, being in the front row of a concert, or cheering from the 50-yard line of a game, all while in the comfort of your living room. Virtual 3D models of the body are being used to train for and practice difficult surgical procedures. Therapies for stroke and brain injuries, phobias, and PTSD are utilizing VR to exercise the brain and allow interaction with stressors in controlled environments. The Oculus Rift is being used by Ford to design cars, by Toyota for distracted-driver training, by NASA for astronaut training, and in the courtroom for crime scene reconstruction. Planning a vacation? VR will let you explore potential destinations and attractions before booking the trip.

Augmented reality is almost here and is expected to gain momentum by building off the VR base. It would be remiss not to mention Pokémon Go as the current popular example of augmented reality; the mobile game uses Google Maps, along with a phone’s camera and gyroscope, to animate virtual monsters seemingly in the real world. The game’s popularity is sure to spawn imitators and inspire improvements in AR implementations. Similarly, mixed reality will build off the developing VR and AR markets. Both AR and MR have potential applications supporting real-world tasks such as maintenance, military and police training, and product and design evaluation.

For these technologies to take hold and grow, several driving factors still present problems to be solved. Mobility affects AR and MR but not VR: untethered platforms that allow the user to move about the real world while interacting with the virtual world put pressure on battery life and mobile data communications. Vision issues that impact all three technologies include field of view, depth of field, resolution, vision correction, and luminosity or viewability. In VR these factors can cause eye strain, whereas in AR/MR they impact the quality or usability of the virtual images. Usability of the product is a big factor for all three, including processing power (and, for AR/MR, battery life), comfort (device weight for AR/MR, motion sickness for VR), and input controls (e.g., controllers, motion tracking, eye tracking, voice commands). And cost is always a factor: can trade-offs be made among these driving factors such that the end products are affordable for the targeted markets?

The reality worlds are coming to your home or workplace; it is just a matter of time. There are still plenty of technical areas with problems to be solved to help these markets grow. This means room for technical innovation and IP development. The team at TechPats has vast experience working with various aspects including optics, graphics and displays, processors and SoCs, software and games, sensors, batteries, and data communications.


Supreme Court’s Cuozzo Case Reveals a Need for More Efficient IPR Preparation




The U.S. Supreme Court’s decision in Cuozzo Speed Technologies, LLC v. Lee may not have been revolutionary but it certainly signified that the high-stakes Inter Partes Review (IPR) proceedings are not vanishing any time soon. Unanimously, the Court upheld the U.S. Patent and Trademark Office’s rule to apply the “broadest reasonable interpretation” of challenged claims, and also maintained that the Patent Trial and Appeal Board’s (PTAB) decision to institute an IPR is not judicially reviewable under the statute.

While many see this decision as merely maintaining the status quo with IPRs, other patent professionals have spoken of the opinion as administratively strengthening the USPTO and the PTAB. An official statement from Michelle Lee, director of the USPTO, states: “The USPTO appreciates the Supreme Court’s decision which will allow the [PTAB] to maintain its vital mission of effectively and efficiently resolving patentability disputes while providing faster, less expensive alternatives to district court litigation.” IPRs have become a recognized weapon to fight overly aggressive “patent trolls” and nuisance lawsuits.

While filing an IPR may be less pricey than litigation, it’s not cheap. RPX Corp. data indicates an average IPR campaign costs about $278,000 prior to institution, despite most petitioners hoping to budget much less. Cuozzo reminds the tech industry that the PTAB is still a courtroom where, apparently, the Administrative Patent Judges have substantial deference over the decision to institute an IPR. With prior art analysis, petition drafting, expert declarations, and filing fees, preparing for an IPR is far from a low-cost endeavor. Failure to persuade the PTAB to institute would likely prove more costly.

Cuozzo did not make it easier to have the Board institute an IPR—in fact, it may have a chilling effect on institutions if the USPTO is worried that statistics may again invoke the patent “death squad” moniker of 2013-14. With a host of procedural and estoppel issues used to reject imperfect petitions, the petitioning party needs to plan their sole bite at the apple carefully and properly. As researching and preparing to file an IPR must be both thorough and quick, it is clear that in-house counsel and outside attorneys could use a hand in streamlining the process and reducing costs.

The patent experts at TechPats recommend a focus on 4 key phases of preparation for an IPR petition to help ensure the PTAB recognizes and adopts your arguments: (1) Prior Art Investigation, (2) Invalidity Claim Charts, (3) Analysis by a POSITA, and (4) Expert Declaration and Support.

Prior Art Investigation

More than just a Prior Art Search, an Invalidation Investigation Report needs to quickly depict whether challenging the validity as anticipated or obvious can be fruitful, as well as illuminate claim limitations and terms that may need additional focus. An attorney should be able to look at an invalidation report and make a decision to proceed with certain references or request a deeper dig for additional prior art. The reports are especially valuable to help companies and counsel evaluate and prioritize potential challenges for a list of multiple patents with litigation pending.

Invalidity Claim Charts

A detailed Invalidity Claim Chart should be the roadmap to constructing a solid petition for IPR and should provide organization and clarity. A thorough claim chart is a crucial step in building an IPR case for a number of reasons, including procedural compliance, efficiently outlining strategy, and facilitating collaboration with colleagues and experts. Identifying issues or holes in the prior art references—prior to drafting the petition—is perhaps the most valuable aspect of invalidity claim charts. An invalidity chart should be the backbone of the petition’s grounds for invalidity and should eliminate swapping references or combinations at the 11th hour before filing.

Analysis by a Person Having Ordinary Skill in the Art (“POSITA”)

Relying on consultants with expertise in the art at the time of invention can produce analysis as good as—if not better and more efficient than—many in-house IP teams and patent attorneys. Whether it is describing the technological landscape at the time of invention, recalling a particular company or product from 18 years ago, or unearthing that decisive prior art reference, TechPats can act as an extension of your office and fill in any potential gaps in technical experience.

Expert Declaration and Support

The days when the Expert Declaration could simply echo the IPR petition are over. More than ever, the declaration now serves the critical functions of vividly depicting the state of the art and explaining how a POSITA would interpret each reference’s teachings. The expert has a single opportunity to frame exactly how a POSITA would interpret claim terms and elements. Perhaps just as importantly, when proposing allegedly obvious combinations of known elements, the expert can offer a solid rationale or motivation as to why a POSITA would think to combine the references. Conclusory statements and unsupported petitions no longer work, regardless of the expert’s many years of experience or multiple graduate degrees.

For those reasons, it is crucial to have the expert working early in the project to produce an efficient and effective Expert Declaration.

Putting the Plan in Action

Our experts and patent analysts have refined these proven steps over nearly 20 years of helping our clients with litigation and validity challenges. TechPats continues to work with top law firms and the in-house counsel of major companies on IPR preparation. We have in-house analysts and expert declarants experienced in IPRs to support your team and contribute key pieces of a petition and declaration.

TechPats is ready to help with any step of the IPR preparation process. However, as technical consultants and patent agents, TechPats cannot file a petition or provide legal advice. Nevertheless, working in conjunction with legal teams, we’ve found that we can optimize IPR preparation and reduce costs.

Self-Driving cars: Coming to a Showroom Near You…Someday



It finally happened a few months ago: one of Google’s fleet of self-driving cars (a Lexus) was at fault in a traffic accident. The car “thought” a bus would slow and allow it to merge into traffic, and when it did not, the car struck the side of the bus at low speed. Just a few weeks later, Google was issued U.S. Patent 9,280,710, “Bus Detection for Autonomous Vehicle.” While certainly ironic, the patent in fact describes identifying a large vehicle as a school bus by examining the vehicle’s size and color, whereas the Google car’s accident involved a public transportation bus. While our human minds might see little difference, the distinction starts to indicate how complex a task Google (and others) are attempting. Some 3,000 additional decisions were added to the computer code running the vehicle just to help keep that one event from happening again. It should make anyone marvel at how complex our brains are, and how we perceive and instantaneously process all our sensory data to make sure we don’t hit the bus (though, of course, sometimes we still do).

Google’s fleet of cars has been driven over 1,000,000 miles without a significant event such as this. That seems astounding, and surely the technology must be close to mature. But if one of the goals of self-driving cars is to provide a safer ride by reducing accidents and fatalities, a recent RAND study estimated that autonomous vehicles must log about 275,000,000 miles of testing before their relative safety can be judged against human-driven cars. That is the equivalent of 20,000 vehicles each driving the national average of 13,750 miles per year, although if you put those same vehicles (if we had that many) on the road around the clock averaging 30 mph, the task would take less than three weeks.
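The arithmetic behind those two scenarios is easy to check. A quick sketch, using the figures cited above (the 20,000-vehicle fleet and the round-the-clock 30 mph scenario are illustrations, not RAND's methodology):

```python
# Rough arithmetic behind the 275-million-mile estimate.
MILES_NEEDED = 275_000_000      # test miles suggested by the RAND study
FLEET_SIZE = 20_000             # hypothetical test fleet
AVG_MILES_PER_YEAR = 13_750     # U.S. national driving average per vehicle

# Scenario 1: fleet driven at the national average.
years = MILES_NEEDED / (FLEET_SIZE * AVG_MILES_PER_YEAR)
print(f"{years:.1f} year(s) at average usage")

# Scenario 2: same fleet driven nonstop at 30 mph.
miles_per_day = FLEET_SIZE * 30 * 24
days = MILES_NEEDED / miles_per_day
print(f"{days:.1f} days driving around the clock")
```

At average usage the fleet needs about a year; driven nonstop, it finishes in just over 19 days, which matches the "less than three weeks" figure.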

There are still challenges to mainstream adoption on both the technology and regulation fronts. On the technology side, the “eye” of the autonomous vehicle is a LIDAR system. These automotive LIDAR systems still cost tens of thousands of dollars each, so the technology will need to become much more commoditized before autonomous cars are affordable to the general population. LIDAR is a laser-based ranging system, similar to radar but using laser light instead of radio waves. It is sometimes mistakenly thought to stand for Laser Radar, but it is actually an acronym for Light Detection and Ranging. These LIDAR vision systems are much more advanced than the various vision systems used in current-generation vehicles for driver assistance features, such as automatic braking, lane change assist, or adaptive cruise control, where a more limited field of view can typically be tolerated. By contrast, the autonomous vehicle needs a constant 360-degree view of its surrounding environment. This is typically accomplished by a rapidly spinning mirror sweeping the laser beam around the vehicle. Objects and their distances are identified from the reflected laser light bouncing back to the detector in the LIDAR unit.
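The ranging principle described above reduces to simple time-of-flight arithmetic: the pulse travels to the object and back at the speed of light, so the distance is half the round trip. A minimal sketch (the pulse timing value is illustrative, not from any particular sensor):

```python
# Time-of-flight ranging, the core of a LIDAR distance measurement.
C = 299_792_458.0  # speed of light in m/s

def distance_m(round_trip_s: float) -> float:
    """Distance to a target given the pulse's round-trip time.
    The light travels out and back, so halve the total path."""
    return C * round_trip_s / 2.0

# A reflection arriving 200 nanoseconds after the pulse left
# corresponds to a target roughly 30 meters away.
print(f"{distance_m(200e-9):.2f} m")
```

A spinning-mirror unit repeats this measurement thousands of times per rotation to build the 360-degree point cloud around the vehicle.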

Still, the technology is ahead of the law and regulation. To date, only Nevada, California, Florida, Michigan, North Dakota, Tennessee, and Washington, D.C. have passed laws authorizing the operation of autonomous vehicles. So it still might be a while before you can walk into a dealer showroom in any state and buy one to put in your garage.


Reading In the Dark: Improving Night-time Reading Performance of Tablets and E-readers

Every day, it seems there’s a new battle in the patent war between smartphone, tablet, and e-reader manufacturers. Whether it’s Apple, Samsung, Google, or others, efforts are always underway to stay on top of the technology fight. Often this is good for consumers, as these companies battle to offer the latest innovations to their customers.

One innovation that has recently been released addresses a potential health problem that many people never even knew existed: the light emitted by tablets and e-readers and the effect it has on health. Natural light from the sun is important for maintaining one’s biological clock, or circadian rhythm. Artificial light, especially at night, can have adverse health effects, and this effect may be magnified by the widespread use of tablets and e-readers at night.

Various studies have shown a link between artificial light at night and disruptions in sleep patterns and other health problems. One potential cause is that exposure to light suppresses the secretion of melatonin, a hormone that can affect circadian rhythms. Blue light, in particular, has been shown to have adverse effects. One major study published by the National Academy of Sciences concluded that “Evening use of light-emitting e-Readers negatively affects sleep, circadian timing, and next-morning alertness.” This could be bad news for the many people who have adopted tablets or e-readers instead of books for night-time reading. Teenagers, who find these gadgets indispensable, may be especially vulnerable to these light effects.

Manufacturers of portable devices have been aware of this problem and are starting to come out with interesting solutions. Amazon has released a feature in its recent Fire OS upgrade called Blue Shade: “Blue Shade is an exclusive Fire OS ‘Bellini’ feature that works behind the scenes to automatically adjust and optimize the backlight for a more comfortable nighttime reading experience.” Blue Shade uses specialized filters to limit blue light exposure and lets users easily add warmer filters and adjust brightness for reading at night. As the Kindle Fire is closely tied to the Amazon ecosystem, this feature may be especially important, since Amazon’s e-book customer experience is critical.

Not to be outdone, Apple has been testing its own feature and officially rolled it out in iOS 9.3 at its March product event this week. Night Shift automatically shifts the light produced by an iOS display from a bright blue to a warmer tone at night, making it easier to fall asleep. Apple says iOS 9.3 will know when to switch each night based on the device’s clock and location.

The Night Shift feature sounds like a great function, but it may be the subject of future IP controversies. For example, a third party, f.lux, had already released an app for iOS with similar features. f.lux has been working on its technology since 2009, and that technology, according to its website, is “patent pending.” Apple soon banned the app (after f.lux received 200,000 hits in less than 24 hours). f.lux wasn’t too happy and publicly called for Apple to reconsider its decision. Perhaps this relationship will yield some sort of licensing deal, or it may be the source of future litigation. Both Amazon and Apple are likely pursuing their own patent activities in this area as well.

Night Shift and Blue Shade both appear to be meaningful additions to the Apple and Amazon product families, respectively. Their goals are to enhance the user experience and improve the health and sleep of night-time users. Hopefully these companies won’t be kept up at night worrying about IP matters relating to this useful feature.