
Artificial Intelligence and its Potential Implications on Patents


Artificial Intelligence (AI) is a technology that has seen a profound rise in attention within the last few years. The increase in the technology’s media coverage has been driven by consistent progress in improving its capabilities, and the expansion of those capabilities has fostered its adoption in areas such as business, medicine, and automotive. The concept of AI was once the subject of imaginative thinking in the realm of literature: stories ranging from Mary Shelley’s Frankenstein to David Mitchell’s 2004 novel Cloud Atlas address the concept of, and questions surrounding, AI. A machine is classified as AI when it performs cognitive functions associated with the human mind, such as understanding language, problem-solving, and learning.

The origins of Artificial Intelligence can be traced to the works of Leibniz, Boole, and Turing, to name a few. The field of modern AI research was born at a famous 1956 conference at Dartmouth College from the work of five scientists from Carnegie Mellon, MIT, and IBM: Newell, Simon, McCarthy, Minsky, and Samuel. They predicted that, within a generation, machines would be able to perform any work a human can. The field of AI has grown dramatically in the last 60 years, producing many commercial products and services along the way.

Some basic technologies that comprise AI include:

Boolean Search

These are algorithms that allow users to combine keywords with operators such as AND, NOT, and OR to produce more relevant search results. For example, a Boolean search could be “receiver” AND “cable box”, which would limit the results to only those documents containing both keywords.
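
To make the mechanics concrete, here is a minimal sketch (ours, not a TechPats tool) of how an AND/NOT keyword filter might be implemented:

    # Minimal sketch of Boolean keyword filtering (illustrative only).
    docs = [
        "The receiver connects to the cable box via HDMI.",
        "This receiver supports over-the-air broadcasts.",
        "A cable box with a built-in DVR.",
    ]

    def matches(doc, must_have=(), must_not_have=()):
        text = doc.lower()
        # AND: every required keyword must appear; NOT: none of the excluded ones may.
        return (all(k in text for k in must_have)
                and not any(k in text for k in must_not_have))

    # "receiver" AND "cable box" -> only the first document matches.
    print([d for d in docs if matches(d, must_have=("receiver", "cable box"))])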

Natural Language Processing (NLP)

NLP comprises AI algorithms that allow computers to process and understand human languages.

Natural Language Search (NLS)

NLS comprises algorithms that perform searches by identifying content that matches a topic described by a user in plain language.

Machine Learning

Machine learning is a method of data analysis that automates analytical model building. Using algorithms that iteratively learn from data, machine learning allows computers to find hidden insights without being explicitly programmed where to look.
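
As a toy illustration of “iteratively learning from data” (our example, not a production method), a few lines of gradient descent can learn the slope of a line from data points without that slope ever being programmed in:

    # Toy machine learning: fit y = w * x by gradient descent (illustrative only).
    data = [(1, 2.1), (2, 3.9), (3, 6.2)]  # points lying roughly on y = 2x

    w = 0.0                       # initial guess
    for _ in range(200):          # iterate: learn from the data
        grad = sum(2 * x * (w * x - y) for x, y in data)  # d/dw of squared error
        w -= 0.01 * grad          # step against the gradient
    print(round(w, 2))            # ~2.0, learned rather than hand-coded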

A representative commercial AI platform is IBM Watson, a cloud-based AI product that provides Application Program Interfaces (APIs) that “can understand all forms of data to reveal business-critical insights” by harnessing “the power of cognitive computing.” The APIs are organized into various products: for building cognitive search and content analytics engines; for configuring virtual agents with company information, using pre-built content and engaging customers in a conversational, personalized manner; and for building applications that discover meaningful insights in unstructured text without writing any code.

We see examples of AI in our daily lives, from Apple’s Siri engaging in interactive dialogue with iPhone users to Amazon’s Alexa placing an order for toilet paper at our beckoning. Customers paying bills by phone or making a service inquiry online come face to face with computer algorithms designed to address their very needs. Voice recognition and text-analyzing software have allowed service providers to pinpoint their customers’ exact needs. This ingenuity has also translated to AI applications being programmed with emotional intelligence, allowing them to tailor responses based on a customer’s behavior.

AI is consistently being programmed to become smarter, sometimes becoming more efficient than human beings at their own jobs. In medicine, AI has been used to identify skin cancer in patients, with a Google-developed algorithm classifying 130,000 high-resolution images of skin lesions representing over 2,000 different diseases. The algorithm was able to match the performance of twenty-one dermatologists in correctly identifying benign lesions. An AI system designed to imitate the human brain’s capacity for vision has been shown to diagnose congenital cataracts, trained on 410 images of children with the disease and 476 images of children without it. The system and three ophthalmologists each examined 50 cases involving a variety of medical situations designed by a panel of experts to be challenging. The AI system correctly diagnosed all of the cases, while each of the three ophthalmologists missed one.

With machines seeming to have surpassed human performance in fields such as medicine, the future of AI technology promises to change the ways in which human beings work. Still, the debate remains: can AI surpass human capability, or is it best used as a tool to aid humanity in its work? With these questions in mind, the implications of the rise of AI for patents and Intellectual Property are also subject to debate.

With AI’s advanced functionality in fields such as medicine, the technology may well be on its way to creating its own technology and applications. At the end of 2016, Google’s Neural Machine Translation system was reported to have developed its own internal language to represent the concepts it uses to translate other languages. While this may be only the beginning of AI’s capacity to create, the evidence suggests that the technology may one day function with its own independent mind. AI having independent thought and the capacity to create has major implications for patents and Intellectual Property. The World Intellectual Property Organization (WIPO) defines Intellectual Property as “creations of the mind, such as inventions,” which leaves the definition of “mind” in this context open for debate: a human mind, or a robot mind? For now, AI can at most create potentially patentable inventions; it cannot hold patent rights itself. With this in mind, the human creator of an AI technology that creates its own patentable inventions would logically own those patent rights.

Mesh Networking for Wi-Fi Applications


Wi-Fi home networking has become universal in recent years. With the growth of cord-cutting and streaming media services, users have demanded higher performance from their home networks. The Internet of Things (IoT), which promises to offer ever more connected devices, will only add to the need for better networks. Wi-Fi routers and networking devices have evolved greatly over the years, but today have reached commodity status. Users, however, still seem to have the same complaints: spotty coverage in the home, poor throughput, and intermittent performance. New technology and products are currently offered with the promise of addressing many of these performance issues. This technology is based on “mesh networking.”

Standard Wi-Fi networks usually consist of an access point or base station, placed in a central location in the home, that communicates with each of the client devices. Mesh networks, which offer peer-to-peer support, instead place a device, or node, at each of several locations throughout the home. These devices communicate with one another, forming a “mesh” that allows for a stronger signal over a wider range. Client devices then connect to the closest node, rather than to a central base station, for a more reliable network connection.

Mesh Network Topology

Mesh networks have been around for a long time, primarily in military or commercial applications. In 2005, the MIT Media Lab proposed the One Laptop per Child (OLPC) project to offer low-cost computers to the world’s poorest children. These devices were to use mesh networking based on the 802.11s standard to offer Internet connectivity in locations with severely limited network access. Recently, mesh networking products have entered the home networking market. A number of new companies, such as Eero, have begun offering products, alongside more established players such as Netgear with its Orbi. Google is also making a play in this space with its Google WiFi product. These products all claim to offer improved coverage and performance, replacing traditional boxy home routers with more contemporary, small white appliances.

Mesh Networking Home Networking Products

In addition to improved performance, these products claim to offer easy installation and maintenance through smartphone apps or web portals, addressing the complex administration issues that may have plagued earlier products. The downside to such new products is the cost: kits including multiple nodes can range from $300-$500 for a typical installation.

Mesh networking has been around for some time, but new products, utilizing standard as well as proprietary technology, are starting to become more popular in the home market. These products will offer improved performance as home users’ networking needs become more demanding. Like all new products, they command a price premium today, but will become more affordable over time.






Google Pixel and the Daydream VR Design


The global smartphone market is estimated to be worth over $400 billion, with the two largest market shares going, obviously, to Samsung and Apple at 22 percent and 12 percent, respectively. Apart from Apple’s handsets, most other devices (around 85 percent of the market) run a version of Android as the operating system. Google has slowly streamlined Android to incorporate a user’s personal information to create a more individualized experience. With such wide adoption of Android, Google has recently announced its first line of hardware that will take advantage of all the information Google has collected to make each user’s experience even more unique. The new Google hardware is headlined by the Pixel and Pixel XL smartphones running Android “as it was meant to be,” with a few Pixel-only features, most notably the Assistant, Google’s answer to AI agents such as Siri or Cortana. The Pixel is a typical smartphone design, very similar to Apple’s iPhone and Samsung’s Galaxy phones. Knowing how highly contested the IP landscape over smartphones is, with the Samsung/Apple case recently going to the Supreme Court, Google must have a strong strategy for defending its IP, considering it is so late to enter the mobile market. One example of this could be the choice not to have a home button on the front side of the Pixel. Both of the design patents involved in the Supreme Court case that described the ornamental design of an electronic device showed a round button on the lower portion of the screen. Perhaps by placing this button on the back side of the phone, Google was designing around smartphone patents and hoping to avoid a similar, potentially very long and expensive suit.

Figure 1 – Google Pixel (left) and iPhone 7 (right)

Other superficial similarities to competitors’ products include the Daydream VR headset, similar to Samsung’s Gear VR; Google Home, which has been compared to Amazon Echo; and Chromecast, similar to streaming devices from Roku and Amazon. Again, however, there may be signs of Google designing around existing products, potentially for improved function or for IP purposes. For example, the Samsung Gear VR and many other VR headsets have motion sensors built into the headset and do not rely on the sensors already present in the device used as the display. For its VR solution, Google has omitted the sensors on the headset itself and instead relies on the onboard sensors of Daydream-ready devices like the Pixel. This could be just a way to reduce the cost of the headset, or it could be another example of designing around the competition’s existing products. One interesting absence from the portable VR market is Apple, although a patent recently granted to Apple, US Patent No. 9,429,759, suggests that the VR market has not been totally ignored there.

Figure 2 – Example figure from US Patent 9,429,759

Each major player in the computing/mobile market also now appears to have, or to be developing, its own AI personality to enable voice interactions with the user. With Google’s incorporation of Assistant, an apparent evolution of Google Now, into the Pixel devices, other Android-based smartphones may be unable to use Assistant or could be limited in the features available. Samsung is set to release its own AI, called Viv, on the next Galaxy smartphone, the S8. Viv is developed by a newly acquired company of the same name, run by a member of the team that developed Siri. Microsoft has Cortana, and Amazon uses Alexa.

The Internet of Things (IoT) is another growing market in which many of these companies are trying to get a foothold, and devices that can tap into AIs like Assistant and Alexa can increase consumers’ willingness to adopt a certain platform. Google’s Home and Amazon’s Echo can react to voice commands to control home automation products like Hue light bulbs and thermostats, as well as control multimedia by responding to commands to play music or even queue up videos from Netflix and YouTube. Samsung and Apple also both have platforms, Smart Home and HomeKit respectively, that tie into household appliances like laundry machines and refrigerators. Samsung may have an advantage in this area since it already sells many of these appliances, although adoption has been slow due to security and privacy concerns.

In fragmented, evolving markets such as home automation and IoT devices, it is interesting to watch companies with large captive markets, like smartphone makers, develop product lines that leverage existing technology and customer bases to lower the risk of entering these markets.

 






TechPats Attends Patent Law and Policy Conference in DC


This past Tuesday, members of the TechPats team, including Kevin Rieffel, Counsel, attended IAM’s Patent Law and Policy conference at the Ronald Reagan Center in Washington, DC. Held a week after the U.S. election under the theme “Courts, Congress, and the Monetization Landscape,” the conference focused on problems in patent and IP laws and policies, and on pathways to improvements in consistency.

The handful of panels that followed USPTO Director Lee’s opening keynote remarks included prominent professors, top corporate counsel, leading litigators, well-known writers, and a few former patent office directors. Three key themes ran throughout the day: strengthening the patent system, bolstering IP value, and promoting continued innovation.

Patent Reform

Until recently a hot topic, competing bills in the U.S. Congress regarding patent reform have taken a backseat. One reason for this could be the recent FTC report on “Patent Assertion Entity Activity,” which addressed the issue of “patent trolls.” Most speakers acknowledged the FTC study as positively reframing the conversation around litigation entities and portfolio entities, and declared the use of the word “troll” unproductive. While the problems of frivolous litigation and extortionate demand letters have not fully vanished, the panels pointed to recent changes in pleading standards, the awarding of attorney fees, and the ease of challenging patents with IPRs and under Alice as policies that have raised the hurdles to patent assertion—regardless of a patent owner’s size or intentions.

On some of the panelists’ wishlists for next year were addressing subject matter eligibility under § 101 with better statutes and/or CAFC precedent, considering whether juries should determine § 102 and § 103 validity issues, legislation to strengthen valid patents and discourage “efficient infringement,” and a deep examination of venue shopping, especially with regard to the federal courts in Texas.

Inter Partes Review and the PTAB

Director Lee’s remarks referenced the Patent Trial and Appeal Board as a “valuable check on patent quality, particularly in the later part of a patent’s lifecycle,” and set a tone for a discussion of the merits and difficulties of PTAB proceedings. The “patent death squad” narrative has survived despite improving statistics, but the conversations revolved around differing standards of proof, claim term interpretation, confirmation bias in institution and final decisions, the director’s rule-making authority, and variations in the levels of deference given to alleged fact-finders at the examination or PTAB trial level.

The Trump Administration

On the heels of the election, many attendees were contemplating how the new administration will handle Intellectual Property and patents. While the 2017 White House policy on patents seemed fairly unpredictable at this point, no one expected drastic changes. Ideas discussed included how a populist-elected president might support small inventors, how a “Rust Belt” sentiment may affect patent policy, and the potential for a new focus on pharmaceuticals over Silicon Valley. Nevertheless, without any concrete policies or speeches on patents coming from the president-elect’s team, any speculation that Mr. Trump would be pro-IP came from his companies’ history of brand protection and campaign promises rooted in economic protectionism.


Football Helmet IP Goes Head to Head

10/4/2016

 

Football helmets have become symbols of both the toughness of the competition and the immense dangers of the sport. They are also the subject of some recent patent challenges involving several familiar names among helmet manufacturers.

Only a few weeks into the football season, viewers have likely already witnessed more injuries than they care to see. With the announcer reminding the audience of the certified athletic trainers monitoring the game after every head-jarring tackle or helmet-to-helmet collision, the inevitable risk of concussion is always near the forefront of fans’ minds.

In Patent Litigation This Season, the Football Helmet is Also Making an Appearance

On August 19, 2016, the Kranos Corporation—better known as helmet and equipment manufacturer Schutt Sports—filed three Inter Partes Review petitions against competitor Riddell, Inc. Undoubtedly the petitions were a response to pending litigation in the Northern District of Illinois between the two parties (Case 1-16-cv-04496) filed by Riddell last April. In that case, Riddell alleges infringement by Schutt of U.S. Patent Nos. 8,938,818 and 8,528,118, each entitled “Sports Helmet,” as well as Riddell’s U.S. Patent No. 8,813,269, entitled “Sports helmet with quick-release faceguard connector and adjustable internal pad element.” One report has Riddell commenting that they are “enforcing [their] intellectual property portfolio when competitors unfairly use [Riddell’s] patented technology,” while Schutt contends that “the suit has no merit and appears to be a desperate attempt by a struggling competitor to attack the market while it faces product liability and other challenges throughout its business.”

The ’818 and ’118 patents were issued in January 2015 and September 2013, respectively, but both claim a priority date of May 2002. The two patents are also the subject of another lawsuit between Riddell and a further competitor, Xenith, LLC (Case 1-16-cv-04498), who is petitioning for joinder of the cases and is likely watching the IPRs closely. The ’118 patent was involved in a 2015 infringement suit against Rawlings Sporting Goods Company, Inc. (Case 1-15-cv-00071), which appears to have been settled by the parties. The ’269 patent, whose claims focus on a “quick release” faceguard connector, shares a couple of inventors with the other patents but was filed in April 2008 and claims priority to the year before. Riddell alleges Schutt is using the claimed technology on all helmets featuring its quarter-turn facemask release system.

Along with filing the IPRs, Schutt asked the district court for a stay pending the outcome of the Patent Trial and Appeal Board’s decisions. Two of the IPRs, IPR2016-01646 and -01650, challenge the ’118 and ’818 patents, respectively, while IPR2016-01649 challenges the ’269 patent.

Looking at the Helmet Technology

On first impression, the patents’ claims do not necessarily conjure mental images of anything beyond a typical football helmet. For instance, one of the alleged prior art references Schutt points to as disclosing the ’118 and ’818 patents is a photograph from Sears’s “Wish Book For The 1971 Christmas Season” catalog featuring a pair of football helmets, each allegedly with a front, a rear, vent openings, face guard connectors, and a raised central band, apparently similar to what is claimed. While the 1971 reference might be used partly as hyperbole to over-simplify the features of the patents, it certainly highlights the relative mundaneness of the helmets at issue. This contrasts with today’s football helmet technology itself, which is in the midst of a boom.

Because of the persistent talk of concussions and growing awareness of brain injuries like chronic traumatic encephalopathy (CTE), most people in equipment technology are focused on ways to reduce the rattling and/or impact of the brain against the inside of the skull. While laymen might demand harder, stronger helmets, engineers are developing ways to cushion and absorb the impact. Interior padding and points of contact are the areas with the most advancement.

At the essence of helmet collisions is Newton’s second law: force equals mass times acceleration. The force on the helmet and head can be reduced by extending the time over which the helmet decelerates. For instance, vehicles utilize “crumple zones” that absorb an impact by slowing the collision. Football helmets adopting crumple zones present practical problems, but the concept of impact absorption is certainly applied.
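
The underlying relationship is the impulse-momentum theorem: for the same change in momentum, stretching out the impact time lowers the average force. A quick back-of-the-envelope check, using our own illustrative numbers rather than any manufacturer’s data:

    # Impulse-momentum: average force F = (mass * delta_v) / impact time.
    mass = 1.4       # kg, assumed effective mass of a helmeted head
    delta_v = 5.0    # m/s, assumed change in speed during the hit
    delta_p = mass * delta_v

    for dt_ms in (5, 10):                 # stiff shell vs. padded/hinged shell
        force = delta_p / (dt_ms / 1000.0)
        print(f"impact spread over {dt_ms} ms -> ~{force:.0f} N")
    # Doubling the impact time halves the average force on the head.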

For instance, one popular helmet, Riddell’s SpeedFlex, utilizes a polycarbonate shell and features a “built-in hinged rubber-padded panel located on the front near the top” that can give up to 6 millimeters to help absorb an impact. The U-shaped front panel that acts as a cantilever is immediately recognizable when game broadcasts zoom in on the helmets of running backs, safeties, and other players who find themselves leading with their heads too often. It is not quite a crumple zone, but the concept is that the hinged front panel will move upon collision and extend the time of deceleration during impact, thereby reducing the force felt and, hopefully, any resulting brain movement. The SpeedFlex was awarded 5 stars by the Virginia Tech Helmet Ratings.

Another helmet highly rated by that research is the Schutt Air XP Pro VTD II model, which boasts “TPU Cushioning in a classic, traditional helmet shell.” With TPU, thermoplastic urethane is introduced instead of (or in addition to) traditional foam padding inside the helmet. TPU is considered to handle temperature swings, hot and cold, much better than typical foam padding and is reportedly adapted from military helmet technology.

Both the EPIC and X2E helmet models from Xenith are accused in the infringement suit and are highly rated. The X2E boasts TPU comfort pads that feature vinyl nitrile (VN) foam, which many football (and hockey) players believe to be more comfortable. The padding utilizes “shock absorbers” that release air upon linear impact, and the helmet also features a “shock bonnet suspension” system that moves independently of the shell to hopefully reduce the effects of rotational forces.

One product in development gaining attention is a neck collar developed by Q30 Innovations which hopes to “facilitate the body’s own physiology to create a bubble-wrap effect for the brain.” The Q-Collar, worn in addition to a helmet, compresses the jugular vein to “mildly increase blood volume in the cranium, creating a ‘tighter fit’ of the brain in the cranium” and reduce rattling or “slosh.”

The Role of Pro Football

Safety innovation is important, but the recent patent litigation initiated by Riddell may stem from growing competition in the professional market. Riddell had an agreement to be the official helmet of the NFL from 1989 until 2014, when the NFL grew worried about the implications of selling exclusive branding rights during a time focused on brain injuries. During that period Riddell’s was the only logo permitted on helmets, and it was estimated to hold 90% of the market; but by 2014 Schutt believed it outfitted 36% of NFL players and 50% of skill-position players, who evidently prefer some of the smaller, lighter helmets it offers.

Of course, Riddell and Schutt Sports have previously been involved in litigation over “concussion reduction technology” which resulted in a jury awarding Riddell $29 million in August 2010, Schutt filing for Chapter 11 bankruptcy, and a settlement between Schutt and Riddell of a mere $1 million.

The high-stakes world of brain injuries in football will continue to make news, with former NFL players petitioning the U.S. Supreme Court to reject the $1 billion settlement of the concussion class action lawsuits because future CTE diagnoses are apparently not compensated and that subgroup of the class is not treated fairly by the settlement. Several helmet manufacturers are involved in litigation regarding brain injuries at various levels of the game.

As football is not going anywhere, the technology clearly and urgently needs to improve. It is important that these rivals keep competing, as a de facto monopoly could discourage innovation in helmet and equipment safety. New competitors, more studies, and increased funding from the NFL should lead to safer helmets for the next generation of football players. However, the only thing all manufacturers appear to agree on is the necessity of large warning labels and disclaimers stating that the only sure-fire way to avoid brain injury is to avoid playing the sport.

 






Future Worlds of Virtual Reality


Is the virtual world about to become reality? Analysts are predicting that 2016 is the year that virtual and augmented reality finally take hold and develop into viable industries. In virtual reality (VR), the real world is blocked out and the user is immersed in a simulated world with computer-generated objects. In augmented reality (AR), the user still sees the real world but with added overlays of virtual objects. Finally, there is mixed reality (MR), which combines elements of VR and AR, allowing the user to interact with virtual solid objects in the real world.

Virtual reality is the most advanced of the three technologies, with the availability of headsets such as the Oculus Rift, Samsung Gear VR, HTC Vive, and the imminent PlayStation VR. Beyond gaming, VR headsets are finding applications in entertainment, healthcare, tourism, education, manufacturing, and training. Imagine watching a movie in your own personal theater, being in the front row of a concert, or cheering from the 50-yard line of a game, all from the comfort of your living room. Virtual 3D models of the body are being used to train for and practice difficult surgical procedures. Therapies for stroke and brain injuries, phobias, and PTSD are utilizing VR to exercise the brain and allow interaction with stressors in controlled environments. The Oculus Rift is being used by Ford to design cars, by Toyota for distracted-driver training, by NASA for astronaut training, and in the courtroom for crime scene reconstruction. Planning a vacation? VR will let you explore potential destinations and attractions before booking the trip.

Augmented reality is almost here and is expected to gain momentum by building off of the VR base. It would be remiss not to mention the Pokémon Go mobile game as the current, popular example of augmented reality; it uses Google Maps, along with a phone’s camera and gyroscope, to animate virtual monsters seemingly in the real world. The game’s popularity is sure to spawn imitators and inspire improvements in AR implementation. Similarly, mixed reality will build off of the developing VR and AR markets. Both AR and MR have potential applications supporting real-world tasks such as maintenance, military and police training, and product and design evaluation.

For these technologies to take hold and grow, several driving factors still present problems to be solved. Mobility affects AR and MR but not VR: untethered platforms that allow the user to move about the real world while interacting with the virtual one put pressure on battery life and mobile data communications. Vision issues that impact all three technologies include field of view, depth of field, resolution, vision correction, and luminosity or viewability. In VR these factors can cause eye strain, whereas in AR/MR they impact the quality or usability of the virtual images. Usability of the product is a big factor for all three; this includes processing power (i.e., battery life for AR/MR), comfort (i.e., device weight for AR/MR and motion sickness for VR), and input controls (e.g., controllers, motion tracking, eye tracking, voice command). Of course, cost is always a factor—can the trade-offs be made among these driving factors such that the end products are affordable for the targeted markets?

The reality worlds are coming to your home or workplace; it is just a matter of time. There are still plenty of technical problems to be solved to help these markets grow, which means room for technical innovation and IP development. The team at TechPats has vast experience with the relevant technologies, including optics, graphics and displays, processors and SoCs, software and games, sensors, batteries, and data communications.

 






Supreme Court’s Cuozzo Case Reveals a Need for More Efficient IPR Preparation

9/12/2016

 

The U.S. Supreme Court’s decision in Cuozzo Speed Technologies, LLC v. Lee may not have been revolutionary, but it certainly signified that the high-stakes Inter Partes Review (IPR) proceedings are not vanishing any time soon. The Court unanimously upheld the U.S. Patent and Trademark Office’s rule applying the “broadest reasonable interpretation” to challenged claims, and also held that the Patent Trial and Appeal Board’s (PTAB) decision to institute an IPR is not judicially reviewable under the statute.

While many see this decision as merely maintaining the status quo with IPRs, other patent professionals have spoken of the opinion as administratively strengthening the USPTO and the PTAB. An official statement from Michelle Lee, director of the USPTO, states: “The USPTO appreciates the Supreme Court’s decision which will allow the [PTAB] to maintain its vital mission of effectively and efficiently resolving patentability disputes while providing faster, less expensive alternatives to district court litigation.” IPRs have become a recognized weapon to fight overly aggressive “patent trolls” and nuisance lawsuits.

While filing an IPR may be less pricey than litigation, it’s not cheap. RPX Corp. data indicates an average IPR campaign costs about $278,000 prior to institution, despite most petitioners hoping to budget much less. Cuozzo reminds the tech industry that the PTAB is still a courtroom where the Administrative Patent Judges have substantial discretion over the decision to institute an IPR. With prior art analysis, petition drafting, expert declarations, and filing fees, preparing for an IPR is far from a low-cost endeavor. Failure to persuade the PTAB to institute would likely prove even more costly.

Cuozzo did not make it easier to have the Board institute an IPR—in fact, it may have a chilling effect on institutions if the USPTO is worried that statistics may again invoke the patent “death squad” moniker of 2013-14. With a host of procedural and estoppel issues used to reject imperfect petitions, the petitioning party needs to plan their sole bite at the apple carefully and properly. As researching and preparing to file an IPR must be both thorough and quick, it is clear that in-house counsel and outside attorneys could use a hand in streamlining the process and reducing costs.

The patent experts at TechPats recommend a focus on four key phases of preparation for an IPR petition to help ensure the PTAB recognizes and adopts your arguments: (1) Prior Art Investigation, (2) Invalidity Claim Charts, (3) Analysis by a POSITA, and (4) Expert Declaration and Support.

Prior Art Investigation

More than just a prior art search, an Invalidation Investigation Report needs to quickly indicate whether a validity challenge based on anticipation or obviousness is likely to be fruitful, as well as illuminate claim limitations and terms that may need additional focus. An attorney should be able to look at an invalidation report and decide to proceed with certain references or request a deeper dig for additional prior art. The reports are especially valuable in helping companies and counsel evaluate and prioritize potential challenges across a list of multiple patents with litigation pending.

Invalidity Claim Charts

A detailed Invalidity Claim Chart should be the roadmap to constructing a solid petition for IPR and should provide organization and clarity. A thorough claim chart is a crucial step in building an IPR case for a number of reasons, including procedural compliance, efficiently outlining strategy, and facilitating collaboration with colleagues and experts. Identifying issues or holes in the prior art references—prior to drafting the petition—is perhaps the most valuable aspect of invalidity claim charts. An invalidity chart should be the backbone of the petition’s grounds for invalidity and should eliminate swapping references or combinations at the 11th hour before filing.

Analysis by a Person Having Ordinary Skill in the Art (“POSITA”)

Relying on consultants with expertise in the art at the time of invention can produce analysis as good as—if not better and more efficient than—many in-house IP teams and patent attorneys. Whether it is describing the technological landscape at the time of invention, recalling a particular company or product from 18 years ago, or unearthing that decisive prior art reference, TechPats can act as an extension of your office and fill in any potential gaps in technical experience.

Expert Declaration and Support

The days when the Expert Declaration could simply echo the IPR petition are over. More than ever, the IPR declaration is used for the critical functions of vividly depicting the state of the art and explaining how a POSITA would interpret each reference’s teachings. An expert has a single opportunity to frame exactly how a POSITA would interpret claim terms and elements. Perhaps just as importantly, when proposing allegedly obvious combinations of known elements, the expert can offer a solid rationale or motivation as to why a POSITA would think to combine the references. Conclusory statements and unsupported petitions won’t work anymore, regardless of an expert’s many years of experience or multiple graduate degrees.

For those reasons, it is crucial to have the Expert working early in the project for an efficient and effective Expert Declaration.

Putting the Plan in Action

Our experts and patent analysts have honed these proven steps over nearly 20 years of helping our clients in litigation and validity challenges. TechPats continues to work with top law firms and the in-house counsel of major companies on IPR preparation. We have in-house analysts and expert declarants experienced in IPRs to support your team and contribute key pieces of a petition and declaration.

TechPats is ready to help with any step of the IPR preparation process. However, as technical consultants and patent agents, TechPats cannot file a petition or provide legal advice. Nevertheless, working in conjunction with legal teams, we’ve found that we can optimize IPR preparation and reduce costs.


Self-Driving Cars: Coming to a Showroom Near You…Someday


It finally happened a few months ago: one of Google’s fleet of self-driving cars (a Lexus) was at fault in a traffic accident. The car “thought” a bus would slow and allow it to merge into traffic, and when it did not, the car struck the side of the bus at slow speed. To add to that, just a few weeks later, Google was issued US Patent 9,280,710, “Bus Detection for Autonomous Vehicle.” While certainly ironic, the patent in fact describes identifying a large vehicle as a school bus by examining the size and color of the vehicle, whereas the Google car was involved in an accident with a public transportation bus. While our human minds might see little difference, it starts to indicate how complex a task Google (and others) are attempting. Some 3,000 additional decisions were added to the computer code running the vehicle just to help keep that one event from happening again. It should make anyone marvel at how complex our brains are, and how we can perceive and instantaneously process all our sensory data to make sure we don’t hit the bus (of course, sometimes we still do).

Google’s fleet of cars has driven over 1,000,000 miles without a significant event such as this. That seems astounding, and surely the technology must be close to mature. Yet if one of the goals of self-driving cars is to provide a safer ride by reducing accidents and fatalities, a recent study by RAND says that autonomous vehicles must log about 275,000,000 test miles before their relative safety can be judged against human-driven cars. That is the equivalent of 20,000 vehicles driving the national average of 13,750 miles per year, although if you put those same vehicles (if we had that many) on the road around the clock averaging 30 mph, the task would take less than three weeks.
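
The arithmetic behind those figures is easy to verify with a few lines (a quick sanity check using the numbers above):

    # Sanity-check the RAND mileage figures cited above.
    target_miles = 275_000_000
    avg_miles_per_year = 13_750

    print(target_miles / avg_miles_per_year)   # 20000.0 vehicle-years of typical driving

    # The same 20,000 vehicles running around the clock at 30 mph:
    miles_per_day = 20_000 * 30 * 24
    print(target_miles / miles_per_day)        # ~19 days, i.e., under three weeks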

There are still challenges to mainstream adoption in both technology and regulation. On the technology side, the “eye” of the autonomous vehicle is a LIDAR system. These automotive LIDAR systems still cost tens of thousands of dollars each, so the technology will need to become much more commoditized before autonomous cars are affordable for the general population. LIDAR is a laser-based ranging system, similar to radar but using laser light instead of radio waves; it is sometimes mistakenly thought to stand for Laser Radar, but it is actually an acronym for Light Detection and Ranging. These LIDAR vision systems are much more advanced than the various vision systems used in current-generation vehicles for driver assistance features such as automatic braking, lane change assist, or adaptive cruise control, where a more limited field of view can typically be tolerated. By contrast, the autonomous vehicle needs a constant 360-degree view of its surrounding environment. This is typically accomplished by a rapidly spinning mirror sweeping the laser beam around the vehicle. Objects and their distances are identified from the reflected laser light bouncing back to the detector in the LIDAR unit.

Still, the technology leads the law and regulation. To date, only Nevada, California, Florida, Michigan, North Dakota, Tennessee, and Washington, D.C. have passed laws authorizing the operation of autonomous vehicles. So it still might be a while before you can walk into a dealer showroom in any state and buy one to put in your garage.

 






Reading In the Dark: Improving Night-time Reading Performance of Tablets and E-readers

Every day, it seems, there’s a new battle in the patent war between smartphone, tablet, and e-reader manufacturers. Whether it’s Apple, Samsung, Google, or others, efforts are always underway to stay on top of the technology fight. This is often good for consumers, as these companies battle to offer the latest innovations to their customers.

One recently released innovation addresses a potential health problem that many people never even knew existed: the light emitted by tablets and e-readers and its effect on health. Natural light from the sun is important for maintaining one’s biological clock, or circadian rhythm. Artificial light, especially at night, can have adverse health effects, and this effect may be magnified by the widespread use of tablets and e-readers at night.

Various studies have shown a link between artificial light at night and disruptions in sleep patterns and other types of health problems. One potential cause is that exposure to light suppresses the secretion of melatonin, a hormone that can affect circadian rhythms. Blue light, in particular, has been shown to have adverse effects. In one major study published by the National Academy of Sciences, “Evening use of light-emitting e-Readers negatively affects sleep, circadian timing, and next-morning alertness.” This could be bad news for the many people who have adopted tablets or e-readers instead of books for night-time reading. Teenagers, who find these gadgets indispensable, may be especially vulnerable to these light effects.

The manufacturers of portable devices have been aware of this problem and are starting to come out with interesting solutions. Amazon has released a feature in its recent Fire OS upgrade called Blue Shade: “Blue Shade is an exclusive Fire OS ‘Bellini’ feature that works behind the scenes to automatically adjust and optimize the backlight for a more comfortable nighttime reading experience.” Blue Shade utilizes specialized filters to limit blue light exposure and allows users to easily add warmer filters and modify brightness for reading at night. With the Kindle Fire closely tied to the Amazon ecosystem, and the customer e-book experience critical to Amazon, this feature may be especially important.

Not to be outdone, Apple has been testing its own feature and officially rolled it out in iOS 9.3 at its March product event this week. Night Shift automatically shifts the light created by an iOS display from a bright blue to a warmer tone at night, making it easier to fall asleep. Apple says iOS 9.3 will know when to switch each night based on your location and the clock app.

The Night Shift feature sounds like a great function, but it may be the subject of future IP controversies. For example, a third party, f.lux, had already released an app for iOS with similar features. f.lux has been working on its technology since 2009, and that technology, according to its website, is “patent pending.” Apple soon banned the app (after f.lux received 200,000 hits in less than 24 hours), and f.lux wasn’t too happy, publicly calling for Apple to reconsider the decision. Perhaps this relationship will yield some sort of licensing deal, or it may be the source of future litigation. Both Amazon and Apple are likely pursuing their own patent activities in this area as well.

Night Shift and Blue Shade both appear to be meaningful additions to their respective Apple and Amazon product families. Their goals are to enhance the user experience and improve on the health and sleep behavior of night-time users. Hopefully these companies won’t be kept up at night worrying about IP matters relating to this useful feature.

 






TechPats Moves Company Headquarters


TechPats announced that it has moved its company headquarters to Glenside, PA. The company had spent the previous 18 years in Doylestown, PA.

The move is due to company growth and the desire to be in a more convenient location for its clients and staff.

“I am very proud to call Glenside our new home. Our team is thrilled to be in a more convenient location for our clients, just outside of the Philadelphia metro area, and we are excited to have a space that can accommodate our rapid growth,” says Chris Wichser, CEO of TechPats. “This is an exciting time for our company and for our industry and I’m certain we will continue to build on our legacy of quality, value, and expertise in our new location.”

The new address for TechPats is:

TechPats
101 South Easton Road
Suite 200
Glenside, PA 19038






The Brave New World of IoT


Internet of Things (IoT) is a term that describes a realm of devices interconnected through a variety of communication technologies (see the figure below) and, ultimately, to the Internet or to the Cloud. IoT is alive and part of the connected-life experience. For example, interconnected devices are used to improve energy efficiency in commercial buildings and homes. Health care is another area where huge improvements are expected. Urban life is also expected to benefit from IoT through the collection of environmental data.

The basic building blocks that form the technology backbone of the IoT are:

  • Sensors and actuators
  • Embedded processors
  • Connectivity to the Internet and to the Cloud

Smart objects use sensors and actuators to interact with their physical environments. Sensors are used to measure the state of the environment and actuators are employed to change or affect the environment. Essentially, sensors take a mechanical, optical, magnetic or thermal signal and convert this into voltage and current data. This data can then be processed. Actuators follow this same process, but in reverse. Voltage and current induce a mechanical, optical, magnetic or thermal change in the physical environment.

Embedded processing is what gives smart objects their intelligence. This function is usually provided by a microcontroller, which runs the software of the smart object and connects its sensors and actuators with a radio transceiver. Basically, a microcontroller is a small, low-power computer on a chip, minus the monitor, keyboard, and mouse. In many applications, the sensors may be built on the same die as the microcontroller, e.g., in Tire Pressure Monitoring Systems (TPMS).
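
In code, the resulting sense-process-actuate loop is simple. The sketch below is purely illustrative: read_temperature(), set_fan(), and send_to_cloud() are hypothetical stand-ins for whatever sensor, actuator, and radio drivers a real device would provide:

    # Illustrative smart-object control loop (driver functions are hypothetical).
    import time

    def read_temperature():       # stand-in for a real sensor driver
        return 26.0

    def set_fan(on):              # stand-in for a real actuator driver
        print("fan", "on" if on else "off")

    def send_to_cloud(sample):    # stand-in for a radio/cloud uplink
        print("uplink:", sample)

    SETPOINT_C = 25.0
    for _ in range(3):                     # would run forever on a real device
        temp = read_temperature()          # sense the environment
        set_fan(temp > SETPOINT_C)         # act on the environment
        send_to_cloud({"temp_c": temp})    # report upstream
        time.sleep(1)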

Even though large networks of interconnected devices operate today in industrial environments, direct connection of smart devices to the Internet and to the Cloud is in its infancy. Standards activities are in motion and will be driven by the dynamics of the IoT marketplace and its value chain constituents and stakeholders.

As this field continues to develop, many challenges and opportunities exist for innovation, and therefore for building intellectual property portfolios, as well as for licensing existing portfolios related to the backbone technologies of the IoT.

One of the great promises of the IoT is as a technology that will make our lives much easier. Consider LECHAL footwear, which creates a hands-free navigation system through your feet, guiding wearers toward their destination through simple vibrations in their shoes or insoles.

Eventually, IoT may lead to an anything-as-a-service (XaaS) world where connected things serve as our personal concierges, able to anticipate our preferences and trigger a chain of experiences without prompting: a world where technology helps you to set and achieve goals, take charge of your life and habits, and optimize your decisions and choices.

IoT Communication Technologies

 

“O brave new world that has such people in it.”
― Aldous Huxley, Brave New World

 






V2X: The Future of Driving Through V2V and V2I Standards


In the not-too-distant future, the cars we drive will not only be talking to us, but communicating with each other and with the roads beneath us. Vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) technologies, collectively referred to as “V2X,” are currently being developed and tested, with initial integration into new car models planned for 2017. This technology allows cars to be “connected,” providing the capability of alerting or warning the driver of conditions or hazards around them, with the potential to reduce traffic jams, prevent accidents, and save lives.

V2V is a mesh network in which each vehicle is a node with the ability to transmit, receive, and retransmit messages. The resulting network is based on three sets of standards. The first is IEEE 1609, titled “Family of Standards for Wireless Access in Vehicular Environments” (WAVE), which defines the architecture and procedures of the network. The second is the pair of SAE J2735 and SAE J2945, which define the information carried in the message packets. This data would include information from sensors on the car, such as location, direction of travel, speed, and braking. The third is IEEE 802.11p, which defines the physical layer for automotive “Dedicated Short Range Communications” (DSRC). In the U.S., V2V will operate in the 5.9 GHz band, compared to current Wi-Fi devices, which mainly use the 2.4 and 5 GHz bands.
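
For a sense of what such a message carries, here is a simplified, hypothetical rendering of the kinds of fields SAE J2735’s Basic Safety Message covers; the field names and layout below are ours, not the standard’s actual encoding:

    # Simplified, hypothetical V2V safety-message payload (not the real J2735 format).
    from dataclasses import dataclass

    @dataclass
    class BasicSafetyMessage:
        vehicle_id: int
        latitude: float       # degrees
        longitude: float      # degrees
        heading_deg: float    # degrees clockwise from north
        speed_mps: float      # meters per second
        brake_applied: bool

    msg = BasicSafetyMessage(
        vehicle_id=42, latitude=42.28, longitude=-83.74,
        heading_deg=90.0, speed_mps=13.4, brake_applied=True,
    )
    print(msg)  # each vehicle would broadcast messages like this several times per second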

Automakers such as GM, Ford, Toyota, Hyundai/Kia, Honda, Volkswagen/Audi, Mercedes-Benz, and Nissan/Infiniti took part in a 2,500-vehicle joint project with the University of Michigan and the National Highway Traffic Safety Administration (NHTSA) to test various V2X concepts. After analyzing the test data, the NHTSA estimated that over half a million accidents could be prevented and more than a thousand lives saved annually by the technology.

V2I allows the vehicle to communicate with traffic lights and other stationary infrastructure components which also would become nodes in the mesh network. This allows the vehicle to receive information relating to the timing of traffic lights and road signs, or warn the driver of a potential hazard in a blind spot of an intersection.

It’s easy to see why the MIT Technology Review called V2V one of the biggest tech breakthroughs of 2015. Automakers hope to capitalize on the excitement soon: General Motors has announced that the 2017 Cadillac CTS will be enabled with V2V technology, provided by Delphi Automotive, which has selected NXP’s RoadLINK chipset with Cohda Wireless’s IEEE 802.11p software for its V2V modules. Chip-maker Qualcomm has introduced its Snapdragon X12 and X5 LTE modems, which work with its VIVE QCA65x4 chipset to support V2V and V2I applications. On the infrastructure side, Cohda is working with Siemens, and GM is testing technology from Cisco to ensure that V2V and V2I devices can share the same radio band without causing interference.

As an initial adopter, the 2017 Cadillac will have few cars with which to communicate. To realize the benefits of V2V, significant numbers of equipped, communicating cars must be on the road. GM estimates that V2V will be effective when 25% of the cars on the road are equipped; at the current U.S. scrappage rate, this will take about 5 years. The automakers realize that to make this a reality, government regulations will be required. The NHTSA has announced that it will fast-track a proposed rule that would require V2V communication in all new cars, accelerating the proposed schedule by a year.

In addition to GM, Audi has field-tested NXP, Cohda, and Delphi’s V2V technology. Toyota is developing new safety packages, to be available worldwide by the end of 2017, that integrate vehicle sensors with V2V and V2I technology, and Ford has demonstrated V2V-enabled vehicles. Of course, after-market products will also be developed to enable late-model cars to connect and benefit from V2V technology. On the infrastructure side, in addition to wireless products from Cisco, Siemens, and Savari, there is potential for a range of interactive devices such as signs, traffic lights, and crosswalks.

With the reality of V2V just “around the corner,” there is plenty of room for innovative technology and products over the next five years. Until then, we all will continue to monitor the speed of the advancements in V2X tech.

 






What’s That Up In The Sky?


Stargaze on a clear and very dark night and you may see one: a point of light that looks like a star among thousands—until you notice it’s moving. You have likely spotted a satellite orbiting Earth. Satellites truly are the stuff of rocket science. While humans have been launching satellites into orbit since 1957, only in the past 15 years or so has satellite technology become really accessible to the general population. We have come to rely on satellites for many everyday activities, such as navigation, entertainment, and communications. Most of these activities rely on groups of satellites working together, known as constellations.

One of the most well-known satellite constellations is the Global Positioning System (GPS). GPS is only one of several operational or planned global satellite navigation systems: the GPS system is owned and operated by the US Government; Russia has operated its own GLONASS system for nearly as long; Europe is building out the Galileo system; and India and China are constructing their own satellite navigation networks. GPS is a constellation of 31 satellites in a medium Earth orbit about 12,540 miles above the surface of the Earth. It may be useful to visualize orbital size and scale in comparison to a basketball-sized Earth (about 9.5 inches in diameter): on our basketball-sized Earth, the GPS constellation would be almost 15 inches from the surface. The GPS satellites travel at a fairly speedy 8,700 mph. In order to provide a three-dimensional location of a receiver device (for example, a smartphone), the receiver must have visibility of at least four different GPS satellites. In the current GPS configuration, at least 6 and usually 8-10 satellites are visible from nearly any point on Earth at any given time, although that number varies as the satellites move.

Besides GPS, delivery of entertainment content via satellite has become hugely popular, with broadcast satellite companies like DIRECTV (now part of AT&T) and DISH Network providing multimedia content through their own satellite networks. DIRECTV, as an example, owns a constellation (or fleet) of 14 satellites operating in our skies. These satellites are in geostationary orbit, 22,236 miles above the surface of the Earth. Using our basketball-scale Earth, that would be almost 27 inches (or nearly 3 Earth diameters) from the surface. The satellites travel at a modest 6,700 mph. Recall that a satellite in geostationary orbit appears static to a stationary Earth observer (so the satellites you see moving in the night sky are definitely not DIRECTV satellites). These orbits can only be achieved with the satellites positioned directly above the equator. Due to their great distance from the Earth, the potential ground coverage area of any one of these satellites can be very large—e.g., a thousand miles or more—which means everyone in the coverage area nominally receives the same broadcast signal. In order to deliver more regional content, DIRECTV also uses what are known as “spot beams,” or directional signals, to deliver regional content to a more localized geographic area, but even this “local area” can span a range of 100 miles or so.

Another familiar type of satellite you might spot in the sky may be part of the satellite phone infrastructure. While satellite phones are not a mainstream consumer item, they provide truly global voice and data communication, even in remote areas where no one is willing or able to build a cell tower. One of the more well-known telecom satellite constellations used for voice and data communications is the Iridium system, originally developed by Motorola. Iridium is a constellation of 66 satellites in a low Earth orbit only about 485 miles above the surface of the Earth. On our scaled basketball-sized Earth, that would be a mere half-inch from the surface. The satphone satellites whiz around at nearly 17,000 mph, arranged in a series of polar orbits that take each satellite over both the north and south poles. The Iridium system is necessarily bidirectional, having both an uplink and a downlink between the handsets and the satellites, whereas the two systems mentioned above provide only a downlink to the end-user receiver (e.g., the GPS handset or the dish), with no communication back to the satellite. In addition, because the satphone satellites are so low and moving so fast, any given satellite is only visible to a handset for a few minutes, so a complicated interlinking system between adjacent satellites is required. A satellite handset will see one, or maybe two, satellites at any given time, so hand-off between satellites is critical to maintaining a connection with a ground-level receiver.
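
All of the basketball-scale numbers in this article come from one simple ratio, which is easy to verify:

    # Verify the basketball-scale analogies used above.
    EARTH_DIAMETER_MI = 7_917.5    # mean diameter of Earth in miles
    BALL_DIAMETER_IN = 9.5         # regulation basketball in inches

    scale = BALL_DIAMETER_IN / EARTH_DIAMETER_MI   # inches per mile

    for name, altitude_mi in [("GPS", 12_540), ("DIRECTV (GEO)", 22_236), ("Iridium", 485)]:
        print(f"{name}: {altitude_mi * scale:.1f} in above the ball")
    # GPS ~15.0 in, GEO ~26.7 in, Iridium ~0.6 in (matching the text above).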

These are just a few examples of satellites that we may interact with regularly. Whether it is our GPS receivers, satellite TV systems, or satellite phones, it is amazing to look at the skies and think how much consumer electronics relies on technology orbiting the Earth thousands of miles away. There are many more satellites that we depend on up there as well: weather imaging satellites, super-secret reconnaissance satellites, telescopes, space stations, etc. It really is getting crowded up there.

Have an IP project in this technology space? Contact us today to get started.

Good Things Come in Small Packages

To keep up with ever-increasing consumer expectations for computing power and mobility, the semiconductor industry strives to improve the performance and yield of new devices. By continuing to invest heavily in research and development efforts, chip-makers aim to design even smaller transistors and, thus, keep Moore’s Law alive (i.e., the observation that the number of transistors in a dense integrated circuit doubles approximately every two years). In order to harness these advances and apply them in commercial products, the industry must develop the interface between the semiconductor chip and circuit board to maintain reliable communication while still minimizing the footprint of the device.
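As a toy illustration of that doubling cadence (the starting transistor count and time span below are hypothetical, not actual industry figures):

```python
# Toy Moore's Law projection: transistor count doubling every two years.
def projected_transistors(n0, years, doubling_period_years=2.0):
    return n0 * 2 ** (years / doubling_period_years)

# Hypothetical: a 1-billion-transistor chip projected ten years ahead.
print(f"{projected_transistors(1e9, 10):,.0f}")  # 32,000,000,000
```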

Traditionally, an integrated circuit package consisted of either a leadframe package, in which the chip is attached to a central island surrounded by leads and connected to them by wirebonds, or a flip-chip package, in which the chip is attached with small solder balls to one side of a circuit board. In the flip-chip case, the circuit board has a number of wiring layers and another set of solder balls on the other side. These types of packages can take up a lot of real estate on a motherboard or can be overly thick. Thickness is obviously undesirable for mobile devices, which are trending thinner and thinner. For example, the latest versions of the Apple and Samsung smartphones are 6.9 mm and 6.8 mm thick, respectively, and thinness is a constant advertising point in the current mobile landscape.

To enable mobile devices to include more features and higher processing power, most now use a stacked package-on-package (PoP) solution with the main system memory piggybacking on top of the processor. The PoP can reduce communication latency and improve bandwidth. Smaller communication chips that provide the expected communication options, such as LTE, Bluetooth, and WiFi, often utilize a package with the absolute minimum area possible. Appropriately named chip-scale packages (CSP), these packages place the solder balls that connect the IC to a circuit board directly at the surface of the device.

Beyond the processors and communication chips that need smaller packages, the myriad sensors in smartphones and wearable devices also need to find space inside the device. As we are seeing with the new wave of wearables, adding sensors drives further development in size and efficiency. More and more devices and products need to become interconnected as the Internet of Things continues to develop. Other factors that make packaging improvements important include the need for devices to be more resistant to shock, temperature, and humidity as mobile products accompany us in all aspects of our day-to-day lives: our daily commutes, exercise, entertainment, work, and even sleep.

Advanced packaging solutions have also provided an avenue for improving processing devices by directly stacking chips on top of each other in a three-dimensional chip. These chips connect from one surface of the semiconductor chip to the other with what is known as a “through-silicon via,” or TSV. Reliability and manufacturing concerns related to heat dissipation and costs have made it difficult for companies to develop commercial products thus far. One of the first examples of commercial use is a joint venture between Intel and Micron, which has been able to produce stacks of memory chips using TSV technology for use in graphics processing products in what they call a Hybrid Memory Cube. We will likely see more smartphone cameras use chip stacking and TSVs, as some flagship mobile devices currently use a camera module with the image processing and control chip stacked directly on the sensor.

As consumers expect more and more features in smaller products, not only is the performance of the semiconductor transistors critical but so is the package in which they are contained. If the reliability and cost issues surrounding true 3D integration of stacked dies can be solved, for example, it will likely open up new opportunities for increased memory and processing capabilities in devices from smartphones to super-computing systems.

Display Technology Alphabet Soup – CRT, LCD, OLED, QD

Electronic displays are ubiquitous. We carry them almost everywhere we go. They are on our desks, in almost every room of our house, in our cars, on the backs of our airplane seats, in our public spaces, and on and on. During the age of the cathode ray tube (CRT), such proliferation of displays was not possible, but the advent of flat-panel display technologies, most notably the liquid crystal display (LCD), has allowed displays of all sizes to be put almost anywhere. Of course, the LCD effectively killed the CRT in the mid-2000s for all but niche applications, and it also managed to push out its competing flat-panel cousin, the plasma display, in the past few years.

While the LCD currently enjoys the predominant position in the display market, there are several new technologies that threaten to compete. The organic light emitting diode (OLED) display uses a thin film of electroluminescent organic molecules. These molecules emit red, green, or blue (RGB) light, depending on the molecule, when exposed to an electric current. This is very different from the standard LCD, which uses either a broad-spectrum CCFL or LED-based backlight and color filtering to create the reds, greens, and blues used to compose the picture. OLED displays boast lower power consumption, wider viewing angles, vivid colors, and higher contrast (deeper blacks) than their LCD counterparts. OLED displays can be transparent, flexible, and even rollable. Such attributes will undoubtedly lead to some very interesting product designs. One notable disadvantage for the OLED display is cost: right now, OLEDs are much more expensive to manufacture than standard LCDs.

Another technology that has been getting much attention recently is known as Quantum Dot (QD). It’s a very mysterious name. The word “quantum” may seem to conjure complex physics or technology of the future. Still, this “quantum” technology is available today. The quantum dot itself is a crystalline semiconductor nanostructure, usually based on cadmium (Cd), and spherical in shape (hence “dot”). The diameters of these nanostructures are measured in tens of atoms, or a few nanometers. In other words, they are really small. That’s where the quantum part of the name comes in. As its name implies, the operation of the quantum dot is based on the principles of quantum mechanics, or more specifically, the principle of quantum confinement.

Generally speaking, as the dimensions of a material become sufficiently small, some strange things (sometimes referred to as “quantum effects”) start to happen. In the case of the quantum dot, a very specific color of light is produced when the quantum dot is illuminated. The energy of the light emitted from the QD nanostructure is inversely related to the size of the QD. In other words, larger QDs produce redder (lower-energy) light than smaller QDs. (Red light has less energy than green light, which has less energy than blue light.) This selective color emission of different-sized QDs is what makes them attractive for use in displays.
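For the curious, this size-to-color relationship can be roughed out with a simplified particle-in-a-sphere confinement model. The sketch below uses approximate textbook constants for CdSe and ignores finer corrections (such as the exciton binding term), so the wavelengths are illustrative only:

```python
# Simplified quantum-confinement estimate (Brus-style, Coulomb term
# omitted); material constants are approximate values for CdSe.
import math

HBAR = 1.0546e-34      # reduced Planck constant, J*s
M0 = 9.109e-31         # electron rest mass, kg
EV = 1.602e-19         # joules per electron-volt

E_GAP_EV = 1.74        # approximate bulk CdSe band gap, eV
M_E, M_H = 0.13, 0.45  # approximate effective masses (units of m0)

def emission_wavelength_nm(radius_nm):
    r = radius_nm * 1e-9
    # Confinement energy rises as the dot shrinks (~ 1/r^2)
    conf_ev = (HBAR**2 * math.pi**2 / (2 * r**2)) \
              * (1 / (M_E * M0) + 1 / (M_H * M0)) / EV
    return 1239.84 / (E_GAP_EV + conf_ev)  # lambda(nm) = hc / E(eV)

for radius_nm in (2.0, 3.0, 4.0):
    print(f"R = {radius_nm} nm -> ~{emission_wavelength_nm(radius_nm):.0f} nm")
# Larger dots emit longer (redder) wavelengths: ~464, ~576, ~628 nm
```

Even this crude model captures the key behavior: shrink the dot and the emission shifts toward blue; grow it and the emission shifts toward red.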

It is also notable that the QD display is actually a form of LCD, with one of the main differences being the backlighting arrangement. In a QD display, blue LEDs are used in combination with QDs tuned to green or red to project very sharp-spectrum RGB colors (much sharper than can be produced by color filtering in standard LCDs). Thus, QD displays are capable of vivid colors similar to those found in OLED displays. They also use less power than a standard LCD. And they have a distinct advantage of being less expensive to manufacture than OLED displays, as they share many similarities with standard LCD manufacturing processes.

So while the standard LCD is currently the market leader in display technology, it will be interesting to see what the future brings. What will be dominant in 2020? Will it be OLED, QD, or some brand new innovation?