
PHOTONICSENS

PHOTONIC SENSORS AND ALGORITHMS SL
Country: Spain
6 Projects
  • Funder: European Commission · Project Code: 682804
    Overall Budget: 3,468,920 EUR · Funder Contribution: 2,428,190 EUR

    This proposal aims to develop 3-dimensional mini-cameras (producing 3D images) for mobile phones, tablets and laptop computers; more specifically, embedded camera modules providing 2D and 3D images, integrating sensing electronics and optics in a single-lens Wafer Level Camera (WLC). This market-creating, innovative device aims at a disruptive market share and global leadership in 3D-imaging camera modules, to accelerate our growth, our profitability and Europe's competitive position in smart cameras for 3D imaging. We introduce a paradigm changer by applying new sets of rules in one of the highest-growth areas addressable in consumer technologies, and we will set entry barriers that are difficult to overcome for current and new market players; we will hit the market while established competitors are still trying to figure out our algorithms, our IPR and our hardware/software solution. Our most disruptive competitive advantage is based on our algorithms. These exploit a novel hardware structure described in detail in the proposal. In short, we build micro-lenses on top of a semiconductor substrate containing image sensors, or pixels with several color filters. 3D has captured the public imagination through Hollywood mega-productions (such as James Cameron's Avatar). It has reached modest market penetration in different segments, including movie productions, household 3D TVs and cameras with two lenses. The hurdle to mass-market adoption of the technology is the lack of consistent-quality 3D content. The paradigm shift that our market-creating, innovative 3D camera module enables is that it meets the need for much higher-quality content production at a lower cost, and it will enable the creation of high-quality, consumer-owned 3D content on mobile platforms. Today, 3D content creation is the privilege of a small number of professional movie makers.
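    The abstract does not disclose the company's algorithms, but the hardware it describes (micro-lenses over an image-sensor substrate) is the classic plenoptic arrangement, in which pixels under each micro-lens sample slightly shifted sub-aperture views of the scene, and depth can be recovered from the disparity between those views. The following is a minimal illustrative sketch of that principle only; the function names, the brute-force block matching and the toy numbers are assumptions, not PHOTONICSENS's actual method:

    ```python
    import numpy as np

    def disparity_1d(left, right, max_disp=8):
        """Brute-force matching along one scanline.

        left/right: 1-D intensity arrays from two sub-aperture views
        (e.g. pixels under opposite halves of each micro-lens).
        Returns the integer shift (in pixels) that best aligns them.
        Hypothetical helper for illustration only.
        """
        best_d, best_err = 0, np.inf
        for d in range(max_disp + 1):
            # compare the overlapping samples after shifting by d pixels
            err = np.mean((left[d:] - right[:len(right) - d]) ** 2)
            if err < best_err:
                best_d, best_err = d, err
        return best_d

    def depth_from_disparity(disp_px, focal_px, baseline_mm):
        # classic triangulation: depth = focal_length * baseline / disparity
        return focal_px * baseline_mm / disp_px

    # synthetic scanline: the second view is the first shifted by 3 pixels
    left = np.array([0, 0, 1, 5, 9, 5, 1, 0, 0, 0, 0, 0], dtype=float)
    right = np.roll(left, -3)
    d = disparity_1d(left, right)
    depth = depth_from_disparity(d, focal_px=500.0, baseline_mm=2.0)
    ```

    In a real plenoptic camera the baseline between sub-aperture views is tiny (a fraction of the lens aperture), which is why sub-pixel matching and sophisticated algorithms, such as those the proposal alludes to, are needed in practice.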

  • Funder: European Commission · Project Code: 878873
    Overall Budget: 4,069,880 EUR · Funder Contribution: 2,848,910 EUR

    Recent market announcements have ignited the need among all mobile phone manufacturers to incorporate 3D cameras for AR (Augmented Reality), MR (Mixed Reality), VR (Virtual Reality) and face recognition. These cameras have applications in phone unlocking, biometric analysis, cybersecurity and digital trust but, more importantly, they prepare the terminals for disruptive AR, VR and MR applications. The objective of this project is to develop the most disruptive cameras for AR, MR, VR and depth-estimation applications for mobile phones, tablets and laptop computers: specifically, an embedded camera module, a highly competitive device aiming at disruptive market shares and global leadership. These devices will offer the same precision at 5 meters as competing cameras (Structured Light [SL] and Time of Flight [TOF] cameras) offer at 1 meter, will drastically increase the resolution of the depth map (from 40 Kpixels to 1.45 Mpixels), will consume 7 times less power than the second-lowest-power competitor, will reduce the cost of the camera's BOM (Bill of Materials) by more than 50%, and will provide real-time 3D video and depth maps (versus a few frames per second). The TOF and SL solutions need 4 devices to offer worse performance than our one or two devices, which also reduce bulkiness and power dissipation.

  • Funder: European Commission · Project Code: 820615
    Overall Budget: 4,190,770 EUR · Funder Contribution: 2,913,620 EUR

    This proposal aims to develop the most competitive cameras for face recognition, Mixed Reality and Augmented Reality for mobile phones, tablets and laptop computers: an embedded camera module, a highly competitive device aiming at a disruptive market share and global leadership. Our device reduces the cost of the camera's BOM (Bill of Materials) by more than 50%, drastically increases the resolution of the depth map (from 40 Kpixels to 1.45 Mpixels), provides real-time video and consumes much less power. Our commercial contacts with Korean, Chinese and Taiwanese manufacturers of mobile phones and tablet computers, customers for our first depth cameras, confirm that our disruptive 3D imaging devices will sell millions of units in the first year following the initial design win(s). According to IDC and WCP, well-known analyst firms, the number of smartphones, tablets and laptops forecast for 2018 will reach 2.25 billion devices. A large percentage of the manufacturers of this combined total of 2.25 billion consumer electronic products are potential customers for us. Our most disruptive competitive advantage is based on our algorithms (which require only 1% of the computing power required by the competition): we create high-quality depth maps, 3D content and all-in-focus images, and we can process 60 fps video on Android processors while competitors need seconds to minutes to process a single frame on powerful GPUs. We disrupt established value chains, accelerating the development of ideas into business-driven new products, targeting turnovers of EUR 2 billion by 2022-23 for the partners of this consortium. Synergies with existing products guarantee swift exploitation of results, starting before the end of the project.

  • Funder: European Commission · Project Code: 646984
    Overall Budget: 71,429 EUR · Funder Contribution: 50,000 EUR

    This proposal aims to develop miniaturized 3-dimensional (3D) cameras for mobile phones, tablets and laptop computers; more specifically, embedded camera modules providing 2D and 3D images, integrating image-sensing electronics and light-field optics in a single-lens Wafer Level Camera (WLC). This disruptive device aims at significant market shares and global leadership in 3D-imaging camera modules, accelerating our growth and Europe's competitive position in smart cameras for 3D imaging. We introduce a paradigm changer in one of the highest-growth areas addressable in consumer technologies, setting entry barriers that are difficult to overcome for current and new market players; we will hit the market while established competitors are still trying to figure out our algorithms, our IPR and our hardware solution. Our most disruptive competitive advantage is based on our algorithms, which exploit a novel hardware structure. First contacts with Taiwanese distributors and manufacturers of laptop/network/tablet computers and mobile phones suggest that such a disruptive device can sell millions of units. This phase-1 study will minimize technical and market-acceptance risks. We extend the well-known FSV model to FSOV (Fabless Semiconductor and Optical Vendor): manufacturing, both of the semiconductor image sensors and of the optical layers on top of them, will be performed by third parties (silicon foundries and optical foundries). One of the main tasks of this feasibility study is to choose the best manufacturing partners from the handful of candidate companies in both sectors (silicon and optics), as well as to obtain exact quotations for the cost of the BOM (Bill of Materials); we will also visit some potential customers to verify the correctness of our data (sales volumes and price targets) and build a Business Plan that will include a Product Roadmap, a Project Plan, and Marketing and Sales Plans.

  • Funder: European Commission · Project Code: 101226375
    Funder Contribution: 4,477,800 EUR

    The evolving needs of society demand advanced control of electromagnetic waves. At the forefront of science are metasurfaces—ultrathin engineered layers with tailored electromagnetic responses. While traditional metasurfaces are statically predesigned, current research is shifting toward reconfigurable metasurfaces, which can dynamically adjust their properties after fabrication. This capability expands their potential for real-world applications across a range of industries. Despite their promise, most reconfigurable metasurfaces rely on the element-by-element tunability of large arrays that require high energy consumption, complex implementation, and high costs. MetaTune addresses these challenges by training the next generation of researchers to develop a new class of reconfigurable metasurfaces combining simplicity of implementation, multifunctionality, and industrial viability. These metasurfaces will feature unified tunability mechanisms, reducing complexity and energy usage, while incorporating innovative materials for enhanced adaptability, robustness, and endurance. Furthermore, the project will pioneer cost-effective fabrication techniques, ensuring compatibility with large-scale production. The program adopts a multidisciplinary approach, conducting application-driven research in four high-impact domains: communication systems, thermal management, sensing, and imaging. Beyond scientific training, participants will gain comprehensive industry exposure and develop transversal skills, preparing them for diverse career pathways in academia and industry. By tackling these critical technological challenges, MetaTune not only advances fundamental knowledge in physics and engineering but also bridges the gap between laboratory research and industrial implementation. This project paves the way for integrating reconfigurable metasurfaces into practical systems, delivering societal benefits through improved connectivity, sustainability, safety, and innovation.

