[{"content":"November was a month filled with learning and sharing! After the inspiring XR Summit 2024 at IIT Madras, I had the privilege of attending another fantastic event—XConf 2024 by Thoughtworks India. Here are my key takeaways from this incredible gathering of technologists.\nAbout XConf XConf is Thoughtworks\u0026rsquo; annual technology event, designed by technologists for technologists. Sponsored this year by AWS and CockroachDB, it celebrates innovation and the transformative power of software in the real world.\nHaving spoken at XConf 2021, 2022, 2023, this year was extra special for me. I proudly showcased my book 📕 Exploring the Metaverse, at the Author’s Corner.\nI also supported the Emerging Tech Track and contributed to the Experience Zone, which featured hands-on demonstrations of cutting-edge tech.\nHere’s a glimpse into the highlights of the event:\n Top Tech Trends Bharani Subramaniam and Vanya Seth shared this year’s top 8 tech trends, including:\n Generative AI Beyond Coding: Moving from \u0026ldquo;10x developers\u0026rdquo; to \u0026ldquo;10x teams\u0026rdquo; with AI-assisted collaboration tools like Thoughtworks Heaven. RAG (Retrieval-Augmented Generation) and SLM: Highlighting the shift toward lightweight, efficient AI. LLM Evaluations and Local-First Software: Building reliable systems that are user-centric and scalable. Keynote Highlights Innovation at Scale: Agentic AI Kiran Jagannath and Satinder Singh from AWS shared Amazon\u0026rsquo;s vision for powering India\u0026rsquo;s growth as a global tech leader. AWS’s contributions to projects like CoWIN, DigiYatra, and UPI demonstrate technology\u0026rsquo;s transformative role in democratizing talent and driving India toward becoming the 3rd largest economy.\nThey also talked about how AWS’s in-house AI chips will be making LLM compute accessible to masses, with Amazon Bedrock Agent pushing boundaries in Agentic AI.\n 🇮🇳 Astrophysics: A Technological Marvel Hearing from Annapurni Subramaniam, Director of the Indian Institute of Astrophysics, was an awe-inspiring experience. She shared the cutting-edge tech behind varius observatory telescopes in India, India’s astrophysical exploration and contributions to the Thirty Meter Telescope (TMT) project.\nFor me, space exploration feels like the ultimate metaverse—a new world waiting to be discovered. Seeing the tech behind controlling observatories and telescopes remotely for years is truly remarkable.\n Tech Tracks XConf offered insightful tracks Data \u0026amp; AI, Distributed Systems and Emerging Tech\nThe Emerging Tech Track, where I supported to bring up following interesting talks to the audience:\n The evolution of XR Continuous compliance Engineering practices for machines I also snuck into some sessions from other tracks, and most were packed with fascinating insights!\n Experience Zones The Experience Zone showcased cutting-edge tech, including Apple Vision Pro, industrial AR/VR applications, and GenAI demonstrations. Our GenAI Booth, along with partner booths from AWS and CockroachDB, attracted enthusiastic crowds.\n Author\u0026rsquo;s Corner The Author’s Corner, introduced last year, continued to shine as a unique highlight. This year, I proudly displayed my book 📕 Exploring the Metaverse, alongside Unmesh Joshi\u0026rsquo;s Patterns of Distributed Systems. 
Visitors could engage with us directly, ask questions, and even get their copies signed!\nUnmesh also led a workshop, leaving participants with valuable hands-on learning experiences.\n The event was filled with knowledge-sharing, collaboration, and inspiration, all complemented by amazing food and great conversations.\nWishing everyone continued success on their tech journeys! Let’s keep innovating and exploring. 🚀\n","id":0,"tag":"xr ; ar ; vr ; mr ; thoughtworks ; event ; takeaways ; xconf ; avtech ; ai ; gen-ai ; conference ; sdv ; EmergingTech","title":"XConf-2024 - Thoughtworks India - Takeaways","type":"event","url":"/event/thoughtworks-xconf-2024/"},{"content":"I recently had the privilege of attending XR Summit 2024 at the eXperiential Technology Innovation Center (XTIC), IIT Madras. My heartfelt thanks to Prof. Mani and his team for the kind invitation and to Thoughtworks for making all the arrangements seamless.\nThe event was nothing short of extraordinary – a world-class summit showcasing groundbreaking advancements, with an exciting lineup of sessions led by industry pioneers and academics. The topics covered were diverse, thought-provoking, and visionary. In this article, I’m sharing some of my key takeaways from this incredible experience.\n🥽 Oculus: A Marvel of Engineering It was inspiring to hear directly from Steven LaValle (Principal Scientist at Oculus in its early days) about the engineering brilliance behind Oculus. This revolutionary device created such a buzz that Oculus was acquired by Facebook (now Meta) for $2 billion within just a few years of its inception.\nSteve shared insights into how AI, estimation techniques, and human perception engineering were woven into the development of Oculus. His focus on designing human-centric experiences that go beyond technology resonated deeply with me.\nWhile the XR industry still faces challenges like battery life, device weight, and ergonomics, the advancements so far are awe-inspiring, and the field continues to evolve. For instance, using XR to train AI agents—similar to scenarios I wrote about earlier, like virtual car parking—highlights the profound potential of merging perception engineering with immersive tech.\nIt’s clear that XR will remain a significant focus area, continuing to amaze us despite the ups and downs of the industry hype cycles.\n🤝 Government, Academia, and Law: Collaboration in Action The event emphasized the critical collaboration between government, academia, and law in advancing XR. I was amazed by the various initiatives aimed at skill development, supported by state and central governments. Representatives from the Media and Entertainment Skills Council (MESC) highlighted numerous certification programs and training initiatives under Skill India and PMKVY. Similarly, the NPTEL courses by IITs and others are playing a significant role in upskilling talent in this domain.\nMr. Ashish Kulkarni, Chairman of AVGC-XR Forum, FICCI, discussed opportunities in AVGC XR and ongoing work on defining standards and policies for the AVGC sector, sharing initiatives with both state and central government in India 🇮🇳.\nMs. N.S. Nappinai, Sr. Advocate, Supreme Court, provided crucial insights on the legal aspects of XR, emphasizing the importance of creating a safe, inclusive digital ecosystem. Her discussion on cyber misogyny and gender justice underscored the need for responsible technological advancements.\nThe collaboration at Metaverse India Policies and Standards (MIPS), of which I am a proud member, also aligns here. 
All this aligns with the themes of my book, 📕 Exploring the Metaverse, and reflects the need for a collective effort from governing bodies, academia, industry, and individuals.\n👁️ Lensless Camera: A Game-Changing Innovation One of the most jaw-dropping sessions was on the lensless camera, presented by Dr. Kaushik Mitra. The concept of a camera without a lens seems unimaginable, yet this innovation uses sensors to generate image masks and reverse-engineer actual pictures through algorithms.\nThis breakthrough could significantly reduce the weight and bulk of XR devices, paving the way for applications in ingestible devices, endoscopy, and more. While still in its early stages, the potential is immense.\n⌛️ Extending Reality: A Long-Term Vision Renowned academics such as Prof. Marimuthu Palanisamy (University of Melbourne), Prof. Mel Slater (University of Barcelona), Dr. Anna Yershova LaValle (University of Oulu), and Prof. Mandayam Srinivasan (MIT Touchlab) shared insights into the evolutionary nature of XR technologies.\nTheir discussions reinforced that extending reality is a long-term vision, requiring continuous innovation and investment. It is not a trend tied to the hype cycle but an ongoing journey that demands perseverance and creativity.\n🗣️ Ethics, Security, and Responsible Technology I joined panelists Mr. Anumukonda Ramesh, Mr. Prem Balachandran, Mr. Aditya Raut, Dr. Pravin Nair, and Mr. Alagarsamy Mayan to discuss “Convergence of XR with AI, Gaming, and AVGC”.\nThere were a number of inspiring sessions covering pressing issues like ethics, trust, and security in the XR space, spanning industries such as healthcare, education, cybersecurity, manufacturing, and retail. I wish I could have attended all of them, but they ran in parallel tracks. The discussions were rich with insights and reflected a shared commitment to building a responsible, trustworthy future for XR, Web3, and the Metaverse by industry leaders from TCS, HCL Tech, Samsung, Ericsson, Merkel Haptic Systems, and more.\n🧠 Aligning XR Design with Neuroscience One of the highlights of the summit was Mohsin Memon’s workshop on how neuroscience informs XR design. He presented the AFERR model (Activation, Forecasting, Experimentation, Realization, and Reflection), which emphasizes the importance of designing XR experiences that align with human brain functioning.\nI have often stressed in my book 📕 Exploring the Metaverse that XR is about changing beliefs, and design plays a critical role in achieving this transformation. Mohsin’s research further reinforced this perspective.\n🎭 The Convergence of Art, Science, and Technology One of the most inspiring moments of the summit was witnessing the convergence of art, science, and technology. The VR movie Le Musk, created by the legendary Dr. A.R. Rahman, is a testament to this synergy.\n Dr. Rahman was honored with the XTIC Award 2024 for Innovation.\n He also presented the startup awards and unveiled the XTIC Chronicle’s Special Edition, which features my article and book. This convergence of creativity and innovation encapsulates the essence of XR.\n🤔 Reflections and Closing Thoughts The XR Summit 2024 was a perfect blend of innovation, collaboration, and inspiration. From exploring cutting-edge technologies to discussing ethical and legal frameworks, the event provided a 360-degree view of the XR landscape. 
Overall, it fit well with the theme of my book.\n 📕 I am delighted to have shared Exploring the Metaverse with the audience and to see its relevance reflected in the discussions.\nThe summit also featured fascinating booths from companies like Sony, Qualcomm, StoriXR, Merkel Haptic Systems, and more. My engaging conversations with Qualcomm executives on Spaces SDK and OpenXR were a fitting reminder of the work my team and I have done with Lenovo’s ThinkReality.\nIt was great to meet leaders Mr. Anumukonda Ramesh, Keyur Bhalavat, Navin Manaswi, Rabindra Sah, Thriveni, and many more in person. The event was also clubbed with an AWE Nite, featuring a series of discussions, a fireside chat, and a delicious dinner.\nLet’s continue to learn, share, and innovate as we extend the boundaries of reality!\n","id":1,"tag":"thoughtworks ; event ; talk ; xr ; ar ; vr ; mr ; ai ; spatial-computing ; community ; learnings ; AWE ; metaverse ; business ; panelist ; genAI ; 3d ; expert insights ; genXR ; book ; enterprise ; takeaways ; IIT ; XTIC ; MIPS ; FICCI","title":"XR Summit 2024, IIT Madras - Takeaways","type":"post","url":"/post/xr-summit-2024-takeaways/"},{"content":"🚀 My Book, Exploring the Metaverse, is now available in stores worldwide, including Amazon and BPB Online. It features forewords and testimonials from industry pioneers, and it aims to uncover the concept of the Metaverse, offering insights into its technological and business implications.\nA special highlight is the endorsement from Mr. Ravindra Dastikop—an esteemed academician, entrepreneur, emerging technologies leader, and author of Metaverse Glossary: Your Gateway to the Future. Collaborating with him has been an incredible journey!\n🙏 A heartfelt thank you, Ravindra, for your invaluable feedback on the early drafts and your thoughtful foreword. Your insights have truly enriched this work.\n📖 Explore the Metaverse and redefine your understanding of the digital future!\n \u0026ldquo;The Metaverse stands as one of the most recent advancements in a series of technologies with the potential to profoundly shape humanity\u0026rsquo;s future. It holds the promise of transforming the ways we work, connect, and communicate. Acting as a bridge between the real and virtual worlds, the Metaverse allows individuals to metamorphose into new ‘amphibians’ capable of seamlessly navigating between these realms. This book is a valuable resource for anyone embarking on the exploration of this transformative technology. By demystifying technical jargon and presenting content in a learner-friendly structure, this book provides a comprehensive overview of the Metaverse domain. Additionally, it stands out as a suitable textbook for introductory courses catering to students, employees, and professionals, including executive education programs.\u0026rdquo; – Ravindra Dastikop Assistant Professor, Co-founder Pixuate Author Metaverse Glossary: Your Gateway to the Future\n --- 🛍️ Grab Your Copy Today! Exploring the Metaverse is available on Amazon and BPB Online in both paperback and eBook formats. 
Get a sneak peek of the book and explore a free preview here.\n☞ Click here for availability and discounts in your region\n☞ Read: Exploring the Metaverse - Synopsis\nDiscover more about my books here.\n","id":2,"tag":"xr ; ar ; vr ; metaverse ; book ; available now ; event ; buy ; launch ; meta ; blockchain ; NFT ; IOT ; AI ; thoughtworks ; event ; MR ; testimonial ; xros","title":"🙏 Exploring the Metaverse | Testimonial 9","type":"post","url":"/post/exploring-the-metaverse-testimonial-9/"},{"content":"🚀 Excited to announce that I’ll be part of the XR Summit 2024 hosted by XTIC, IIT Madras, a hub for driving innovation in XR technologies in India! 🌟\nIIT Madras, with its state-of-the-art Haptics Lab and the IoE Center for VR and Haptics, supported by the CAVE consortium (via XTIC.org), is at the forefront of advancing experiential technologies. This summit will bring together key stakeholders, industry leaders, and visionaries to shape the future of XR.\nI’m honored to be speaking at this prestigious event, which also serves as a trial run for a future AWE event in India! I’ll join an esteemed panel to discuss \u0026ldquo;XR with AI, Gaming, and AVGC\u0026rdquo;, exploring the latest innovations and challenges at the intersection of these domains.\n🎤 Session Details: Innovation in XR – A discussion on the convergence of XR with AI, Gaming, and AVGC. Gain insights from industry leaders, including experts from Thoughtworks, Qualcomm, IIT Madras and more:\n📅 Date: November 16–17, 2024\n🕒 Time: November 16, 11:30 AM – 12:45 PM\n📍 Location: Main Stage, ICSR, IIT Madras\n🗣️ Panelists:\n Mr. Anumukonda Ramesh (Moderator) - Game Engine Expert, Founding Partner, Advisor - APlus Associate Mr. Prem Balachandran - Founder and CEO, Aatral XR Mr. Kuldeep Singh - Me, Engineering Leader and Author, Thoughtworks Mr. Aditya Raut - Senior Staff Engineer, Qualcomm Dr. Pravin Nair - Assistant Professor, Indian Institute of Technology, Madras Mr. Alagarsamy Mayan - Award-Winning Animation \u0026amp; VFX Expert, CEO, TAS Digital Media Private Limited ✨ Register here: XR Summit Registration\nStay tuned for more updates, and I look forward to connecting with the XR community at IIT Madras! Let’s keep learning, sharing, and building the future of immersive tech. 🙌 Full Takeaway here - XR Summit 2024 - Takeaways\n","id":3,"tag":"event ; speaker ; talk ; xr ; ar ; vr ; mr ; ai ; spatial-computing ; community ; AWE ; metaverse ; business ; panelist ; genAI ; 3d ; expert insights ; genXR ; book ; enterprise ; IIT ; University ; XTIC ; thoughtworks ; AVGC","title":"XR Summit and AWE Nite | XR with AI, Gaming, AVGC","type":"event","url":"/event/xr-summit-2024/"},{"content":"🚀 My Book, Exploring the Metaverse, is now available in stores worldwide, including Amazon and BPB Online. It features forewords and testimonials from industry pioneers, and it aims to uncover the concept of the Metaverse, offering insights into its technological and business implications.\nOne of the standout testimonials is from Mr. 
Hemanth Kumar Satyanarayana, Founder and CEO of Imaginate XR, an award-winning innovator, MIT TR35 Young Innovator, IIT Madras alum, and TEDx speaker, who has dedicated over two decades to building the XR ecosystem.\n🙏 Heartfelt thanks to Hemanth for his invaluable feedback on early drafts and his thoughtful foreword:\n \u0026ldquo;In the heart of India, Kuldeep Singh stands out as a visionary, guiding us through the uncharted territories of the Metaverse in his groundbreaking work, \u0026ldquo;Exploring the Metaverse: Redefining Reality in the Digital Age\u0026rdquo;. As the founder of Imaginate XR, that mirrors the innovation pulsating from this vibrant nation, I find parallel inspiration in both Kuldeep\u0026rsquo;s journey and our collective mission. Kuldeep\u0026rsquo;s exploration transcends geographical boundaries, skillfully weaving together a narrative that resonates with anyone seeking a deeper understanding of our digital future. From unraveling the origins of the Metaverse to dissecting its various forms, he paints a vivid picture of this evolving landscape. In the spirit of Trishanku Swarg | त्रिशंकु स्वर्ग, symbolizing a celestial realm between heaven and earth, Kuldeep\u0026rsquo;s work becomes a guiding light—a delicate balance between the known and the unknown. As we read the chapters spanning XR, AI, IoT, and Blockchain, it becomes evident that Kuldeep\u0026rsquo;s vision transcends the local and resonates globally. Exploring the Metaverse is not just a book, it\u0026rsquo;s an invitation to join a collective exploration where ideas born in India shape the broader narrative of the digital age. Congratulations, Kuldeep, on this remarkable endeavor! Wishing him the very best as his work contributes to the global conversation on the future of the Metaverse.\u0026rdquo; – Hemanth Kumar Satyanarayana Founder Imaginate XR, MIT-TR35 Young Innovator, TEDx Speaker\n The book also highlights fascinating work by Imaginate XR as we explore potential use cases and opportunities in the Metaverse.\n --- 🛍️ Grab Your Copy Today! Exploring the Metaverse is available on Amazon and BPB Online in both paperback and eBook formats. Get a sneak peek of the book and explore a free preview here.\n☞ Click here for availability and discounts in your region\n☞ Read: Exploring the Metaverse - Synopsis\nDiscover more about my books here.\n","id":4,"tag":"xr ; ar ; vr ; metaverse ; book ; available now ; event ; buy ; launch ; meta ; blockchain ; NFT ; IOT ; AI ; thoughtworks ; event ; MR ; testimonial ; xros","title":"🙏 Exploring the Metaverse | Testimonial 8","type":"post","url":"/post/exploring-the-metaverse-testimonial-8/"},{"content":"📖 My Takeaways from Our Next Reality by Alvin Wang Graylin and Louis Rosenberg\n☞ Buy - Exploring the Metaverse ☞ Buy - Our Next Reality \nOur Next Reality topped my reading list for months, and travel always helps me catch up on books! After meeting Alvin at AWE Asia 2024, I was eager to dive in and finally complete it. Following recent trips to Singapore and now the US, here’s what I found worth sharing from the book.\nIn a nod to my book, 📕 Exploring the Metaverse: Redefining Reality in the Digital Age, I initially used the phrase “Exploring the metaverse is our next reality.” But after reading Our Next Reality, I’ve adapted this to “Exploring the AI-powered Metaverse is our next reality,” to capture how AI will shape our immersive future.\nAs an author, I naturally drew comparisons between this masterpiece and my own work. 
Here are a few of my key takeaways from Our Next Reality:\n 🧠 Evolution of Intelligence Alvin’s and Louis’s perspectives converge on the evolution of intelligence, covering the journey from basic AI systems to LLMs, and onward toward AGI (Artificial General Intelligence) and ASI (Artificial Super Intelligence). Alvin shares forward-looking use cases, while Louis counters with the challenges, emphasizing responsible development. While my book begins with human and technological evolution, Our Next Reality brings focus to the evolution of intelligence and its pervasive impact on our lives.\nI particularly enjoyed their equation for the metaverse:\n Metaverse = Internet (3D)AI ⇆ XR\n The book, much like Part 2 of Exploring the Metaverse, delves into how emerging technologies—XR, AI, IoT, cloud computing, and networks—are accelerating the shift to a more interconnected, immersive future. With a balance of hard facts, figures, and over 30 years of experience, the authors offer a compelling perspective on a centralized and decentralized metaverse and its global evolution.\n🔋 AI-Powered Metaverse: Use Cases and Concerns In Part 3 of my book, I cover the metaverse across sectors, from gaming and entertainment to healthcare, education, and industry. Similarly, Our Next Reality explores AI-driven applications, from virtual sales agents that redefine marketing to AI-powered teachers and healthcare providers.\nThe book delves deeply into privacy, security, and the future of emotional intelligence. AI’s capabilities extend to sensing beyond human perception—seeing, hearing, and interpreting emotions we might not consciously reveal. This capacity raises questions about privacy and the subtle influence on our choices, which we may trust more than human interactions. Meanwhile, the book also addresses AI biases and the challenges of \u0026ldquo;AI inbreeding,\u0026rdquo; where AI may increasingly be trained on AI-generated data.\n👑 The Abundance Paradigm One unique theme is abundance—the idea that the AI-powered metaverse will grant us unlimited access to goods and services. The book touches on Universal Basic Income (UBI) and the notion of an economy of abundance, where content, production, and consumption will be accessible to all. It brought to mind the film Kalki 2898 AD, where people survive in a dystopian society yet thrive in a world of abundance by earning credits to access a richer life.\n In this setting, the metaverse could allow us to accomplish what’s impossible in reality, fostering creativity and freedom from traditional boundaries.\n☞ Buy - Exploring the Metaverse ☞ Buy - Our Next Reality \n🧬 Optimizing the Humanity Out of Humanity While AI promises unprecedented productivity, there’s a concern it may “optimize the humanity out of humanity.” With AI trained on human data, it’s critical to keep humans in the loop to retain core values. The book explores the geopolitical implications of an AI-powered metaverse and the potential of swarm intelligence to foster a collective superintelligence.\n🏹 Our Next Reality Just as I emphasize development practices, standards, and policies in Exploring the Metaverse, Our Next Reality speaks to our role in creating a balanced future. The book presents six basic rights for the metaverse, striving for an achievable utopia where tech enhances well-being. 
The authors share insights for policymakers, developers, industry leaders, and others on building a responsible, equitable metaverse.\n Overall, Our Next Reality is a must-read for anyone interested in AI, the metaverse, and the future of technology. I highly recommend it to readers keen on exploring what lies ahead.\n☞ Buy - Our Next Reality ☞ Buy Exploring the Metaverse\n","id":5,"tag":"book ; book-review ; available now ; buy ; launch ; blockchain ; NFT ; IOT ; AI ; thoughtworks ; xr ; ar ; vr ; mr ; ai ; spatial-computing ; learnings ; AWE ; metaverse ; genAI ; 3d ; genXR ; takeaways","title":"Exploring the AI-powered Metaverse is Our Next Reality","type":"post","url":"/post/exploring-ai-powered-metaverse/"},{"content":"In previous articles, we explored MQTT in-depth, even setting up a web-based remote controller that simulated a physical button. We did this in a “Soft IoT” way, avoiding hardware intricacies. However, in IoT, networking between devices plays a crucial role. Not all IoT devices are connected to the Internet or can communicate via MQTT over the internet, as shown in our earlier examples. These devices need a low-latency, low-power network that can auto-discover and manage device communication seamlessly. Popular protocols like Zigbee, Matter, and Thread are designed for this purpose. In this article, we’ll focus on Thread, discussing its fundamentals and building a virtual Thread network — no physical hardware required.\n🤔 About Thread Thread is a lightweight, low-power communication protocol for IoT devices. It operates on IPv6 and IEEE 802.15.4 mesh networking, providing secure, scalable, and reliable connectivity. Google’s OpenThread is an open-source implementation of the Thread protocol, widely used in smart home devices like Google Nest.\nOpenThread also offers a simulation of network nodes, including roles like leader, router, and child nodes, with automatic adjustments when nodes go offline. The protocol allows for efficient network discovery and seamless router selection, ensuring continuous connectivity.\nLearn more about node types and IPv6 addressing for Thread networks from the OpenThread Primer.\n🛜 Creating a Virtual Thread Network To see Thread in action, we’ll simulate a Thread network using Docker, creating virtual nodes that form a mesh network.\nStep 1: Set Up Docker First, download and install Docker Desktop. 
Then, pull the OpenThread image using the following command:\n➜ ~ docker pull openthread/environment:latest Once the image is downloaded, run it in interactive mode with network administration capabilities and IPv6 enabled:\n➜ ~ docker run --name codelab_otsim_ctnr -it --sysctl net.ipv6.conf.all.disable_ipv6=0 --cap-add=net_admin openthread/environment bash root@a5cce65da408:/# Step 2: Start an Emulated Thread Device The OpenThread Docker image includes a pre-built binary called ot-cli-ftd, a command-line tool for creating emulated Thread devices.\nroot@a5cce65da408:/# cd /openthread/build/examples/apps/cli/ root@a5cce65da408:/openthread/build/examples/apps/cli# ./ot-cli-ftd 1 \u0026gt; Next, assign a random dataset to the device, which includes a unique network ID, network keys, and a network name:\n\u0026gt; dataset init new Done \u0026gt; dataset Active Timestamp: 1 Channel: 17 Wake-up Channel: 25 Channel Mask: 0x07fff800 Ext PAN ID: abf8abcc3dd0daaf Mesh Local Prefix: fd16:2a3a:b22f:3dfe::/64 Network Key: 4084c92fe70f2d8968df19aa5de3cfe5 Network Name: OpenThread-4de8 PAN ID: 0x4de8 PSKc: bd48fa9ca97d01c2052e18951b312e25 Security Policy: 672 onrc 0 Done \u0026gt; Now, bring up the interface and start the Thread network:\n\u0026gt; dataset commit active Done \u0026gt; ifconfig up Done \u0026gt; thread start Done \u0026gt; state leader Done \u0026gt; Check the device’s state — it should show as a “leader” in few seconds, indicating that this node has become the leader of the network.\n\u0026gt; ipaddr fd16:2a3a:b22f:3dfe:0:ff:fe00:fc00 fd16:2a3a:b22f:3dfe:0:ff:fe00:6c00 fd16:2a3a:b22f:3dfe:5f94:95df:d7e6:d730 fe80:0:0:0:180e:a1f8:2561:9953 Done \u0026gt; Note the IP address, as per the IPv6 addressing, ff:fe00 addresses are router locator (RLOC), while fd16:2a3a:b22f:3dfe:5f94:95df:d7e6:d730 is the end point indentifier (EID) for this node.\nStep 3: Add More Nodes to form the Network Now, let’s add a second device to the network. Open a new terminal and start a second instance of the Docker container:\n➜ ~ docker exec -it codelab_otsim_ctnr bash root@a5cce65da408:/# /openthread/build/examples/apps/cli/ot-cli-ftd 2 \u0026gt; Ensure the new device uses the same network key and PAN ID as the first node:\n\u0026gt; dataset networkkey 4084c92fe70f2d8968df19aa5de3cfe5 Done \u0026gt; dataset panid 0x4de8 Done \u0026gt; dataset commit active Done \u0026gt; ifconfig up Done \u0026gt; thread start Done This device will initially join the network as a child node, and after some time, it will promote itself to a router.\n\u0026gt; state child Done \u0026gt;state router Done \u0026gt;ipaddr fd16:2a3a:b22f:3dfe:0:ff:fe00:2c00 fd16:2a3a:b22f:3dfe:ea5e:d106:d0ad:7a4 fe80:0:0:0:84ca:6d98:48:4c34 Done \u0026gt; This creates the a Thread network of two nodes. In similar way you may add more nodes. I added one more node with RLOC16 as 0xe000 (last 16 bit of Addr)\nStep 4: Verify the Network To check the network, use the router table command to verify that all router nodes are part of the mesh network:\n\u0026gt; router table | ID | RLOC16 | Next Hop | Path Cost | LQ In | LQ Out | Age | Extended MAC | Link | +----+--------+----------+-----------+-------+--------+-----+------------------+------+ | 11 | 0x2c00 | 56 | 1 | 3 | 3 | 1 | 86ca6d9800484c34 | 1 | | 27 | 0x6c00 | 63 | 0 | 0 | 0 | 0 | 1a0ea1f825619953 | 0 | | 56 | 0xe000 | 11 | 1 | 3 | 3 | 4 | d2be59ce650916de | 1 | You should see the RLOC16 (Router Locator) entries of each node in the table. 
Nodes can also communicate with each other by pinging their IPv6 addresses.\n\u0026gt; ping fd16:2a3a:b22f:3dfe:0:ff:fe00:2c00 16 bytes from fd16:2a3a:b22f:3dfe:0:ff:fe00:2c00: icmp_seq=1 hlim=64 time=4ms 1 packets transmitted, 1 packets received. Packet loss = 0.0%. Round-trip min/avg/max = 4/4.0/4 ms. Done \u0026gt; You may also create an emulated device of type MTD (Minimal Thread Device) using the ot-cli-mtd command; an MTD acts as a child only and will not be promoted to a router. Follow the same steps to get it onto the same network.\nroot@a5cce65da408:/# /openthread/build/examples/apps/cli/ot-cli-mtd 4 \u0026gt; I added node 4 this way, and it got linked to router node 3.\nStep 5: Simulate Network Behavior Now, let’s test how the Thread network behaves under different conditions:\nNode Failure If one node goes down, another node will take over its role. For example, if the leader goes offline, one of the other routers will be promoted to leader.\n Node 1 — disabled\n \u0026gt; thread stop Done \u0026gt; state disabled Done Node 2 — downgraded to a child\n \u0026gt; state router Done ... \u0026gt; state child Done Node 3 — promoted to leader\n \u0026gt; state router Done ... \u0026gt; state leader Done Node 4 remains as child.\n Similarly, we can try stopping node 3 as well; this causes node 2 to become a router again and node 4 to attach to node 2. Node 4’s IP address also changes, as per the IPv6 addressing for children.\nNode Rejoining When a node re-joins the network, it first attaches as a child node and may later promote itself to a router as needed.\n\u0026gt; thread start Done \u0026gt; state router Done This demonstrates how Thread’s self-healing capabilities ensure the network remains stable and operational, even when nodes go offline or rejoin. Additional features, like configuring authentication and making the Leader a Commissioner to control which devices can join the network, provide further customization. For more in-depth guidance, refer to the codelab.\nConclusion This article introduced the fundamentals of the Thread protocol and demonstrated how to create a virtual Thread network using Docker. We saw how Thread’s mesh networking ensures robust and dynamic device communication. In the next tutorial, we’ll extend this setup with real hardware and explore how to connect a Thread network to the web for broader accessibility.\nIn Chapter 4 of my book 📕Exploring the Metaverse, I discuss how IoT will shape our connected future, and XR devices are the evolved IoT.\n☞ Click here for availability and discounts in your region\n☞ Read: Exploring the Metaverse - Synopsis\nStay tuned for more! Keep learning, keep experimenting, and keep sharing.\nFind the IoT Practices Publication for more details.\n","id":6,"tag":"IOT ; network ; cloud ; getting started ; learning ; technology ; fundamentals ; thread ; openthread ; docker","title":"Understanding the Thread Protocol and Network","type":"post","url":"/post/iot-the-thread-protocol-and-network/"},{"content":"Is the metaverse hype or reality? 
My new book explores what lies ahead \u0026ldquo;Exploring the Metaverse\u0026rdquo; is now available globally!\nRead the detailed synopsis of the book here, and testimonials from a diverse set of industry leaders\nBuy Paper Back: 🇧🇪 Amazon Belgium - 👉 Get it in 32,65€ (3% off) 🇧🇷 Amazon Brazil - 👉 Get it in R$436,82 (Out of Stock) 🇨🇦 Amazon Canada - 👉 Get it in $45.34 🇫🇷 Amazon France - 👉 Get it in 32,65€ 🇩🇪 Amazon Germany - 👉 Get it in €33.12 🇮🇳 Amazon India - 👉 Get it in ₹719 (20% off) 🇮🇹 Amazon Italy - 👉 Get it in 32,19€ 🇯🇵 Amazon Japan - 👉 Get it in ¥3,826 (31% off) 🇲🇽 Amazon Mexico - 👉 Get it in $672.54 🇳🇱 Amazon Netherlands - 👉 Get it in €33,74 🇵🇱 Amazon Poland - 👉 Get it in 139,18zł 🇸🇦 Amazon Saudi Arabia - 👉 Get it in ريال112 🇸🇬 Amazon Singapore - 👉 Get it in S$37.13 🇪🇸 Amazon Spain - 👉 Get it in 32,19€ 🇸🇪 Amazon Sweden - 👉 Get it in 379,62kr 🇹🇷 Amazon Turkey - 👉 Get it in 1.477,59TL 🇦🇪 Amazon UAE - 👉 Get it in AED 72 (69% off) 🇬🇧 Amazon UK - 👉 Get it in £26.45 🇺🇸 Amazon US - 👉 Get it in $32.95 👉 Rest of the world - 👉 Get it in $32.95 \nBuy Kindle Edition: 🇧🇷 Amazon Brazil 👉 Read it in R$54,69 (23% off) 🇨🇦 Amazon Canada 👉 Read it in $9.99 (52% off) 🇫🇷 Amazon France 👉 Read it in 10,37€ 🇩🇪 Amazon Germany 👉 Read it in €10.52 🇮🇳 Amazon India 👉 Read it in ₹424.21 (44% off) 🇮🇹 Amazon Italy 👉 Read it in 12€ (30% off) 🇯🇵 Amazon Japan 👉 Read it in ¥2,317 (58% off) 🇲🇽 Amazon Mexico 👉 Read it in $170.19 (33% off) 🇳🇱 Amazon Netherlands 👉 Read it in €10,72 (30% off) 🇪🇸 Amazon Spain 👉 Read it in 10,22€ (30% off) 🇬🇧 Amazon UK 👉 Read it in £12 🇺🇸 Amazon US 👉 Read it in $14.95 (55% off) 👉 Rest of the world - 👉 Read it in $14.95 (55% off) \nBuy from BPB Online (Additional 15% discount - Use code \u0026ldquo;bpbkuldeep\u0026rdquo; on checkout)\nBPB India ☞ Paper back - 👉 ₹899 ☞ Kindle - 👉 ₹719 ☞ Paper back + Kindle - 👉 ₹1349\nBPB Worldwide ☞ Paper back - 👉 $32.95 ☞ Kindle - 👉 $14.95 ☞ Paper back + Kindle - 👉 $42.95 \nFree Preview ☞ Free Preview\nVideo Promo More about my books here\n","id":7,"tag":"xr ; ar ; vr ; metaverse ; book ; available now ; event ; buy ; launch ; meta ; blockchain ; NFT ; IOT ; AI","title":"📕 Exploring the Metaverse | Available Globally","type":"post","url":"/post/exploring-the-metaverse-available-globally/"},{"content":"Simplifying IoT learning doesn’t have to mean getting bogged down with hardware details. Instead, let’s dive into Soft IoT — where we focus on software-driven IoT exploration. In this article, we’ll learn how to read input from a button and turn it into a web-based remote control for an LED, all while keeping things software-centric.\nIn the previous article, we introduced Soft IoT using a Grove Pi board, which simplifies IoT learning by hiding wiring and connection complexity. We explored how to program an LED to blink and fade.\nNow, we’ll build upon that foundation by reading inputs and controlling the LED remotely over the internet.\nWhat You’ll Need: Grove Pi board and Raspberry Pi (as before) Grove Button and Grove LED Universal 4-pin cables 🔘 Reading Input from a Button Connecting the Grove button to the Pi is straightforward. 
With the button set up as an input, you can read its state using Python:\nbutton = 4 #D4 Pin grovepi.pinMode(button,\u0026#34;INPUT\u0026#34;) while True: try: print(grovepi.digitalRead(button)) time.sleep(.5) except IOError: print (\u0026#34;Error\u0026#34;) Make sure to update Python and the GrovePi libraries to v1.4 using commands like:\nsudo apt-get update /home/pi/Dexter/GrovePi/Script/update_grovepi.sh /home/pi/Dexter/GrovePi/Firmware/firmware_update.sh With the right firmware, you’ll see your button inputs reflected in real time. Next, we’ll use these inputs for more interactive control.\n🧑🏻‍💻 SSH into Your Raspberry Pi To avoid switching back and forth to the Raspberry Pi desktop, you can use SSH from your laptop (e.g., PuTTY on Windows). Connect using:\n➜ ~ ssh pi@dex.local pi@dex.local\u0026#39;s password: Last login: Thu Oct 17 19:14:09 2024 pi@dex:~ Note: dex.local is the hostname for the Raspberry Pi, and “pi” is the username. The default password is “robots1234” (you may not find it easily on the internet).\n✅ Making the Button a Toggle Switch You can simply write the input read from the button to the LED, and the LED will light up while the button is pressed.\n grovepi.digitalWrite(led, grovepi.digitalRead(button))\n Want to make the button more interactive? Let’s create a program where each press toggles the LED on or off:\nimport time import grovepi led = 6 #D6 button = 4 #D4 grovepi.pinMode(led,\u0026#34;OUTPUT\u0026#34;) grovepi.pinMode(button,\u0026#34;INPUT\u0026#34;) time.sleep(1) button_state = False print (\u0026#34;Press button to switch on/off LED\u0026#34;) while True: try: bInput = grovepi.digitalRead(button) if bInput == 1 : #toggle the state button_state = not button_state grovepi.digitalWrite(led, button_state) print (\u0026#34;LED :\u0026#34;, button_state) time.sleep(0.2) except KeyboardInterrupt: # Turn LED off before stopping grovepi.digitalWrite(led,0) break except IOError: # Print \u0026#34;Error\u0026#34; if communication error encountered print (\u0026#34;Error\u0026#34;) Run the program and you get what you need!\npi@dex:~/Dexter/GrovePi/Software/Python $ python grove_led_button_blink.py Press button to switch on/off LED (\u0026#39;LED :\u0026#39;, True) (\u0026#39;LED :\u0026#39;, False) (\u0026#39;LED :\u0026#39;, True) (\u0026#39;LED :\u0026#39;, False) (\u0026#39;LED :\u0026#39;, True) (\u0026#39;LED :\u0026#39;, False) That’s the power of software: it can transform hardware into something new. Just like that, the Grove button has become a toggle switch. Next, we’ll make it more interactive with the internet.\n🛜 Integrating IoT with the Internet In earlier articles of IoT Practices, we explored the MQTT protocol for IoT and communicating over the internet. MQTT is lightweight and perfect for IoT communication.\nSetting up an MQTT Broker You need a message broker to send and receive messages. We’ll use Eclipse Mosquitto as our broker. If you’re on a Mac (otherwise follow the documentation), you can install it using:\n brew install mosquitto\n Start and stop it as follows:\n➜ ~ brew services stop mosquitto Stopping `mosquitto`... 
(might take a while) ==\u0026gt; Successfully stopped `mosquitto` (label: homebrew.mxcl.mosquitto) ➜ ~ brew services start mosquitto ==\u0026gt; Successfully started `mosquitto` (label: homebrew.mxcl.mosquitto) ➜ ~ Update the Mosquitto configuration to make it accessible from other devices (like the Raspberry Pi):\n➜ ~ ➜ ~ cd /usr/local/opt/mosquitto/etc/mosquitto ➜ mosquitto touch passwdfile ➜ mosquitto chmod 0700 passwdfile ➜ mosquitto mosquitto_passwd passwdfile thinkuldeep Warning: File /usr/local/Cellar/mosquitto/2.0.20/etc/mosquitto/passwdfile1 has world readable permissions. Future versions will refuse to load this file. Password: Reenter password: ➜ mosquitto This adds the user thinkuldeep and their credentials to passwdfile.\nNow add these configurations in /usr/local/opt/mosquitto/etc/mosquitto/mosquitto.conf (or wherever it is located in your installation):\nlistener 1883 0.0.0.0 protocol mqtt allow_anonymous true password_file /usr/local/opt/mosquitto/etc/mosquitto/passwdfile log_type all log_dest file /usr/local/opt/mosquitto/mosquitto.log log_facility 5 Restart Mosquitto, leave it running, and check the logs with “tail -f /usr/local/opt/mosquitto/mosquitto.log”.\nSetting up an MQTT Client Next, we need to set up an MQTT client that can publish messages to a topic and subscribe to receive messages from a topic. We need to set it up on the machine (the Raspberry Pi) from which we publish and subscribe. On the Raspberry Pi, install the Paho MQTT client for Python:\n sudo apt install python3-pip\n pi@dex:~ $ sudo pip3 install paho-mqtt Collecting paho-mqtt Using cached https://www.piwheels.org/simple/paho-mqtt/paho_mqtt-1.6.1-py3-none-any.whl Installing collected packages: paho-mqtt Successfully installed paho-mqtt-1.6.1 Here’s a basic example to connect the Pi to the MQTT broker and send messages:\nimport time import grovepi import paho.mqtt.client as mqtt hostname = \u0026#34;192.200.2.90\u0026#34; #IP OF MY MAC WHERE BROKER IS RUNNING broker_port = 1883 topic = \u0026#34;/command\u0026#34; def on_connect(client, userdata, flags, rc): print(\u0026#34;Connected to MQTT broker with result code: \u0026#34; + str(rc)) client.subscribe(topic) print(\u0026#34;Subscribed to topic :\u0026#34; + topic) def on_message(client, userdata, msg): command = msg.payload.decode() print(\u0026#34;Message received: \u0026#34; + command) client = mqtt.Client() client.on_connect = on_connect client.on_message = on_message print (\u0026#34;Connecting to MQTT broker\u0026#34;) client.username_pw_set(\u0026#34;thinkuldeep\u0026#34;, \u0026#34;mqtt\u0026#34;) #CREDENTIALS FOR USER client.connect(hostname, broker_port, 60) client.loop_start() time.sleep(1) print(\u0026#34;Publishing a message to \u0026#34; + topic) client.publish(topic, \u0026#34;Hello from thinkuldeep\u0026#34;) while True: try: time.sleep(1) except KeyboardInterrupt: client.loop_stop() client.disconnect() break except IOError: # Print \u0026#34;Error\u0026#34; print (\u0026#34;Error\u0026#34;) Run this program on the Raspberry Pi; if you get the following output, then all is set. You are able to connect, subscribe, and communicate with a broker over the internet.\npi@dex:~ $ python mqtt.py Connecting to MQTT broker Connected to MQTT broker with result code: 0 Subscribed to topic :/command Publishing a message to /command Message Received: Hello from thinkuldeep This brings the internet into the Pi. Now let’s make it more interesting by building a web-based remote control.\n🖥️ Making a Web-Based Remote Control Combine everything into a simple web-based remote control. Before wiring up the web page, it’s worth a quick end-to-end check of the broker from your laptop, as in the sketch below. 
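This sanity check is a minimal sketch, not part of the original setup: run on the laptop that hosts Mosquitto, it publishes one message and verifies that the subscription receives it back. It assumes paho-mqtt is installed locally (pip install paho-mqtt) and reuses the thinkuldeep user created above; substitute whatever password you set via mosquitto_passwd.

import time
import paho.mqtt.client as mqtt

received = []

def on_message(client, userdata, msg):
    # Collect whatever arrives on the subscribed topic
    received.append(msg.payload.decode())

client = mqtt.Client()
client.username_pw_set("thinkuldeep", "mqtt")  # credentials set in passwdfile above
client.on_message = on_message
client.connect("localhost", 1883, 60)
client.subscribe("/command")
client.loop_start()
client.publish("/command", "ping")  # we subscribe to the same topic, so the message comes back
time.sleep(1)
client.loop_stop()
client.disconnect()
print("broker round-trip OK" if "ping" in received else "no message received")

If the round-trip succeeds, the broker, credentials, and topic are all working, and any later problem is on the Pi or web side.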
We’ll create a soft button on the web that works in sync with the hardware button.\nConfigure MQTT over WebSockets Just add these configurations in /usr/local/opt/mosquitto/etc/mosquitto/mosquitto.conf\nlistener 8883 protocol websockets socket_domain ipv4 Setup Web MQTT Client Here’s how you can set up an MQTT client on a web page using JavaScript:\n\u0026lt;script src=\u0026#34;https://unpkg.com/mqtt/dist/mqtt.min.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script\u0026gt; console.log(mqtt) const client = mqtt.connect(\u0026#34;ws://localhost:8883/mqtt\u0026#34;) const topic = \u0026#34;/command\u0026#34; client.on(\u0026#39;connect\u0026#39;, function () { console.log(\u0026#39;Connected.\u0026#39;) // Subscribe to a topic client.subscribe(topic, function (err) { console.log(err) if (!err) { // Publish a message to a topic client.publish(topic, \u0026#39;ON\u0026#39;) } }) }) client.on(\u0026#39;message\u0026#39;, function (topic, message) { console.log(message) }) client.on(\u0026#39;error\u0026#39;, function (error) { console.log(error) client.end(); }) \u0026lt;/script\u0026gt; Complete code for publishing and receiving messages is available here.\nIntegrating It All Together Let’s merge the code into a comprehensive program that uses a button press to send commands over MQTT and controls the LED remotely.\nOn the Pi side, here is example code; the complete code is available at grove_led_mqtt_button.py\nled = 6 #D6 button = 4 #D4 grovepi.pinMode(led,\u0026#34;OUTPUT\u0026#34;) grovepi.pinMode(button,\u0026#34;INPUT\u0026#34;) time.sleep(1) button_state = False hostname = \u0026#34;192.168.5.200\u0026#34; #IP or hostname of broker. broker_port = 1883 topic = \u0026#34;/command\u0026#34; def on_connect(client, userdata, flags, rc): print(\u0026#34;Connected to MQTT broker with result code: \u0026#34; + str(rc)) client.subscribe(topic) print(\u0026#34;Subscribed to topic :\u0026#34; + topic) def on_message(client, userdata, msg): command = msg.payload.decode() print(\u0026#34;Message received: \u0026#34; + command) if command == \u0026#34;ON\u0026#34; : button_state = True grovepi.digitalWrite(led, button_state) client.publish(topic, \u0026#34;ON_OK\u0026#34;) elif command == \u0026#34;OFF\u0026#34; : button_state = False grovepi.digitalWrite(led, button_state) client.publish(topic, \u0026#34;OFF_OK\u0026#34;) client = mqtt.Client() client.on_connect = on_connect client.on_message = on_message print(\u0026#34;Connecting to MQTT broker\u0026#34;) client.username_pw_set(\u0026#34;thinkuldeep\u0026#34;, \u0026#34;mdqtdt\u0026#34;) # user credentials client.connect(hostname, broker_port, 60) client.loop_start() time.sleep(1) client.publish(topic, \u0026#34;OFF_OK\u0026#34;) print (\u0026#34;Press button to switch on/off LED\u0026#34;) while True: try: bInput = grovepi.digitalRead(button) if bInput == 1 : button_state = not button_state if button_state : client.publish(topic, \u0026#34;ON\u0026#34;) else : client.publish(topic, \u0026#34;OFF\u0026#34;) time.sleep(0.2) except KeyboardInterrupt: # Turn LED off before stopping digitalWrite(led,0) client.loop_stop() client.disconnect() break except IOError: # Print \u0026#34;Error\u0026#34; if communication error encountered print (\u0026#34;Error\u0026#34;) Note: The code presented here is sample code only and does not follow production coding practices; don’t use it as-is in production.\nRun it now Run the Python script on the Raspberry Pi. Open the page https://thinkuldeep.com/mqtt-remote/, or check out https://github.com/thinkuldeep/mqtt-remote and open index.html. 
Make sure the connection URL and topic are correct, then click “Setup”. Once done, have fun! Conclusion By following this guide, you’ve learned how to read input from buttons, control hardware remotely, and connect your IoT devices to the internet — all using a simplified approach. This “Soft IoT” method enables rapid learning without the hurdles of complex wiring and hardware setups. Stay tuned as we explore more ways to harness the power of IoT!\nFeel free to explore the complete code here and start creating your own IoT projects!\nIn Chapter 4 of my book 📕Exploring the Metaverse, I discuss how IoT will shape our connected future, and XR devices are the evolved IoT.\n☞ Click here for availability and discounts in your region\n☞ Read: Exploring the Metaverse - Synopsis\nStay tuned for more! Keep learning, keep experimenting, and keep sharing.\nFind the IoT Practices Publication for more details.\n","id":8,"tag":"IOT ; raspberry pi ; rebooting ; cloud ; getting started ; learning ; technology ; fundamentals","title":"Soft IoT Continued: Bringing the Internet into IoT","type":"post","url":"/post/soft-iot-continued/"},{"content":"MQTT-Remote This is a web-based remote control for an LED. It is a soft button on the web that works in sync with a hardware button.\n ☞ Live Demo ☞ Github ☞ Learn More\nModelViewer WebXR and 3D Printing Demonstrations\n ☞ Live Demo ☞ Github ☞ Learn WebXR\n3D Book Add pages to the book, and turn them like real pages.\n☞ Github ☞ Book launch - Five thoughtful years\nArium — An Automation framework for Unity/XR Arium is a lightweight, extensible, platform-agnostic framework for writing automation tests in Unity, built specifically for XR (AR/VR) applications but not limited to them. I am part of the core development team and a reviewer for this.\n☞ GitHub ☞ Medium Article ☞ Thoughtworks Insights\nDecoAR — Decorate the world with XR DecoAR is an app for extending decoration at home, the office, or any place. It tracks different images around the home and overlays content on top of them when seen through smart glasses.\n ☞ More details ☞ Github ☞ Medium\nPair Generator - Conducting feedback made easy This generates pairs to organise 1-1 feedback sessions/meetings in parallel for a team. It has saved manual effort and enabled us to quickly organise feedback sessions. ☞ Try it Live ☞ Github\nWhy Worry - Life is simple, we make it complex Anxiety and worrying about problems will not solve anything; try this app, it will reduce your anxiety. ☞ Try it Live ☞ Github ☞ Read more about balancing life\nImage-evaluator - online assessment tool. This app can evaluate images online with simple JavaScript. 
☞ Try it Live ☞ Github\n","id":9,"tag":"me ; profile ; thoughtworks ; open-source ; arium ; practice ; motivational ; learnings ; life ; experience ; about","title":"Open-sources and Apps","type":"about","url":"/about/open-sources/"},{"content":" Defining the AI Powered Virtual World Gatherverse Virtual World Symposium 2024 - ☞ More Details India\u0026rsquo;s Metaverse Journey: Opportunities and Challenges XROM Podcast - ☞ More Details Global Insights from XR Leaders in India: AI-Powered Real-Time Adaptation in XR AWE Asia 2024 - ☞ More Details Harnessing Business Opportunity with AI Powered XR Express Computer Webinar - ☞ More Details Azure AI Fostering Sense within XR World Season of AI - Azure Developer Community - Rekill - ☞ More Details Evolving into Tomorrow, Today: Enter the Metaverse Gatherverse XREvolve Summit 2024 - ☞ More Details Exploring the Metaverse: Redefining the reality in the digital age Book promo - ☞ More Details XR in practice: the engineering challenges of extending reality Thoughtworks Podcast - ☞ More Details AI Assisted Content Pipeline for XR Thoughtworks XConf 2022 - ☞ More Details Using XR in building autonomous vehicles Thoughtworks MasterClass-2021 - ☞ More Details \u0026nbsp; The Shift from Enterprise XR to Ethical XR Thoughtworks XConf 2021 - ☞ More Details Navigating the new normal The VRARA - ☞ More Details eXtended Reality for Enterprise XROM Podcast - ☞ More Details XR - Nicety to Necessity Techgig Webinars - ☞ More Details How to get started with AR VR? Google Developer Student Club - ☞ More Details SLAM Aware Applications ThoughtWorks Away Day 2019 - ☞ More Details ","id":10,"tag":"me ; profile ; thoughtworks ; xr ; moments ; video ; ai ; learnings ; ar ; experience ; vr ; mr ; ethics ; xconf ; vrara ; xrom ; iot ; about ; google ; techgig ; metaverse ; awe ; gatherverse ; live-streaming","title":"Livestreaming","type":"about","url":"/about/streaming/"},{"content":"📢 Virtual World Symposium: Join us as we dive deep into the future of AI-powered virtual worlds:\n What are they? Where are they headed? And why is it crucial to discuss these transformative spaces today?\n This symposium will also unpack spatial intelligence and its role in defining and enhancing immersive environments. As we step into a new era of digital interaction, this event will provide insights into the challenges and opportunities that virtual worlds bring to society.\n🎯 What to Expect: Gain a comprehensive understanding of the critical role virtual worlds play in redefining human interaction, community building, and experiential learning.\nI\u0026rsquo;m excited to be part of the Roundtable Panel Discussion on \u0026ldquo;Defining the AI-Powered Virtual World\u0026rdquo; alongside industry leaders and innovators!\n 🗓 Session Details Date \u0026amp; Time: October 15, 2024, 11:05 AM PST (11:35 PM IST) Roundtable Panel: Defining the AI-Powered Virtual World Moderator: Daniel Andreev - Co-founder, PsyTech VR Panelists: Lisi Linares - Head of XR Media Kuldeep Singh - Me Liam Broza - CTO, Infinite Reality Claire Weingarten - Founder, Living Machines Media Vyoma Gajjar - AI Solutions Architect, IBM 📅 Reserve Your Seat! 
Schedule: 3-Day Event Schedule Speakers: Meet the speakers Reserve Your Seat: Register on Eventbrite 🎥 Live Streaming Day 1 - My Session (3:15:50 onwards)\n Day 2:\n 📚 Related to My Book \u0026ldquo;Exploring the Metaverse\u0026rdquo; This event closely aligns with the themes of my book, \u0026ldquo;Exploring the Metaverse\u0026rdquo;, where I delve into the emerging metaverse and the AI-powered technologies that drive it. Available globally on Amazon and BPB Online in both paperback and eBook formats, you can access a free preview of the book here.\n☞ Click here for availability and discounts in your region\nLooking forward to an engaging session and the discussions it sparks!\n","id":11,"tag":"gatherverse ; event ; speaker ; xr ; talk ; takeaways ; standards ; policy ; metaverse ; ar ; vr ; iot ; web3 ; AI ; future ; opensource ; summit ; live-streaming","title":"Speaker - Gatherverse Virtual World Symposium 2024","type":"event","url":"/event/gatherverse-virtual-world-symposium/"},{"content":"AI is transforming the software development process, but there are many misconceptions about its capabilities. This article explores the common myths surrounding AI in software development and sheds light on its reality.\nIn recent times, Artificial Intelligence (AI) has gained momentum across various fields, especially with the introduction of Generative AI (GenAI). AI is now widely accessible, often sparking discussions about its potential, its misuse, and, unfortunately, several myths. As the hype around AI intensifies, so does the Fear of Missing Out (FOMO), leading many to adopt it without fully understanding its true value.\nSoftware development, like many industries, has evolved over time and now faces the integration of AI into its processes, practices, and standards. At Thoughtworks, we are scaling AI-assisted software development, equipping teams with practical insights. Here are my personal takeaways and lessons from practicing AI-assisted software development:\nMyth 1: AI will replace software development teams ❌ A prevalent misconception is that AI will eliminate the need for developers, business analysts, quality analysts, experience designers, and other related roles. Many fear losing their jobs to AI.\n✅ Reality: AI is far from replacing human intelligence. The idea of achieving Artificial General Intelligence (AGI) or superintelligence is still largely speculative. While AI will not replace humans, those who embrace and effectively use AI will outpace those who don’t. AI should be viewed as a powerful assistant. It can help with tasks like writing business specifications from meeting notes, fast-tracking user stories, and even generating code. However, human oversight is crucial to ensure the quality and appropriateness of the output.\nYes, there are tools that allow users to create web dashboards or mobile apps from textual prompts, but this does not negate the need for developers. Developers are still essential for guiding AI, refining its output, and, importantly, creating the platforms that AI relies on.\nRef: Haiven™ team assistant | Thoughtworks\nMyth 2: AI can handle all software development tasks ❌ Another myth is that since AI can code, it can handle the bulk of software development.\n✅ Reality: Coding constitutes less than half of the entire software development process. While AI may speed up coding (by around 30–50%), it may solve about 50–60% of development challenges. AI assists not just in coding but also in writing user stories, acceptance criteria, and more. 
However, the process still requires human input for tasks such as requirement agreements, evolutionary architectural design, refactoring, test-driven development, and quality assurance.\nIn my experience, tools like GitHub Copilot Chat are incredibly useful. For instance, Copilot can generate a method to calculate the total price of an order or suggest test cases in less than a second.\n/* Calculate order total price. */ public Double calculateTotalPrice(final Order order) { return order.getLineItems().stream() .mapToDouble(lineItem -\u0026gt; lineItem.getProduct().getPrice() * lineItem.getQuantity()) .sum(); } There are some great tools, like Tabnine, Cursor, and CodeSense, that can provide a better experience, but developers must ensure that the generated code meets security, performance, and coding standards. AI can assist, but it doesn’t replace critical development practices like Test-Driven Development (TDD) or refactoring; these practices do, however, need to evolve.\n Myth 3: AI-generated software is future-proof ❌ Some believe that AI-generated software will be more advanced and better equipped for the future than software built with traditional methods.\n✅ Reality: AI is trained on historical data, meaning it replicates both the good and bad patterns of the past. AI-generated artifacts still require validation and must pass through quality gates. While AI can assist in creating solutions, human oversight remains essential to ensure innovation and to avoid issues like AI “inbreeding,” where AI begins learning from its own outputs, potentially leading to hallucinations and suboptimal results. AI is built on historical knowledge and must be guided to innovate effectively.\nMyth 4: AI-assisted software development is just about learning tools ❌ There’s a myth that mastering AI-assisted software development simply means learning AI tools.\n✅ Reality: AI-assisted development represents a paradigm shift. It’s not just about using tools but also about adapting how we think and communicate with AI. Since AI tools rely heavily on natural language inputs, developers must refine their ability to write concise, clear, and expressive natural language prompts to get the desired outcome from AI. They need to be great storytellers before programmers.\nAnother key area is defining success metrics. Traditional productivity measures may no longer apply in the AI-assisted world. Teams need to clarify which tasks AI can handle and where human effort is still needed.\nMyth 5: “Prompt Engineering” will become a job role ❌ With the rise of GenAI, some predicted that “prompt engineering” would become a dedicated job role.\n✅ Reality: While prompt engineering is a valuable skill, it’s unlikely to become a standalone career. User interfaces and tools will continue to evolve, simplifying the process of interacting with AI, just as no one has a career solely based on searching Google. The more familiar you are with a tool, the better you become at using it effectively. Similarly, prompting is about practice. As AI tools improve, prompting will become second nature.\n In conclusion, AI-assisted software development is not about replacing developers/makers but empowering them to work more efficiently. It’s important to understand that AI is a tool to assist, not a magic solution that can do everything. 
As we continue to integrate AI into software development, the focus will be on finding the right balance between human ingenuity and AI efficiency.\nA variation of this article was originally published on Medium\n","id":12,"tag":"evolution ; ai ; process ; technology ; thoughtworks ; insights ; agile ; software ; development ; myths ; realities","title":"Myths and Realities of AI-Assisted Software Development","type":"post","url":"/post/ai-assisted-software-development-myths/"},{"content":"Learning IoT doesn’t have to be complex, especially when starting out. In this post, we’ll explore how to get started with IoT without diving deep into hardware complexities like wiring or understanding intricate sensor details.\nIn an earlier article, we set up a Raspberry Pi. My son initially saw it as just another, albeit slower, computer. There was no “IoT” in it yet, just a basic device. But IoT is about more than just the device itself; it’s about integrating sensors and actuators.\nIn this article, we’ll take the first step to transform the Raspberry Pi into a functional IoT device — using the Grove Pi+ board to add sensors and actuators — without getting bogged down by hardware details. I call it Soft IoT — focusing on the software side of things. We’ll delve into the hardware later.\nWhat You’ll Need: In addition to what we used in the last article, you’ll need:\n Grove Pi+ Board (more details here) Grove LED and Universal 4-pin cable (can be found here) ⚙️ Setting up the Grove Pi+ Board If you have already set up your Raspberry Pi with an OS as mentioned earlier, you may choose to manually configure the firmware and software for the Grove Pi. ❌ But I don’t recommend that, as it can be tedious and prone to errors (I faced many issues with my older Pi, and gave up later!).\n✅ It is better to use a pre-built OS image provided by Dexter Industries, the makers of the Grove Pi+ board. This simplifies the process — just download the image from here, use the Raspberry Pi Imager and choose the downloaded zip in the Operating System section to write it to the SD card, and you’re ready to go with the new OS image. Insert the SD card into your Raspberry Pi and boot up the new OS.\n🎮 Connecting the Grove Pi+ Board Once the OS is ready, everything is pre-installed. Now, it’s time to connect the board to the Raspberry Pi. Align the Grove Pi+ board’s pins with the Raspberry Pi’s, ensuring the pins match the right corners. The Grove Pi+ board has multiple ports (D1 to D7 for digital input/output and A1 to A3 for analog input/output).\nNext, connect the Grove LED to one of the digital sockets (for instance, D4). Power up the Raspberry Pi, and you should see a green light on the Grove Pi+ board, indicating that it’s ready for use.\n🧑🏻‍💻 Writing Your First Program Now, we’ll write a simple program to control the LED and introduce you to Soft IoT.\nThere are already several sample programs pre-loaded, so we don’t need to reinvent the wheel. These programs simplify hardware connections and interactions by using pre-built libraries. The programs allow us to easily read from or write to specific ports, and they wrap Arduino functions that handle communication between the Raspberry Pi and Grove sensors.\nLet’s understand a basic program that makes the LED blink continuously. It is available at /home/pi/Dexter/GrovePI/Software/Python/grove_led_blink.py\n Select the Port and Mode — For this program, we use a digital port to power the LED ON and OFF.
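For reference, the top of the stock script looks roughly like this (a sketch of mine; the preinstalled grove_led_blink.py may differ slightly in its details):

import time      # for the delays between ON and OFF
import grovepi   # Dexter's library, preinstalled on the GrovePi OS image

pin = 4  # Grove LED plugged into digital port D4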
The method below sets the pin mode to OUTPUT:\ngrovepi.pinMode(pin, "OUTPUT") Make sure you connect the Grove LED to the corresponding port. If it is connected to D4, then use pin = 4.\n Power ON the LED — Power ON the LED by writing a positive non-zero value to the selected port.\ngrovepi.digitalWrite(pin,1) Similarly, write the value zero to power OFF the LED:\ngrovepi.digitalWrite(pin,0) Blinking LED — To blink an LED, we turn it ON and OFF in a loop.\n while True:
    grovepi.digitalWrite(pin, 1)  # Send HIGH to switch on LED
    print("LED ON!")
    time.sleep(1)
    grovepi.digitalWrite(pin, 0)  # Send LOW to switch off LED
    print("LED OFF!")
    time.sleep(1)
That’s it; the program also includes a bit of error handling and logging.\n💡 Running a program You may browse and run the program in the developer tools that come with Raspberry Pi OS (e.g., Main Menu > Programming > Thonny Python IDE), or just run the program from the terminal as follows:\npi@dex:~ $ cd Dexter/GrovePi/Software/Python/
pi@dex:~/Dexter/GrovePi/Software/Python/ $ python grove_led_blink.py
This example will blink a Grove LED connected to the GrovePi+ on the port D4. If you're having trouble seeing the LED blink, be sure to check the LED connection and the port number. You may also try reversing the direction of the LED on the sensor.
Connect the LED to the D4 port!
LED ON!
LED OFF!
LED ON!
LED OFF!
LED ON!
LED OFF!
LED ON!
LED OFF!
LED ON!
LED OFF!
For fun, try experimenting with the intervals or testing other programs like grove_led_fade.py, which asks you to connect to a port that supports Pulse Width Modulation (PWM); analog writing allows for more complex control, such as fading the LED.\n⏭️ What’s Next? This is just the beginning of your IoT journey. The Grove Pi+ significantly simplifies the complexity of interfacing with hardware and firmware by providing a range of plug-and-play sensors. From weather stations and alarms to rain sensors and more, Grove Pi+ makes it easy for beginners and enthusiasts to dive into IoT without getting overwhelmed by technical details.\nNote that IoT is much more than just connecting sensors. To create an end-to-end IoT solution, you’ll need to integrate these devices with the cloud, edge computing, and advanced technologies like AI and XR. Ref: Touchless IoT by Sri Vishnu S\nIn future articles, we’ll explore how IoT connects with the broader internet ecosystem, how its practices are shaping up, and its role in the emerging metaverse. In Chapter 4 of my book 📕 Exploring the Metaverse, I discuss how IoT will shape our connected future, and how XR devices are the evolved IoT.\n☞ Click here for availability and discounts in your region\n☞ Read: Exploring the Metaverse - Synopsis\nStay tuned for more! Keep learning, keep experimenting, and keep sharing.\nFind the IoT Practices Publication for more details.\n","id":13,"tag":"IOT ; raspberry pi ; rebooting ; cloud ; getting started ; learning ; technology ; fundamentals","title":"Getting Started with Soft IoT","type":"post","url":"/post/soft-iot/"},{"content":"🚀 My Book, Exploring the Metaverse, is now available in stores worldwide, including Amazon and BPB Online. It features forewords and testimonials from industry pioneers, and it aims to uncover the concept of the Metaverse, offering insights into its technological and business implications.\nOne of the incredible testimonials comes from Vanya Seth, Head of Technology at Thoughtworks.
Vanya is a visionary tech leader, an active speaker in various tech communities, and a fellow participant in the XROS Fellowship program alongside me. We also had the opportunity to share space at XConf earlier.\n🙏 A heartfelt thanks to Vanya for providing invaluable feedback on the early versions of the book, and for penning this thoughtful foreword:\n \u0026ldquo;There are many misconceptions and myths associated with the word “Metaverse”. Touted as the window to the dystopian world, this is one concept that remains largely misunderstood. This book in my mind addresses this challenge. It takes a logical view on technological advancements and how the human relationship with it has evolved over time. It provides a comprehensive understanding of Metaverse as a holistic concept, its association with various technologies and most importantly the business case for why it makes sense. – Vanya Seth Head of Technology, Thoughtworks India and Middle East\n 🛍️ Grab Your Copy Today! Exploring the Metaverse is available on Amazon and BPB Online in both paperback and eBook formats. Get a sneak peek of the book and explore a free preview here.\n☞ Click here for availability and discounts in your region\n☞ Read: Exploring the Metaverse - Synopsis\nDiscover more about my books here.\n","id":14,"tag":"xr ; ar ; vr ; metaverse ; book ; available now ; event ; buy ; launch ; meta ; blockchain ; NFT ; IOT ; AI ; thoughtworks ; event ; MR ; testimonial ; xros","title":"🙏 Exploring the Metaverse | Testimonial 7","type":"post","url":"/post/exploring-the-metaverse-testimonial-7/"},{"content":"Exciting conversation on the future of tech with Eddie Avil, where we dive into my journey of writing Exploring the Metaverse, discuss the opportunities and challenges of the digital age, and even touch on mind-bending concepts like Programmable Dreams and Time Travel. Are we ready for what’s coming?\nXROM Podcast 🎙️ XROM PODCast, hosted by serial entrepreneur and visionary Eddie Avil, is an incredible YouTube channel showcasing conversations with industry leaders in emerging tech. It\u0026rsquo;s always great collaborating with Eddie on various occasions. Previously, we discussed eXtended Reality for Enterprise and Generative XR – Space-aware AI, and we recently shared insights at AWE ASIA 2024.\nPodcast: Exploring the Metaverse Key Topics Covered:\n My Technology Journey – Establishing Centres of Excellence for IoT and XR Opportunities Ahead – From Connected Workers and Digital Twins to advancements in AI, cloud-native tech, and how they shape our future Challenges – The current state of tech adoption in India and globally, and India\u0026rsquo;s Metaverse potential Book Insights – Highlights from Exploring The Metaverse: Redefining Reality in the Digital Age 👉 Watch Now to explore Virtual Reality’s future, India’s tech evolution, and how businesses can adapt to the Metaverse revolution!\n 📚 Exploring the Metaverse Available globally on Amazon and BPB Online as both paperback and eBook. Access a free preview of the book here.\n\n ","id":15,"tag":"xr ; ar ; vr ; interview ; event ; xrom ; thoughtworks ; AI ; 5G ; practices ; iot ; expert insights ; GenAI ; 3d ; technology ; xr content ; genXR ; metaverse ; book ; available now ; launch","title":"India's Metaverse Journey - XROM Podcast","type":"event","url":"/event/xrom-podcast-exploring-the-metaverse/"},{"content":"A few years ago, I bought a Raspberry Pi 3, but during a house move, I misplaced it. Fast forward to this Diwali cleaning, and to my surprise, I found it! 
My son’s curiosity was piqued, so we decided to set it up together. While my recent focus has been on XR technologies (we set up XRPractices), this moment brought me back to IoT — sparking the idea to share my journey and experiences through a series of articles on getting started with IoT.\nSo why this article when there’s plenty of good documentation for this topic out there? Well, while I used the very first version of Raspberry Pi years ago, it was pre-configured for me. This time, I wanted to start from scratch and share the steps for anyone wanting to dive into IoT. Here’s a quick guide to get you up and running faster with Raspberry Pi and IoT experimentation.\nWhat You Need: MicroUSB Power Cable (no USB-C for me, since my Pi is a bit older) and a 5V adapter. 16GB or larger microSD card and a card reader. A USB storage device may also work with some versions, after a bit of tweaking. A standard HDMI-compatible monitor, mouse, and keyboard. Patience — especially if you’re used to faster PCs! Get the OS Image Ready First, powering up your Raspberry Pi requires more than just plugging it in. You’ll need an operating system on a microSD card. So, the first task is to prepare the OS.\n Make sure your PC can read a microSD card. Download and install Raspberry Pi Imager on a PC. Insert the microSD card, select ‘Raspberry Pi 3’ as the device, choose the default OS, and configure settings like username, password, WiFi credentials, or even SSH. I went ahead with the default settings. Within a few minutes, the OS image was ready on my microSD card.\nBooting Up the Raspberry Pi Now, insert the microSD card, plug the monitor into the HDMI port, and connect the keyboard and mouse to the USB slots. Finally, power it up via the microUSB “PWR” port.\nVoila! Your Raspberry Pi desktop should boot up. The first-time setup will guide you through language, region, and keyboard settings. Once done, you’ll be greeted with a simple desktop environment, complete with pre-installed software like Firefox, Chromium, and Terminal. Browse around to get familiar with the interface — but remember, it won’t be as fast as your usual PC.\nRemote Access or Not? If you want to avoid cluttering your desk with an extra keyboard, mouse, and monitor, you can set up Raspberry Pi Connect to access the Pi remotely from your laptop. After creating a Raspberry Pi ID and signing in on your Pi, you should be able to access it remotely.\nHowever, in my case, I was only able to get shell access — full remote access didn’t work as expected, likely because the feature is still in beta. So for now, it’s better to have dedicated peripherals for the Pi if you want a smoother experience.\nWhat’s Next? While setting up the Raspberry Pi was fun for my son, he wasn’t impressed with its performance compared to his regular PC. He questioned the need for this “new PC” when it wasn’t exactly fast and would occupy more desk space.\nBut the fun is just beginning! In the same box where we found the Raspberry Pi, there was a GrovePi board. This will be my chance to show him the exciting potential of the Pi with sensors and real-world IoT projects. However, I’ll need to order a few cables and sensors first.\nStay tuned for the next article, where we’ll explore some basic use cases for the Raspberry Pi with GrovePi!
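As a small teaser of what that will look like, here is a minimal sketch (my own, assuming Dexter's grovepi Python library and a Grove LED on a PWM-capable port such as D5) that fades an LED instead of merely blinking it:

import time
import grovepi  # Dexter's GrovePi library; setup is covered in the next article

led = 5  # D3, D5, and D6 on the GrovePi+ support PWM
grovepi.pinMode(led, "OUTPUT")

while True:
    # Ramp brightness up and then back down; analogWrite accepts 0-255.
    for level in list(range(0, 256, 5)) + list(range(255, -1, -5)):
        grovepi.analogWrite(led, level)
        time.sleep(0.05)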
Slowly, we will go deeper into IoT and edge computing.\nFind the IoT Practices Publication for more details.\n","id":16,"tag":"IOT ; raspberry pi ; rebooting ; cloud ; getting started ; learning ; technology ; fundamentals","title":"Booting Up the Internet of Things","type":"post","url":"/post/iot-rebooting/"},{"content":"I am excited to share that my article explaining my book has been featured in the recent September edition of the XTIC Chronicle, a quarterly newsletter published by XTIC (Experiential Technology Innovation Centre) at IIT Madras. As India’s first research and product innovation center dedicated to Virtual Reality, Augmented Reality, Mixed Reality, and Haptics, XTIC provides a platform for cutting-edge insights from academia, industry, startups, and CAVE members.\nXTIC Chronicle Overview The XTIC Chronicle is a valuable resource for all levels of readers interested in the latest advancements in XR technology. It regularly features contributions from experts, making it a go-to source of information on the evolution of experiential technologies.\nFeatured Article and Edition Theme The theme for this edition is “Human Wellbeing”, exploring the critical importance of prioritizing humanity in the era of AI, XR, and 5G/6G. It delves into how these technologies can be used responsibly to ensure that humans remain at the center of technological evolution. My book’s content aligns well with this theme, emphasizing the need for a human-centric approach in our journey toward the future of technology.\nHere is the content that was featured in the newsletter:\n Exploring the Metaverse: A Balanced Look at the Next Internet\nThe term “metaverse” has become a buzzword, encompassing everything from virtual game worlds to the next iteration of the internet. While some may dismiss it as hype, immersive technologies like XR are quietly integrating into our daily lives through social media, virtual collaboration, and digital experiences. Computer vision and AI are constantly blurring the lines between the physical and virtual worlds.\nIn this dynamic landscape, understanding our role in adapting to new technologies is critical. Recognizing this, I wrote 📕 Exploring the Metaverse: Redefining Reality in the Digital Age to provide insights that separate inflated expectations from genuine opportunities. The book equips you with a balanced and informed perspective on the promises and challenges of the metaverse.\nExploring the metaverse in five parts:\n Defining the metaverse: Delve into the history of digital revolutions and the origin of the metaverse. We’ll explore various definitions, expert perspectives, and address common misconceptions to arrive at a well-rounded understanding. Building blocks of the metaverse: Explore the technological foundation of the metaverse, including XR, IoT, Cloud, Blockchain, and advancements in AI. You’ll gain insight into how these advancements collectively pave the way for the metaverse. The metaverse in action: Explore hundreds of use cases of real-world applications spanning diverse industries. From revolutionizing gaming & entertainment to fostering virtual social interactions, transforming travel experiences, enhancing fitness routines, and revolutionizing healthcare practices. Discover how the metaverse is reshaping retail and commerce by reimagining property and assets.
Moreover, witness the enterprise metaverse’s capacity to enhance skilling and reskilling efforts, while ensuring greater compliance with safety standards. Metaverse concerns: While the Metaverse offers exciting possibilities, it also presents potential pitfalls. This section addresses concerns regarding identity protection, privacy, and sustainability, ensuring readers are aware of potential risks. Understand the metaverse solution ecosystem, technologies, and tools. Focus on shaping up standards and practices. Learn about the ongoing collaborative efforts to strengthen the technology while fostering awareness and responsible adoption among academia, industry, governing bodies, and the public. Join the Conversation: “Exploring the Metaverse” concludes with a call to action, urging collaboration and responsible engagement from all stakeholders. The book empowers you to be part of shaping the future of technology, and the next internet.\nAvailable globally on Amazon and BPB Online as both paperback and eBook. Access a free preview of the book here.\nBuy here: Click here to know discounts and check availability in your region\nXTIC Chronicle - Sep 24 Have a look at the newsletter here: Download PDF.\n For more details and earlier newsletters, see here.\nFollow XTIC and Join CAVE I extend my sincere thanks to the XTIC and CAVE teams for the opportunity to contribute to this valuable publication.\n Join CAVE (Consortium for AR VR Engineering): Google Form\n","id":17,"tag":"xr ; ar ; vr ; metaverse ; IIT ; XTIC ; blockchain ; NFT ; IOT ; AI ; openxr ; CAVE ; thoughtworks ; opensource ; future ; book ; available now","title":"Featured in XTIC Chronicle, IIT Madras Newsletter","type":"post","url":"/post/xtic-newsletter-sep-2024/"},{"content":"In our day-to-day work, we often create methods, processes, approaches, or features to address specific needs and challenges. Sometimes, these solutions turn into significant innovations, even if we don’t immediately recognize them as such. This is also true in our professional work setup, where we may invent without fully realizing it.\nI’m thrilled to share that a new patent has been granted to Lenovo for their groundbreaking work in the ThinkReality XR product line, and I am honored to be a co-inventor on this work.\nPatent No: US 12,081,729 B2 This patent, titled “Square orientation for presentation of content stereoscopically”, introduces a square orientation for rendering 2D phone apps into a stereoscopic/3D environment. The square orientation nullifies the impact of the phone’s orientation, and renders phone apps smoothly.\nThis invention was developed for Lenovo’s award-winning mixed reality smart glasses, the ThinkReality A3.\n This innovation allows enterprises to use phone applications on XR devices and interact with them in new, innovative ways without altering the 2D phone apps. Part of this solution is also available in the ThinkReality VRX, which supports various enterprise XR features such as Device Enrollment to Intune and VMWare.\nInventors: Kuldeep Singh, Raju Kandaswamy and Poorna Prasad
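To build some intuition for the idea (a toy illustration of mine, not the patent's actual method): if the app is always rendered into a square canvas whose side is the larger of the app's width and height, the texture bounds never change when the phone rotates, so the stereoscopic renderer is unaffected by orientation.

def square_canvas_side(app_width: int, app_height: int) -> int:
    # The square side accommodates the app in both portrait and landscape.
    return max(app_width, app_height)

# Portrait and landscape yield the same render-target size, so a rotation
# never forces the 3D scene to resize the texture it is displaying.
assert square_canvas_side(1080, 2340) == square_canvas_side(2340, 1080) == 2340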
Download PDF.\n ☞ USPTO #17/812,987 ☞ JUSTIA #20240022703 ☞ Granted #12081729\nAlso covered by the NIT Kurukshetra Alumni Association\nRefer here for more of my patents\n","id":18,"tag":"about ; invention ; me ; profile ; xr ; ar ; vr ; mr ; patents ; Lenovo ; ThinkReality ; Android ; Metaverrse ; design ; user experience ; nit","title":"Patent Granted: Square orientation for stereoscopic rendering","type":"post","url":"/post/patent-granted-xr-2/"},{"content":"Thoughtworks has always amazed me—whether it’s the pioneering work we do in the industry or the contributions we make to software engineering practices. What truly inspires me is our people-centered approach. Recently, I was honored by our market leadership, who put me in the spotlight to share my journey with fellow Thoughtworkers. While I’ve learned a great deal from my colleagues, I took this opportunity to share my own experiences in the hope of inspiring others.\nWhen you see me in the spotlight, you might notice the books I’ve written, the patents I’ve secured, or my two decades of experience—filled with ups and downs. I’ve gone from growing up in a small town to living in a metropolis, and from Hindi-medium schooling to becoming an international speaker. But when I reflect on my journey, I realize it has always been about going with the flow.\n💨 Going with the Flow Throughout my journey, I have lived by a few guiding principles.\nYou will find more of these in my next book, “Jagjivan: Living Larger Than Life”, based on my great grandfather’s life. I believe he lived life to the fullest, and we can learn a lot from him.\n📍 31ML: My Roots I was born in the small village of 31ML, near the border in Rajasthan. Growing up in a farming family, I was surrounded by the repair of machines and tractors, and the hands-on skills of family members, such as masonry, carpentry, tailoring, blacksmithing, and electrical work. I picked up these skills, learning from my great-grandfather’s wisdom:\n “Skills never go to waste.”\n My grandfather made me run a stationery shop to sharpen my math skills and work at a painter’s shop to develop my creativity. This early exposure to diverse skills stayed with me throughout my journey.\n🥇 The First in My Family I moved to a nearby town, Raisingh Nagar, for further education and became the first in my family to pass secondary school with distinction in science and mathematics. My dream was to pursue either engineering or medicine, swayed at first by a biology teacher, then later by a math teacher.\nAfter excelling in senior secondary school as the city’s top student, I was unsure of what path to take next. But the skills I had developed came in handy—even helping my father in building our family’s first home.\n💰 The Influence of Cost Being flexible and going with the flow also meant adapting to the realities of life. With two younger sisters and limited financial resources, I couldn’t burden my family with the cost of an engineering degree. So, I opted to pursue a B.Sc. at a government college in 📍Sri Ganganagar.\nAlthough I was a college topper, it was a friend of mine, who had barely passed 12th grade, who got into the spotlight ⚡️ after clearing the Rajasthan Pre-Engineering Test (RPET).
That motivated me to crack the engineering entrance exam.\nThough I knew little about IITs or RECs, I chose an affordable prep course at 📍Kota, Rajasthan, and there I found myself surrounded by students with much higher scores. When the first exam results came out, I couldn’t find my name in the list—until someone pointed out that my picture was on the very first page, among the toppers!\nIt made me believe that I could well survive there :). Interestingly, I celebrated my birthday with a cake 🎂 for the very first time, and tasted my first burger 🍔 there.\nI secured a good rank and was eligible for most Regional Engineering Colleges (RECs), but I wanted to choose the nearest government college for financial reasons. On the guidance of seniors, I settled on Mechanical Engineering at REC Jaipur—until destiny had other plans.\n🎨 Skills that Shape Growth Around that time, Bill Gates’ famous aid in Rajasthan led to increased seats and a new IT branch, causing a re-counseling of our batch. A short stay at Jaipur taught me a good deal about engineering and India-wide culture, and I switched to Computer Engineering at 📍REC Kurukshetra. Interestingly, I hadn’t seen a computer till this point 🙈.\nTransitioning from Hindi to English-medium education was challenging, but I leveraged my skills in arts 🖌️ and social involvement. I became the secretary of the Fine Arts Club and head of the decoration committee at various events. I also led the National Service Scheme (NSS), organizing blood donation camps and cleanliness drives.\nThese leadership experiences helped me grow, and I graduated with honors and a high-paying job, putting myself, my family, and people around me on the path to growth.\nThe Next 20 Years To learn more about the next two decades of my journey, check out my other article, where I share insights on collective growth and the continuous flow of learning and leadership. My learning has been sharpened through these years, leading me to write over 120 articles and participate in more than 60 events. Learn how I impacted over 50,000 individuals through these endeavors. Learn about my books, patents, and more.\nKeep learning! Keep sharing!\n","id":19,"tag":"communication ; motivational ; consulting ; Journey ; event ; learnings ; takeaways ; talks ; life ; tips ; writing ; reading ; general ; selfhelp ; change ; habit ; leadership ; java ; javaone ; nagarro ; impact ; Covid19 ; opportunity ; thoughtworks ; story","title":"TW Spotlight: Going with the Flow – My Journey","type":"event","url":"/event/tw-spotlight-2024/"},{"content":"This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).\n CONCURRENT RENDERING OF CANVASES FOR DIFFERENT APPS AS PART OF 3D SIMULATION This invention focuses on rendering multiple 2D mobile apps simultaneously in XR devices, arranged within a 3D spatial layout.\nDownload PDF.\n ☞ USPTO #17/818,346 ☞ JUSTIA #20240045207 ☞ Granted #11988832 ☞ More Details\n SQUARE ORIENTATION FOR PRESENTATION OF CONTENT STEREOSCOPICALLY This invention focuses on rendering a 2D mobile app in XR devices in 3D without being affected by 2D app orientations. We have implemented a square orientation for the 2D app content, which disregards the impact of phone orientation. As a result, we can render 2D apps stereoscopically regardless of the phone’s position.
Download PDF.\n ☞ USPTO #17/812,987 ☞ JUSTIA #20240022703 ☞ Granted #12081729 ☞ More Details\n STREAMING OF COMPOSITE ALPHA-BLENDED AR/MR VIDEO TO OTHERS This invention introduces ARCast Cloud, a casting solution for mixed reality streams from AR glasses to the cloud. ARCast Cloud enables the streaming of AR glass and AR camera feeds to a web portal. The web portal can blend these streams to simulate the AR environment for remote viewers. This solution provides a seamless way to connect AR glasses to a web portal globally, allowing others to join the stream and see exactly what the user is experiencing. ARCast Cloud is designed for streaming mixed reality and other experiences to remote users.\nDownload PDF.\n ☞ USPTO #17/820,033 ☞ JUSTIA #20240064387\nCONVERSION OF 3D VIRTUAL ACTIONS INTO 2D ACTIONS This invention pertains to rendering a 2D mobile app in XR devices in 3D. We have developed a method to convert 3D interactions of XR devices into 2D interactions, enabling us to interact with 2D apps within a 3D environment.\nDownload PDF.\n ☞ USPTO #17/812,913 ☞ JUSTIA #20240019979\n IDENTIFICATION OF CALLBACK FROM 2D APP TO RENDER 3D MODEL USING 3D APP This invention focuses on rendering a 2D mobile app within a 3D environment for XR devices. We have developed a mechanism to facilitate communication between the underlying 2D app and the 3D app. This allows for the implementation of a 3D model viewer app within a standard phone app, enabling the passing of model information from the 2D app to open in 3D within the contained app.\nDownload PDF.\n ☞ USPTO #17/820,017 ☞ JUSTIA #20240061657\n","id":20,"tag":"me ; profile ; patents ; XR ; Streaming ; Lenovo ; ThinkReality ; Android ; Metaverrse ; design ; user experience ; about ; invention","title":"Patents","type":"about","url":"/about/patents/"},{"content":"AWE Asia is a flagship event of the Augmented World Expo (AWE), serving as a crucial platform to unite leading XR companies, investors, and international stakeholders to drive the growth of the XR market in Asia.\nAfter an exciting Day 1 at AWE Asia 2024, I’m thrilled to share my key takeaways from Day 2, where my fellow Thoughtworker Arijit and I both had the opportunity to present as speakers.\nExploring the Metaverse as a Collective Responsibility Day two kicked off with a brilliant keynote by Alvin Wang (汪丛青) Graylin, author of Our Next Reality. 📕 His book is a treasure trove of research, showcasing AI-powered Metaverse use cases, while co-author Louis Rosenberg offered a compelling critique on the ethical implications of these technologies. As I absorbed their insights, I found deep connections to my own work, 📕 Exploring the Metaverse, which also emphasizes the opportunities, challenges, and collective responsibility in adopting these emerging technologies. It felt like “Exploring the Metaverse is Our Next Reality” was the theme of the day.\nI had the pleasure of getting my copy of Our Next Reality signed by Alvin, who also expressed appreciation for my book Exploring the Metaverse. I’m nearly finished with Alvin’s book and look forward to sharing my reflections in the next article.
In the session, Alvin clearly called out that there are two paths ahead with these technologies, a very bad one or a good one; it’s on us which one we take.\nAcademic Research Fueling the Ecosystem I attended a fascinating session led by Mark Billinghurst, Director of IVE (Interactive and Virtual Environments) at the University of South Australia. He highlighted his research on developing cross-platform solutions and how multi-modal inputs are increasingly being supported across various platforms. Frank Guan from the Singapore Institute of Technology also presented their work on overcoming challenges in integrating AI into XR, such as maintaining processing within the required framerate.\nIt was inspiring to see such strong participation from academia, including exhibits at the Expo showcasing innovations like remote rendering with 5G technology. This kind of academic-industry collaboration is something I emphasized in my book, Exploring the Metaverse, where I call for a unified effort from academia, industry, individuals, and governments to shape a better future in the Metaverse.\nXR as the Ultimate Interface for AI In a fascinating discussion moderated by Eddie, founder of various ventures in XR, who opened with the history of AI and the technology hype cycle: just as the internet was once dismissed as a passing fad, so too are AI, XR, the Metaverse, and Blockchain sometimes misunderstood. Rahul Mishra, who heads Web3 & Immersive Tech Initiatives at Shemaroo, highlighted how companies like Shemaroo are making significant bets on the Metaverse. Megha Chawla, who leads business development at Nimo Planet, shared insights on the growing XR ecosystem in India, calling for greater investment in this growing market.\nI shared Thoughtworks’ approach to value-driven XR adoption, emphasizing how we focus on finding the right ROI rather than force-fitting technology. Our work with Lenovo ThinkReality and other key projects showcases our commitment to engineering excellence in the XR world.\n My colleague Arijit Debnath took the stage with an impactful session on spatial design. Drawing from his background in architecture, he boldly stated that many XR designs from the software industry lack true consideration of space. Arijit demonstrated how principles of classical design disciplines—such as architecture—are essential for creating immersive XR experiences.\nHis hand-drawn sketches beautifully illustrated the convergence of these concepts with today’s needs, earning praise from attendees as some of the best artifacts seen at AWE.\nXR Experiences: The Ultimate Reward The day continued with more hands-on experiences in the exhibition hall, where I explored smart gloves, 4D immersive videos, and more. The variety of showcases was impressive, and I wish I had more time to attend all the sessions and demos. The event concluded with the Auggie Awards, which brought a surge of energy and excitement to the room. Collaboration and Networking: The Key to Success AWE Asia proved to be a friendly and collaborative environment, filled with enriching conversations. My discussion with Dhawaj Agrawal from Meta about Reality Labs’ research on XR technology challenges was particularly insightful. India’s growing ecosystem and Meta’s contributions there are exciting, with AI playing a pivotal role in advancing accessible XR solutions.\nA big thank you to Hilary Chiu and Olivia Lee for hosting a delightful dinner for some AWE participants.
It was a pleasure to discuss the mission of the AllStarsWomen APAC chapter and the importance of building inclusion in technology. I shared insights into how we at Thoughtworks prioritize Diversity & Inclusion, including our Vapsi program, which helps women return to work after a maternity break.\nThe conversation with Dr. Luke Soon from PwC, Roshan Raju, Founder of Imersive.io, and others was truly engaging, and I look forward to finding synergies in our work. It was also a pleasure to share T-shirts featuring my book cover with some of them. Looking forward to more collaborations in the future!\nThe evening was capped off by witnessing the shared values across cultures, exemplified by the beautiful Indian native displays at Marina Bay in Singapore. As Alvin mentioned in the kickoff, when viewed from space, the world has no boundaries.\nAWE Asia 2024: An overwhelming experience AWE Asia 2024 has been an incredibly positive and enriching experience for me. A big thanks and congratulations to David Weeks and the entire team for hosting such a remarkable event.\nLet’s stay connected and keep exploring the limitless possibilities of the Metaverse and related technologies together.\nFind more pictures of AWE Asia 2024 here.\n","id":21,"tag":"thoughtworks ; event ; talk ; xr ; ar ; vr ; mr ; ai ; spatial-computing ; community ; learnings ; AWE ; metaverse ; business ; panelist ; genAI ; 3d ; expert insights ; genXR ; book ; enterprise ; takeaways ; lenovo","title":"Exploring the Metaverse is OUR NEXT REALITY - AWE Asia Day 2","type":"post","url":"/post/aweasia-2024-day2/"},{"content":"AWE Asia is a flagship event of AWE (Augmented World Expo), dubbed “XR’s Most Essential Conference” and “The XR Conference for Everyone”. It is the world’s leading AR/VR expo after more than 10 years of dedication, development, and high-quality events. AWE organizes three annual events (USA, Asia, EU) and holds a number of other sessions all over the world. AWE Asia is an opportunity to bring together outstanding XR companies, investors, and other international stakeholders in Asia to promote the development of the local XR market.\nI am excited to announce that we will be speaking at AWE Asia to share “Global Insights from XR Leaders in India: AI-Powered Real-Time Adaptation in XR”.\nEvent Details: 📅 Date: August 26-28, 2024 📍 Location: Singapore\nRegister here: https://www.aweasia.com/get-tickets, and find the packed agenda here\nSession Details: As the XR landscape continues to evolve, the quest for more immersive and responsive experiences intensifies. This discussion explores the transformative power of AI in creating dynamic XR experiences that learn and adapt to user data and real-world conditions, as informed by the panel’s experience in the Indian market. Join us as industry leaders from Shemaroo Entertainment, Thoughtworks, Nimo Planet & XROM share their insights on:\n Leveraging AI to personalize and optimize XR experiences - How can user data be harnessed to tailor XR environments to individual preferences and behaviors? Real-time adaptation for enhanced immersion - Explore the use of AI to adjust XR environments in real-time based on user actions, emotions, or surrounding physical conditions. Adapting XR for use cases that replace traditional computers - Discover how AI-powered XR can provide a seamless and intuitive interface for tasks traditionally performed on computers, revolutionizing productivity and interaction.
The future of AI in XR training simulations- How can AI personalize training simulations and provide real-time feedback for improved learning outcomes? Challenges and opportunities- Discuss the practical considerations, ethical implications, and exciting possibilities that come with AI-powered adaptation in XR. 🕥 Time : August 28, 2024, 10:25 AM – 10:50 AM\n📍 Location: Main Stage\n🗣️ Speakers: Eddie Avil (Moderator), Rahul Mishra, Kuldeep Singh, Megha Chawla, Arpit Sharma\n Video Recording Keep in touch, keep learning, keep sharing!\n","id":22,"tag":"thoughtworks ; event ; speaker ; talk ; xr ; ar ; vr ; mr ; ai ; spatial-computing ; community ; learnings ; AWE ; metaverse ; business ; panelist ; genAI ; 3d ; expert insights ; genXR ; book ; enterprise","title":"AWE Asia | AI-Powered Real-Time Adaptation in XR","type":"event","url":"/event/aweasia-2024/"},{"content":"AWE Asia is a flagship event of AWE(Augmented World Expo), often dubbed as “XR’s Most Essential Conference” and “The XR Conference for Everyone”. With three major annual events in the USA, Asia, and Europe, along with numerous other sessions worldwide, AWE Asia is a prime opportunity to bring together leading XR companies, investors, and international stakeholders to accelerate the growth of the XR market in Asia.\nThis year, I had the privilege of attending the event in Singapore in person as a speaker. Here are some key takeaways from Day 1.\nThe Grand Opening - AI ❤️ XR David Weeks kicked off the event, highlighting the growing participation of XR in Asia and emphasizing gender diversity at the conference. Ori Inbar, AWE Co-Founder, officially opened the event with the powerful message, \u0026ldquo;AI loves ❤️ XR.\u0026rdquo;\nHe took us on a journey through 50 years of XR history, illustrating how every year brings a new breakthrough for XR, even though 2023 has been challenging. Ori\u0026rsquo;s message was clear: AI and XR are not in competition; instead, they complement each other beautifully. AI enhances human capabilities, and together with XR, they are poised to revolutionize the future.\nMeta Goes Big with XR I then attended a session by Mark Jeffrey from Meta, who discussed Meta\u0026rsquo;s ambitious expansion in the XR space. He showcased how Meta is bringing 2D apps like Zoom and Teams, Office 365, etc into spatial environments on their XR devices. Mark highlighted compelling use cases, such as Pfizer transforming extensive documents into XR training modules and Lufthansa utilizing XR for sales and soft skills training.\nMeta’s Quest for Business was presented as a comprehensive solution for all enterprise needs, supported by over 1,000 ISV solutions. The session offered great insights into the future of mixed reality and workspace collaboration.\nMapping the Real-World Metaverse Niantic, a pioneer in the real-world metaverse, is mapping the world to enable immersive, real-world collaboration. Brian McClendon discussed their approach—Capture, Create, and Play—and how it’s bringing the real-world metaverse to life. They recently launched Scaniverse 4.0, encouraging everyone to join the community. Niantic is accelerating adoption by integrating their 8th Wall ecosystem for web-based XR.\nI also had the opportunity to discuss my book 📕, \u0026ldquo;Exploring the Metaverse\u0026rdquo;, with Brian, who appreciated the comprehensive approach I took in defining the metaverse.\nThe Rise of Virtual Commerce There was a dedicated session on virtual economy and commerce. Mr. 
Aditya Mani and others explored the evolving role of avatars and how expressions are becoming crucial in virtual commerce. While it might seem unusual to invest real money in virtual goods, this is the reality for the next generation of users.\nFrom virtual try-ons to AI-enhanced avatars, the possibilities are expanding rapidly. Virtual commerce, despite some current limitations, is becoming an integral part of the digital economy.\nHealthcare, Wellness, and Responsible Adoption Several sessions focused on the application of XR in healthcare, climate change awareness, and sustainability. I was particularly impressed by the work of Srushti Kamat and Lucy Hayto.\nDr. Prash P presented fascinating concepts in psychedelic therapy and how VR is naturally suited to certain steps in this therapy. The potential to create and replay spatial memories, much like the Pensieve in Harry Potter, was especially intriguing.\n Networking and Promoting “Exploring the Metaverse” I took every opportunity to promote my book, 📕 “Exploring the Metaverse,” which is now available globally. The book cover was proudly displayed on my T-shirt as I engaged with attendees.\nI also represented Thoughtworks, showcasing our pioneering work in engineering practices for software development. We continue to inspire the XR software space with practices like automation testing, Agile methodologies, CI/CD, and AI-backed accelerators to fast-track development. The conversations were incredibly engaging, and I look forward to continuing them tomorrow.\nFirst-Hand Experience at the EXPO Hall The EXPO Hall was packed with fascinating XR use cases from companies like Lenovo, Meta, and Sony, featuring a variety of devices, from smart contact lenses to smart gloves and rings. I had the chance to experience some of these solutions firsthand, including dancing with a virtual partner, thanks to Dark Art Software. Although I couldn’t cover all the booths, I spent quality time at Lenovo’s XR Booth, where the ThinkReality products are showcased; these are also backed by Thoughtworks.\nIt was great to meet Martand and Bon Teh in person, along with ISV partners. We made a copy of my book available at the booth. Martand also presented how enterprise XR is expanding and the role of the ThinkReality ecosystem in this space.\nReady for Day 2 These highlights barely scratch the surface of what I experienced on Day 1. The overwhelming response has left me eager for Day 2, which will also include two sessions from Thoughtworks—one panel discussion by me and another session by Arijit.\nStay tuned for more insights from this incredible event!\n","id":23,"tag":"thoughtworks ; event ; talk ; xr ; ar ; vr ; mr ; ai ; spatial-computing ; community ; learnings ; AWE ; metaverse ; business ; panelist ; genAI ; 3d ; expert insights ; genXR ; book ; enterprise ; takeaways ; lenovo","title":"AI ❤️ XR - AWE Asia Day 1 Takeaways","type":"post","url":"/post/aweasia-2024-day1/"},{"content":"My book ‘Exploring the Metaverse’ is now available in stores globally, including Amazon and BPB Online. It features forewords and testimonials from a diverse group of industry leaders, adding valuable perspectives to the content.\nThe book explores the opportunities and concerns of the metaverse, outlining a path forward that calls for collective action from academia, industry, government, and individuals. I initiated this collective effort by gathering testimonials from a diverse group of experts.
I initiated this collective effort by gathering testimonials from a diverse group of experts.\nOne of the most distinguished endorsements comes from Shri Arjun Ram Meghwal, Hon\u0026rsquo;ble Minister of State, Ministry of Law and Justice (Independent Charge), Government of India.\nIt has been a privilege to meet the Hon\u0026rsquo;ble Minister on various occasions to discuss the future of technology.\n🙏 I extend my heartfelt thanks to the Hon\u0026rsquo;ble Minister and his team for reviewing early versions of the book, providing invaluable insights, and contributing the following foreword:\n \u0026ldquo;It gives me immense pleasure to learn that Mr. Kuldeep Singh is bringing out a book titled \u0026ldquo;Exploring the Metaverse: Redefining Reality in the Digital Age\u0026rdquo; embarking the ongoing technological advancements and unleashing the transformative potential for the impetus to solve the global challenges. Bharat has taken a leap forward in the technology-dominated era. The New Age Governance requires technology-based solutions for amicable redressal of local as well as global concerns. The Digital India program aims to transform the nation into a knowledge-based economy and a digitally empowered society by ensuring digital services, digital access, digital inclusion, digital empowerment and bridging the digital divide. Society and Science always co-evolves, and tech-based tools always become an important bridge in this process. The changing paradigm of the digital age has made an imperative to redefine the realities of the digital age for betterment of the humanity. The success of the digital public platforms like Aadhaar, UPI, Digilocker, UMANG, Cowin, AarogyaSetu, GeM, Diksha, E-Sanjeevani, E-Hospital, E-Office, Tele-Law, Tele- medicine, and AI-assisted tools like Bhashini, OpenNyAI have been phenomenal. These platforms not only facilitated “Ease of Living” but also empowered the lives of common citizens. As the Nation is now poised to internationalize India’s most significant contribution to the Digital world in this Techade, this book is a commendable effort towards showcasing technological acumen for the reference of the tech fraternity. Considering the challenges of the digital world, the emphasis of the book to build upon responsible tech is an appreciable intervention. This skillfully and lucidly written book emerges as an essential addition to the library of any reader interested in the innovations and reflections of technology as well as its multidimensional application for the global good. I convey my best wishes for the successful publication of the book and the author\u0026rsquo;s future endeavors. — Shri Arjun Ram Meghwal, Minister of State (I/C) for Law and Justice and Minister of State for Parliamentary Affairs and Culture, Government of India\n This browser does not support PDFs. Please download the PDF to view it: Download PDF.\n I aim to keep fostering collaboration between industry and government, as highlighted in my book. The future depends on our collective action, and we hold the responsibility for how wisely—or poorly—we embrace and shape technology.\n🛍️ Get Your Copy Today! 
🇧🇪 Amazon Belgium 🇧🇷 Amazon Brazil 🇨🇦 Amazon Canada 🇫🇷 Amazon France 🇩🇪 Amazon Germany 🇮🇳 Amazon India 🇮🇹 Amazon Italy 🇯🇵 Amazon Japan 🇲🇽 Amazon Mexico 🇳🇱 Amazon Netherlands 🇵🇱 Amazon Poland 🇸🇦 Amazon Saudi Arabia 🇸🇬 Amazon Singapore 🇪🇸 Amazon Spain 🇸🇪 Amazon Sweden 🇹🇷 Amazon Turkey 🇦🇪 Amazon UAE 🇬🇧 Amazon UK 🇺🇸 Amazon US 👉 Rest of the world \nBPB Online (use code “bpbkuldeep” to get 20% off)\n☞ BPB Online Worldwide ☞ BPB Online India ☞ Free Preview\n☞ Exploring the Metaverse - Synopsis\nMore about my books here\n","id":24,"tag":"xr ; ar ; vr ; metaverse ; book ; available now ; event ; buy ; launch ; meta ; blockchain ; NFT ; IOT ; AI ; thoughtworks ; govt ; dpg ; event ; MR ; testimonial ; india ; MIETY ; goi ; tele-law ; UPI ; ONDC ; policies","title":"🙏 Exploring the Metaverse | Testimonial 6","type":"post","url":"/post/exploring-the-metaverse-testimonial-6/"},{"content":"My book ‘Exploring the Metaverse’ is now available in stores globally, including Amazon and BPB Online. It features forewords and testimonials from a diverse group of industry leaders, adding valuable perspectives to the content.\nOne of the most distinguished endorsements comes from Prof. M Manivannan, who leads the Experiential Technologies Innovation Center (XTIC) at IIT Madras.\nProf. Mani has spearheaded innovation at XTIC and has made India proud. His pioneering work in areas like iTad (interactive Touch Active Display) and integrating haptics into virtual reality solutions is noteworthy. He believes that India will become the “Metaverse corridor” for the world and lead it towards Vasudhaiva Kutumbakam. Under his leadership, startups like Merkel Haptics have successfully transitioned academic concepts into practical applications for industry and consumers.\nIt has been a privilege to share the stage with Prof. Mani at the XROS Fellowship Summit, the Metaverse India Policy and Standards (MIPS) initiative, and through our collaboration on the white paper “XR in India”.\n🙏 I extend my heartfelt thanks to Prof. Mani for reviewing early versions of the book, offering his invaluable insights, and contributing the following foreword:\n “Future of Metaverse in India presents a captivating parallel to the concept of Maya, a concept that has existed for centuries in the country. Kuldeep offers insights, analyses, and visions that collectively contribute to a deeper understanding of the Metaverse and its transformative potential. In this remarkable journey through the pages of this book, the readers are invited to explore the impacts and the challenges of this uncharted territory of a digital universe that has the potential to transcend the confines of screens and pixels to become an integral part of our everyday existence soon. This book has highlighted the need for a collective effort from industry, government, academia, and people to make this Metaverse come true. The role of academia in the Metaverse is multifaceted and dynamic. As the Metaverse continues to evolve, academia stands as a guiding force, illuminating the path toward a more interconnected and knowledge-rich digital reality. I hope the readers would embark with an open mind and an adventurous spirit, for the Metaverse awaits.” — Prof. M Manivannan, Head of Experiential Technologies Innovation Center (XTIC), IIT Madras\n I hope to continue fostering collaboration between academia and industry, as emphasized in my book.
The future hinges on collective action from all of us, and we will be responsible for how wisely or poorly we embrace and shape technology.\n🛍️ Get Your Copy Today! 🇧🇪 Amazon Belgium 🇧🇷 Amazon Brazil 🇨🇦 Amazon Canada 🇫🇷 Amazon France 🇩🇪 Amazon Germany 🇮🇳 Amazon India 🇮🇹 Amazon Italy 🇯🇵 Amazon Japan 🇲🇽 Amazon Mexico 🇳🇱 Amazon Netherlands 🇵🇱 Amazon Poland 🇸🇦 Amazon Saudi Arabia 🇸🇬 Amazon Singapore 🇪🇸 Amazon Spain 🇸🇪 Amazon Sweden 🇹🇷 Amazon Turkey 🇦🇪 Amazon UAE 🇬🇧 Amazon UK 🇺🇸 Amazon US 👉 Rest of the world \n20% Off on BPB Online\n☞ BPB Online Worldwide ☞ BPB Online India ☞ Free Preview\n☞ Exploring the Metaverse - Synopsis\nMore about my books here\n","id":25,"tag":"xr ; ar ; vr ; metaverse ; book ; available now ; event ; buy ; launch ; meta ; blockchain ; NFT ; IOT ; AI ; thoughtworks ; xtic ; iit ; event ; MR ; testimonial ; XROS ; MIETY ; ficci ; MIPS ; university ; students ; open source ; whitepaper ; practices ; standards ; policies","title":"🙏 Exploring the Metaverse | Testimonial 5","type":"post","url":"/post/exploring-the-metaverse-testimonial-5/"},{"content":"My book \u0026lsquo;Exploring the Metaverse\u0026rsquo;, is now available in stores globally, including Amazon and BPB Online. It features forewords and testimonials from a diverse group of industry leaders.\nOne such testimonial comes from Venkatesh Ponniah, Global Head of Emerging Technologies at Thoughtworks.\nWe call him Venki. It has been a privilege to receive guidance from Venki on technology strategy and governance. He has great insights into emerging technology businesses, particularly in AI, SDV, Digital Twins, and XR. His ability to delve deep into these technologies and provide clear, strategic direction is invaluable.\n🙏 I extend my heartfelt thanks to Venki for reviewing early versions of the book, sharing his valuable insights, and writing the following foreword:\n \u0026ldquo;Kuldeep\u0026rsquo;s book navigates the complex landscape of recent technological advancements, from the convergence of our physical and digital worlds to the rise of the Metaverse and the accessibility of emerging technologies. In a clear and accessible manner, he demystifies these concepts and provides valuable insights for both technologists and businesses. By incorporating practical software engineering concepts and standards, especially in the context of XR, Kuldeep ensures a practical and adaptable dimension. Drawing from his extensive experience with global enterprises, the book offers a well-rounded perspective on how emerging technologies can benefit various industries. With a thoughtful exploration of topics like privacy and sustainability, this book is a valuable read for both tech enthusiasts eager to learn and businesses contemplating the adoption of emerging technologies.\u0026rdquo; — Venkatesh Ponniah Global Head of Emerging Technologies, Thoughtworks\n 🛍️ Get Your Copy Today! 
🇧🇪 Amazon Belgium 🇧🇷 Amazon Brazil 🇨🇦 Amazon Canada 🇫🇷 Amazon France 🇩🇪 Amazon Germany 🇮🇳 Amazon India 🇮🇹 Amazon Italy 🇯🇵 Amazon Japan 🇲🇽 Amazon Mexico 🇳🇱 Amazon Netherlands 🇵🇱 Amazon Poland 🇸🇦 Amazon Saudi Arabia 🇸🇬 Amazon Singapore 🇪🇸 Amazon Spain 🇸🇪 Amazon Sweden 🇹🇷 Amazon Turkey 🇦🇪 Amazon UAE 🇬🇧 Amazon UK 🇺🇸 Amazon US 👉 Rest of the world \n20% Off on BPB Online\n☞ BPB Online Worldwide ☞ BPB Online India ☞ Free Preview\n☞ Exploring the Metaverse - Synopsis\nMore about my books here\n","id":26,"tag":"xr ; ar ; vr ; metaverse ; book ; available now ; event ; buy ; launch ; meta ; blockchain ; NFT ; IOT ; AI ; thoughtworks ; event ; MR ; testimonial","title":"🙏 Exploring the Metaverse | Testimonial 4","type":"post","url":"/post/exploring-the-metaverse-testimonial-4/"},{"content":"In a world where AI is ubiquitous, technologies like AR, VR, and XR often raise eyebrows, despite their vast potential. AI is not only propelling these technologies forward but also preparing them for the next stages of development. There are tremendous opportunities to leverage their potential for business transformation. I am honored to be invited by Express Computers, India’s Leading IT Magazine, to speak on this topic and dispel common myths.\nIt is a privilege to accept this invitation for such a prestigious event for CXOs and senior executives in Indian businesses. Special thanks to Thoughtworks for sponsoring it.\n 👉 Register here\nThe Session Highlights: This webinar delves into how AI-powered XR is transforming various industries, offering innovative solutions that enhance productivity, streamline operations, and create immersive customer experiences.\n Understanding the XR World: Explore how XR digitally extends our surroundings, including the technologies, trends, myths, and realities involved. AI-Powered XR: Discover how AI helps make sense of our environment and the capabilities this machine-generated sense offers. Empowering Communication: Learn how artificial sense in XR can revolutionize industries such as customer support, training, education, consulting, marketing, sales, and entertainment. Enhancing Productivity: See how working with AI-enhanced senses can improve productivity, efficiency, and effectiveness, making us more compatible with challenging and complex work environments. Beyond Imagination: Understand how this artificial sense can transcend physical boundaries, allowing experiences before realization or even after life, and rethinking art and creativity. Considering Costs: Discuss the potential side effects and define actionable steps to mitigate them. 📕 The Book ‘Exploring the Metaverse’ This session draws significant inspiration from my book, ‘Exploring the Metaverse.’ The book is now available globally.\nYou can get your copy from the links below:\n🇦🇺 Amazon Australia 🇧🇷 Amazon Brazil 🇨🇦 Amazon Canada 🇩🇪 Amazon Germany 🇪🇸 Amazon Spain 🇫🇷 Amazon France 🇮🇳 Amazon India 🇮🇹 Amazon Italy 🇯🇵 Amazon Japan 🇳🇱 Amazon Netherlands 🇲🇽 Amazon Mexico 🇬🇧 Amazon UK 🇺🇸 Amazon US ☞ BPB Online Worldwide ☞ BPB Online India ☞ Free Preview\nThe session presentation:
Please download the PDF to view it: Download PDF.\n Video Recording It is available on demand on Express Computers\u0026rsquo; YouTube channel below.\n Keep in touch, keep learning, keep sharing!\n","id":27,"tag":"thoughtworks ; event ; speaker ; talk ; xr ; ar ; vr ; mr ; ai ; spatial-computing ; community ; learnings ; computer express ; metaverse ; business ; lecture ; genAI ; 3d ; expert insights ; genXR ; book ; takeaways ; enterprise","title":"Harnessing Business Opportunity with AI Powered XR","type":"event","url":"/event/ai-powered-xr/"},{"content":"Welcome to Internet 3.0, a transformative era where reality takes on new dimensions, driven by eXtended Reality (XR) at its core, along with AI, Cloud, IoT, and more. This evolution is the product of collaborative efforts involving academia, government, industry, and individuals. My book, Exploring the Metaverse, available globally, outlines this kind of collaboration as a call to action for all of us.\nFICCI XROS Fellowship Program The Federation of Indian Chambers of Commerce \u0026amp; Industry (FICCI) is the voice of India\u0026rsquo;s business and industry for policy change. The FICCI XROS Fellowship Program is curated to empower Indian XR developers through stipends, mentorship, and open-source project contributions.\nThe initiative aims to cultivate digital public goods and advance careers in XR technology. It is supported by the National e-Governance Division (NeGD), Ministry of Electronics and IT, Govt. of India, as the technical partner, along with the Office of the Principal Scientific Adviser to the Government of India (PSA) as the knowledge partner. It is implemented by Reskilll and well supported by Meta.\nI also had the privilege of participating as a panelist at the XROS Fellowship Summit event.\nXROS Fellowship Report A recently released XROS Fellowship Report showcases the entire XR community\u0026rsquo;s industry projects, mentor contributions, and outcomes. Download the full report here.\nThis browser does not support PDFs. Please download the PDF to view it: Download PDF.\n Pleased to share that it covers my article, eXtended Reality - the Enabler, in the expert insights section.\nA special thanks to Rohit Sardana and the team for orchestrating this program and inviting me to write for the final report. It was truly an inspiring and collaborative experience.\nKeep learning, Keep sharing, Keep growing!\n","id":28,"tag":"xr ; ar ; vr ; mr ; whitepaper ; open source ; metaverse ; ficci ; university ; students ; startup ; expert insights ; meity ; xros","title":"XR - the Enabler | XROS Fellowship Report","type":"post","url":"/post/extending-reality-the-enabler-xros/"},{"content":"Season of AI, a virtual event series of the Microsoft Azure Developer Community by Reskill, stands as a flagship event in empowering aspiring professionals.
Recently, I had the privilege of delivering a session, where I had the opportunity to guide the community and share insights from my experience in XR and AI development.\nIt was an honor to accept this invitation, and special thanks to Rohit Sardana and Azkar Khan for extending the invitation.\n The Session Highlights: In this session, titled \u0026ldquo;Azure AI Fostering Sense within XR world\u0026rdquo;, I took a different approach to deliver the content, considering the existing sessions on Azure AI.\n I simply reversed it: \u0026ldquo;XR World \u0026gt; Sense Within \u0026gt; Fostering AI \u0026gt; Azure AI\u0026rdquo;\n The session began by explaining the concept of the \u0026ldquo;eXtended Real World (XR World)\u0026rdquo;. We explored what it means to sense within this world and how we, through our intelligence, make sense of our surroundings. We then discussed how artificial intelligence (AI) can understand, interpret, and extend this world.\nThe session highlighted the intertwined relationship between AI and XR, emphasizing their deep integration and the emergence of Generative XR.\nMany of these concepts are discussed in depth in Chapter 4 of my book, Exploring the Metaverse. You can get your copy from Amazon Globally or BPB Online\nWe then delved into what Azure AI and AI Vision bring to the table, discussing their tested applications and the future direction of AI in the XR domain.\nThe session presentation and recording Here are the slides I presented:\nThis browser does not support PDFs. Please download the PDF to view it: Download PDF.\n The video recording is available on the Azure Developer Community\u0026rsquo;s YouTube Channel\n Keep in touch, keep learning, keep sharing!\n","id":29,"tag":"microsoft ; event ; speaker ; azure ; talk ; xr ; ar ; vr ; mr ; ai ; spatial-computing ; community ; learnings ; reskill ; metaverse ; students ; speaker ; lecture ; genAI ; 3d ; expert insights ; genXR ; book ; takeaways","title":"Speaker - Azure AI Fostering Sense within XR World","type":"event","url":"/event/azure-ai-fostering-sense-within-xr-world/"},{"content":"On July 5, 2004, I embarked on my professional journey. These past 20 years have been nothing short of adventurous and fulfilling.\n \u0026ldquo;Learning is a natural process, and it matures by sharing\u0026rdquo;\n I\u0026rsquo;ve had a wealth of learnings throughout these two decades, as I worked on over 50 projects spanning various domains and technologies. I\u0026rsquo;ve had the privilege of directly working with over a thousand individuals from diverse backgrounds across different organizations. My learning has been sharpened through my experiences, leading me to write over 120 articles and participate in more than 60 events in roles such as judge, speaker, panelist, or guest lecturer. I may have impacted over 50,000 individuals through these endeavors.\nI have also authored and foreworded books, and received a few patents and awards. As time passes, these stats may continue to accumulate, but reflecting on the past becomes a vital aspect of charting the future.\nMy early life experience is shared here, and in this article I present 20 reflections from the past 20 years that have shaped my path and continue to guide me.\n1. A Luxurious start\u0026hellip;but destiny prevails After graduating from NIT Kurukshetra, I, along with Vineet, Vishnu, and a few friends, joined Quark Media House near the beautiful city of Chandigarh.
The office offered 5-star facilities: mornings began at the gym, followed by a royal breakfast, and work was complemented by stocked pantries and excellent lunch options. Evenings were spent in the swimming pool or on the playground.\nBefore I knew it, a month had passed, and with my first salary, I bought my first motorbike. Months flew by as I worked in R\u0026amp;D on Java and CORBA interoperability projects and explored nearby hill stations on weekends. Soon, we celebrated our first work anniversary.\nAs time passed, I realized the need to learn more and grow, just like everyone else. During this period, I met Ritu, and it felt like my time in Chandigarh was meant for meeting my life partner, while my professional journey would eventually lead me to the NCR region.\n2. A small connection can take you far\u0026hellip;just find it While exploring various work opportunities, I landed an interview at Nagarro. After a few technical rounds, I met the co-founder, who surprised me by asking about Apollo and Appoly, the canteens near the boys\u0026rsquo; and girls\u0026rsquo; hostels at my college. Soon, I realized I was speaking to Vikram Sehgal, an alumnus and gold medalist from my college. This unexpected connection led me to leave other options behind and join Nagarro, a small company with a few hundred employees at the time.\nTransitioning from the luxurious life at Quark to a more modest environment at Nagarro was challenging. However, I was not accustomed to that level of luxury, and my primary motivation was the responsibility of supporting my sister\u0026rsquo;s education and uplifting my family. This sense of responsibility gave me direction and purpose.\nLittle did I know that I would contribute here for a long time and help Nagarro grow multifold in the coming years.\n3. Building a strong base\u0026hellip;don\u0026rsquo;t miss the fun My journey at Nagarro began on a high note, thanks to strong techies like Deepak Nohwal, Amit Chawla, and Ashish Manchanda, who constantly encouraged me with challenging problems. I focused on building core strengths and became an exceptional troubleshooter. During this time, I delved deeply into Java, JVM internals, and JEE, mastering software development practices for complex and demanding projects.\nAmidst all the hard work, we never missed out on fun. I was part of a team that firmly believed in working hard and partying harder. Around this time, Ritu completed her engineering and started her professional journey in Gurgaon at another company. She joined most of our Nagarro parties, adding to the joy and camaraderie.\n4. Balancing work and married life As my work responsibilities increased, so did my life responsibilities. Balancing married life and work can be challenging, but in my case, Ritu joined Nagarro soon after our marriage, and we became colleagues. This unique situation blurred the lines between work and life for us. We traveled together, ate together, stayed late at the office together, watched movies at work, partied together, and commuted home together.\nThis close-knit routine helped us focus well on our jobs. We realized that balancing work and life is just a state of mind. We enjoyed both work and life together while fulfilling our family responsibilities. I continued supporting my sister\u0026rsquo;s engineering and medical education, and one sister got married soon after completing her education during this time.\nBy keeping an open mind and embracing our responsibilities, we found a harmonious path forward.\n5.
Leading the path by expanding the horizons I was guided onto the leadership path by mentors like Sharad Narayan. They encouraged me to be flexible, push beyond boundaries, and embrace diversity in technology, work, and life. I expanded my expertise beyond Java, diving into C++, PHP, and more to apply engineering practices broadly. This helped me deal with my first recession in the IT industry in 2009-10.\nIn this period, I established myself as a T-shaped professional, leading multiple complex projects. I became an early adopter of continuous integration concepts, build scripts, and more.\nSome of the complex projects required me to revisit my engineering mathematics textbooks to meet the demands of a client with a PhD in statistical analysis. This overwhelming experience broadened my perspective in multiple directions.\n6. Embracing multi-cultural experiences through travel Traveling to different countries and places broadens your horizons and exposes you to diverse cultures. My first onsite visit was a delightful experience, especially flying business class. Most of my early business travels took me to Germany for an aviation domain client, but I also had the opportunity to explore various parts of Europe with colleagues.\nRitu and I have traveled together to many domestic destinations, but our first international trip together was to Malaysia and Langkawi. We discovered that travel and tourism offer the best learning experiences, expanding our perspectives on people and their cultures. These experiences enhanced our acceptance of diversity and inclusion, embracing different cultures, people, and practices.\n7. Growth takes on a new meaning The concept of growth transforms when you become a parent. Welcoming our first child, Lakshya, was a profound turning point for us. Suddenly, all our thoughts and priorities revolved around him.\nLakshya introduced a new sense of responsibility while we continued fulfilling the existing ones. Another of my sisters completed engineering and got married. Professionally, I was thriving, leading multiple teams and projects, and gaining the confidence to mentor others. Nagarro was recognising my work well. This personal and professional growth created a richer, more fulfilling journey.\n8. Establishing a foundation for sharing, caring, and growing We set up an Architecture Group for Java technologies at Nagarro, which became a platform for building standards and practices, mentoring, and training people. It also supported sales and fostered innovative concepts. Manmohan Gupta and Archna Aggarwal were instrumental in establishing this foundation, supported by talented colleagues like Amit, Rajneesh, Piyush and more. This initiative allowed me to work beyond project boundaries, operating at a higher level as a consultant and trusted advisor for clients.\nDuring this time, I supported multiple consulting assignments at client locations and expanded my expertise in Enterprise Java Technologies, including ESB, BPM, and ODM solutions. Our focus was on creating a group dedicated to learning and sharing with a wider audience. This period marked the beginning of my sharing journey, starting with my first international session at JavaOne. These experiences established me as an enterprise architect and consultant.\n9. Fostering innovation with centers of excellence Building on my experience of leading the Architecture Group, I led initiatives to establish innovation centers within the organization for emerging technologies like IoT and AR/VR.
This opportunity allowed me to explore various untapped tech domains beyond software, including agriculture, wind energy, oil fields, power plants, manufacturing, and automotive.\nWe developed interesting showcases and prototypes for connected workers and Industry 4.0 use cases, and supported numerous clients. I attended various conferences and hosted showcases, including partnering with Google for their Glass event in France, accompanying Kanchan Ray. These enriching experiences helped me cultivate entrepreneurial leadership skills and furthered my professional growth. They took me around the world, helping me reconnect with old friends, make new ones, and experience extreme weather conditions and the associated fun. Not to mention that our kids started joining the fun at our team outings!\n10. Material wealth is important\u0026hellip;but it\u0026rsquo;s not everything Material wealth is also a symbol of growth and can provide great satisfaction and stability, especially when it comes to owning a home. Amidst the hustle and bustle of life, we finally took possession of our first flat in Gurgaon. Juggling between office, daycare, and getting our new home ready was challenging, but after months of effort, it felt entirely worth it.\nSoon, we also bought a new car, which gave us the opportunity to enjoy traveling even more. While the happiness derived from material possessions is significant, it is important to remember that it is temporary. True fulfillment comes from within, a lesson I learned from my Guru. This teaching was reinforced by an unexpected episode, which I will explain next.\n11. Prioritising health\u0026hellip;the greatest wealth We often realize the importance of something only when we lack it. All my achievements and material wealth felt insignificant when I was diagnosed with Pott\u0026rsquo;s Spine, a form of tuberculosis affecting the spine. It was hard to believe since my only symptom was acute back pain during sneezing or coughing, which I had dismissed.\nAfter months of seeing specialists with no clear answers, a general physician at a small hospital insisted on thorough tests after I visited for a fever. The diagnosis was shocking, and I had to be hospitalized immediately. Overnight, I became well-versed in Pott\u0026rsquo;s Spine and its treatment, seeking multiple opinions to confirm the diagnosis. Leaving all work behind, I drove myself to the hospital and was admitted for 15 days of intense medication, but this treatment left me so weak that I could barely walk. I was prescribed two months of bed rest, over a year of TB medication, a spine support belt, and a high-protein diet.\nMy workaholic nature made staying in bed doing nothing incredibly difficult. The treatment felt so long that I often questioned whether I was actually getting better. Thankfully, I received tremendous support from my family and Nagarro. Special thanks to Vikas Burman, whose guidance and personal experience with a more severe condition kept my spirits up and strengthened my belief in recovery.\nThis episode taught me that true wealth is health, and taking care of it should always be a priority.\n12. Becoming a generalist before a specialist My doctor emphasized that starting with a general physician can provide a more comprehensive understanding and guide us to the right specialist when needed. This inspired me to focus on being a generalist before specializing in the technology domain.\nDuring my recovery, I expanded my knowledge breadth while maintaining depth.
Despite wearing a spine support belt for months, I didn\u0026rsquo;t miss out on fun or work events. I continued leading the IoT/ARVR Center of Excellence with Umang Garg, supporting clients, and giving demos with devices like Hololens and Google Glass, often resembling a superhero with my medical gear. Soon, I resumed traveling.\nThe support and humor from those around me made this challenging time enjoyable and highlighted the value of a broad perspective in both health and professional growth.\n13. Embracing bigger roles and responsibilities I assumed the role of Technology Director and took on the additional responsibility of delivering a highly complex project involving the implementation of machine learning algorithms for aviation forecasting and estimation.\nThe scale of this project was immense, requiring the management of over 200 microservices that needed to work harmoniously day and night, processing multiple gigabytes of data in a short span of time. This role provided deep experience in cloud-native architectures and numerous challenges to overcome; find some key lessons here.\nThis experience also prepared me for even greater responsibilities that were on the horizon.\n14. Growing together and celebrating milestones My wife, Ritu, was also advancing in her career at Nagarro and embarked on her first business onsite to Germany. I later joined her with our son, marking his first plane ride and making it a memorable Europe trip.\nWe never missed opportunities to connect with friends and family on various occasions. We were blessed with the arrival of our second child, Lovynash. I was at a workshop with a client when I received the news and made a quick trip home from Germany to meet the new addition to our family.\nLife was changing rapidly, bringing new joys and challenges along the way.\n15. Change is the only constant As life took its turns, we realized that growth often requires sacrifices. With the kids needing more attention, Ritu made the tough decision to take an extended maternity break and focus on being a homemaker for a while.\nAs the saying goes, \u0026ldquo;change leads to changes.\u0026rdquo; After a long tenure at Nagarro, I recognized the need to step out of my comfort zone and explore new horizons.\nThis led me to Thoughtworks, a company with an inspiring ideology, a people-centric approach, minimal hierarchy, and a culture focused on social values. The emphasis on thought leadership, community contributions, democratic decision-making, and agility was particularly appealing. I was eager to see how these values played out in practice and felt confident that it would be the place to evolve into the next version of myself.\nThe next few reflections will explain how this move became a great milestone for me.\n16. Understand your why, discover yourself Right after joining, I traveled across offices, met people, and learned about Thoughtworks\u0026rsquo; purpose. This immersion gave me immense learning and a deep understanding of the culture.\nI noticed that many Thoughtworkers write and share a lot, both internally and externally, which inspired me to start writing book reviews and self-reflections, eventually setting up my own blog, thinkuldeep.com. I discovered a love for reading, particularly books authored by Thoughtworkers.\nInitially, I was testing the waters at Thoughtworks, and Thoughtworks was testing me. I was able to apply my microservices knowledge in my first project, building an enterprise blockchain platform.
I quickly started engaging with the community, sharing technology learnings, and participating in events.\nI felt I traveled and learned about India more than ever before, and I was able to give more time to myself and my family.\n17. Crafting your impactful stories Thoughtworks\u0026rsquo; culture provided me with ample space to grow. I embraced TW\u0026rsquo;s engineering practices, which became second nature to me. I found a community in AR/VR and began sharing knowledge both internally and externally through various events with people like Raju, Neel and more. As I mentioned in my article \u0026ldquo;Two Thoughtful Years\u0026rdquo;, these experiences were transformative.\nI focused on building the brand within and outside Thoughtworks and participated in various leadership trainings. Thoughtworks instilled in me a deeper understanding of human behavior and biases, and brought a sense of peace. I detailed these changes in impactful change stories.\nWe learned Unity, C#, and XR development, established the XR Practices publication to share knowledge, and were there at most flagship events of Thoughtworks. Our efforts were well recognized by clients. We excelled in creating high-performing autonomous teams. Working with such talented people is always rewarding.\n18. Finding opportunities amid adversity Opportunities often arise from the most challenging situations, and nothing exemplifies this more than the Covid-19 pandemic. As we faced lockdowns and had to leave our routines behind, we had to embrace the change that came our way.\nI viewed this as a year of transformation—a change that would be remembered for a lifetime—and aimed to create positive memories. We adapted to celebrating remotely and seamlessly transitioned to remote work. I took the opportunity to work from my native place, spending quality time with my parents and family. It was a touching experience, cycling through nature and staying busy by renovating the home and exploring new innovations at work. We even celebrated our wedding anniversary by recreating our wedding scenes.\nThis period taught us to accept change, even when it\u0026rsquo;s difficult, and to find better ways to navigate the new normal. We were uncovering hidden talents and realized we could do much more than we ever imagined.\n19. Everything will pass\u0026hellip;good or bad. As we prepared for the post-Covid world, the focus shifted to making enterprises pandemic-proof. My return to the workplace coincided with another Covid wave, but this time, I was better prepared and ready to realize more of my potential.\nI contributed significantly to the community, delivering webinars for students and universities, and enhancing my skills in technology, business, and leadership. My articles and contributions were featured on renowned platforms such as the Economic Times, Tech.de, Thoughtworks Insights, NASSCOM, XConf, SIH, Ministry of Electronics and Information Technology, and more.\nI became an accidental product manager, collaborating with experts like Dinker Charak and Sachin Dharmapurikar. Together, we explored innovative product management and its lifecycle, discussing what truly matters and should be measured, adhering to 4KM and EEBO concepts. This enriched my technical skills and helped me become a true Tech@Core of Business persona.\nAs the world started opening up post-pandemic, we also hit the Taj and nearby hills.
During this period, my father retired from government service, leaving great work-life lessons for all, and I dedicated a new, larger home in Gurgaon to him on this occasion. The process of making over this new home provided lessons akin to corporate learnings, marking yet another chapter of growth and adaptation.\n20. Directing the technology world as a professional author and innovator The world has moved past the COVID pandemic, but its impacts linger. Some struggle to return to the old ways of working or hybrid models, while others explore new approaches. I can say these new ways are bringing better diversity and inclusion; my wife resumed her professional career at DesignersX, as remote working lets her work at hours comfortable for her.\nI was presented with the opportunity to take on the additional role of Engineering Director for Thoughtworks Gurugram Engineering Center, working alongside Sumeet Moghe, who pioneers async-agile ways of working; his book \u0026lsquo;The Async-First Playbook\u0026rsquo; is a masterpiece. We practiced the async-first approach, challenging many of my established beliefs about brainstorming, in-person meetings, and agile practices. This shift allowed me to free up my calendar significantly, fostering a habit of deep work over shallow tasks and focusing on value-driven work rather than busy work.\nThese experiences enabled me to manage my time efficiently and share my guiding thoughts through a book, \u0026ldquo;My Thoughtworkings\u0026rdquo;. I also became a professional author by writing \u0026ldquo;Exploring the Metaverse : Redefining Reality in the Digital Age\u0026rdquo; with BPB publisher, now available globally on Amazon and BPB online. I also reached the milestone of contributing to 100 articles and 50 events, and shared how to overcome your obstacles. I now have a patent granted for my work. These accomplishments have been incredibly fulfilling and will continue to propel me forward.\nWhat\u0026rsquo;s Next? With this, I conclude my 20 reflections on the past 20 years of work. The future, however, remains uncertain for everyone. We can never be 100% sure even about our next breath, so plans are just plans. Nothing is permanent—not us, the things around us, technology, businesses, or the work we do. Everything will change with time. Today, we are exploring ways to use AI, and tomorrow, AI will explore how to use us. It\u0026rsquo;s crucial to stay aware of technology and use it responsibly and ethically. Living fully in the present is the best way to hope for a better future.\nBy the way, I am now writing another book, \u0026ldquo;Jagjivan: Living Larger Than Life\u0026rdquo;, based on my grandfather\u0026rsquo;s life. I believe he lived life to the fullest, and we can learn a lot from him.\nKeep in touch and stay tuned! Thank you.\n","id":30,"tag":"communication ; motivational ; consulting ; blogs ; learnings ; takeaways ; talks ; life ; tips ; writing ; reading ; general ; selfhelp ; change ; habit ; leadership ; java ; javaone ; nagarro ; impact ; Covid19 ; opportunity ; thoughtworks","title":"20 reflections of 20 years","type":"post","url":"/post/20-of-20/"},{"content":"Artificial Intelligence has made significant strides with the development of Large Language Models (LLMs) or Generative AI, which are trained to understand and respond to human language naturally. Tools like OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s Co-pilot have become the go-to resources for \u0026ldquo;Ask me anything\u0026rdquo; queries.
These innovations have been touted as potential Google Search killers, disrupting the way we search for information, communicate with machines, and receive machine responses. While AI has become ubiquitous and inevitable in most technology domains, the spotlight on Generative AI has led many to believe that technologies like XR (AR, VR, Metaverse, IoT, Blockchain) have taken a backseat or even become obsolete.\nIn this article, we will explore how AI is evolving beyond Generative AI to encompass Generative XR, leveraging real-world data to create a new era of immersive experiences faster than anticipated.\nFirst, let’s delve into what XR entails and then examine the interplay between AI and XR.\nExtended Reality (XR) Extended Reality (XR) is a dynamic fusion of technologies that redefine our reality through artificial means, offering innovative digital experiences:\n Augmented Reality (AR): Through devices like smartphones, smart glasses, and AR headsets, AR enhances our real-world environment by overlaying digital content such as information, graphics, or animations onto physical surroundings. Virtual Reality (VR): VR immerses users in entirely virtual environments using VR headsets. These devices create a fully simulated reality, transporting users to places and scenarios that are entirely computer-generated. Mixed Reality (MR): MR merges the virtual and physical worlds by integrating digital content into the real environment. MR devices enable users to interact with holograms and digital objects within their physical space. XR also encompasses advancements such as smart mirrors and 3D projection displays. Beyond visuals, XR offers multisensory interactions with devices like smart gloves, bodysuits, and wearables, paving the way for a new era of spatial computing.\nXR continues to evolve with the seamless integration of AI within all aspects of XR—devices, services, operating systems, and more. Let\u0026rsquo;s explore this further in the following sections.\nAI powering XR AI has evolved to provide insights previously inaccessible and has the potential to outperform humans in specific tasks. AI powers XR solutions by extending human senses such as vision, hearing, touch, and feel, enabling the brain to perceive the artificial as real.\nAs I explain in my book \u0026ldquo;Exploring the Metaverse: Redefining Reality in the Digital Age\u0026rdquo;, AI powers XR in multiple ways:\n Environment and Motion Tracking: XR immersive experiences require 6DOF tracking and movement, and for that, devices are equipped with numerous sensors, computing capabilities, computer vision algorithms, and AI-backed estimation and learning. Simultaneous Localization and Mapping (SLAM) algorithms enable the generation of a virtual representation of the environment and its ongoing tracking, and AI further detects objects, images, and plane surfaces. It involves cameras, accelerometers, gyroscopes, magnetometers, and lidar. AI can now estimate depth from the camera feed alone, without even lidar data, allowing for lighter and more natural XR devices (a small sketch of this idea follows this list). Augmenting virtual objects in the real environment, considering occlusion, collision, and gravity, relies on AI and physics engines. Face and Body Tracking: Digital identities, such as avatars or digital replicas, are becoming integral to immersive environments. AI powers these by enabling face detection, expression recognition, and body tracking. It senses body expressions and hand movements, making interactions in XR more natural and immersive. User Interactions: XR interactions now incorporate multiple input and output modalities, including head movement, eye tracking, hand tracking, gesture detection, voice inputs, and even brain inputs via thought. AI is key to enabling these advanced interaction methods. Digital Twins and Simulation: Building simulations and digital twins of real environments and humans is an important use case for training and education in XR. Navigating within a simulated environment requires dynamic actions such as walking, running, and driving, and needs randomization to appear realistic. AI can bring this dynamicity, which we\u0026rsquo;ll discuss further in the next section on scene understanding.
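To make the depth-estimation point above concrete, here is a minimal, illustrative sketch using the open-source MiDaS monocular depth model via PyTorch Hub. The small model variant chosen here and the sample image path are assumptions for illustration only, not a reference to any specific device pipeline mentioned in this article.

```python
# A minimal sketch: relative depth from a single RGB frame, no lidar required.
# Assumes: pip install torch timm opencv-python; "frame.jpg" is any test image.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")  # lightweight variant
midas.eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

img = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)
batch = midas_transforms.small_transform(img)  # resize + normalize for the model

with torch.no_grad():
    pred = midas(batch)
    # Upsample the prediction back to the original frame resolution.
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2], mode="bicubic", align_corners=False
    ).squeeze()

print(depth.shape)  # one relative-depth value per pixel
```

The output is relative rather than metric depth, which is already enough for effects like occlusion and virtual-object placement on camera-only devices.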
AI advancements continue to empower XR, enhancing its capabilities. It\u0026rsquo;s important for businesses to recognize that AI does not diminish the importance of XR but instead enhances it. In the next section, we\u0026rsquo;ll explore how the hype around Generative AI can further propel XR technology.\nGenerative XR Generative AI has already revolutionized text content generation and is now extending its impact to other forms of content. While XR is already used across various industries, its expansion relies on generating relevant, dynamic content. This intersection of XR and Generative AI, which I call Generative XR, is key to creating immersive, realistic experiences.\n AI for XR Content Generation: To make XR a reality, we need data at scale. This can be achieved by replicating physical environments and creating digital twins of the real world. Tools like RoomPlan, RealityScan, photogrammetry scans, and hardware-assisted 3D scans like Matterport have been successful in this area. Generative AI can introduce dynamism into XR environments through natural language prompting. Not just for gaming and entertainment: with concepts like ArchiGAN, AI can generate architectural models and further create detailed representations of rooms, large buildings, roads, and adaptive floor plans, even for fictional cities and malls. Deep Training and Testing on 3D Simulation: XR-based simulations, such as those offered by Carla and NVIDIA’s Isaac Sim, make autonomous vehicle and robotics testing and training more affordable and effective. Machine learning (ML) models need detailed structural data for training, which is facilitated by tools like PointNet, VoxelNet, and ShapeNet (a toy sketch of the PointNet idea follows this section). PointNet classifies and segments 3D point clouds, VoxelNet divides point clouds into 3D voxels, and ShapeNet offers a vast library of annotated 3D shapes for training ML models. AI-assisted 3D Modeling: Integrating AI into the XR content pipeline involves handling various inputs (like sketches and photographs) and using advanced processing capabilities (driven by edge, 5G, and cloud technologies) to generate content. This content can be reviewed and optimized by humans or AI systems before final approval and publication. For instance, Convrse.AI focuses on AI-based content optimization. For XR and the metaverse to become truly mainstream and deliver significant business value, AI-generated content will play a vital role. As we continue to evolve, we will encounter new opportunities and possibilities. It is up to us in the industry to seize these opportunities and create extraordinary experiences.
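For readers wondering how ML models actually consume unstructured point clouds like those mentioned above, the core PointNet insight is a small network applied to every point independently, followed by an order-invariant max-pool. Below is a toy, self-contained sketch of that idea; the layer sizes and class count are arbitrary, and this is not the published architecture.

```python
# Toy sketch of the PointNet idea: a shared per-point MLP + symmetric max-pool,
# which makes the features invariant to the ordering of the input points.
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Conv1d with kernel size 1 == the same MLP applied to every point.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 256, 1), nn.ReLU(),
        )
        self.head = nn.Linear(256, num_classes)

    def forward(self, pts):                    # pts: (batch, 3, num_points)
        feats = self.point_mlp(pts)            # (batch, 256, num_points)
        global_feat = feats.max(dim=2).values  # order-invariant pooling
        return self.head(global_feat)          # per-cloud class logits

cloud = torch.randn(1, 3, 1024)     # e.g. one scanned object as 1024 XYZ points
print(TinyPointNet()(cloud).shape)  # torch.Size([1, 10])
```

The max-pool is the key trick: because it ignores ordering, the same object scanned with its points in any order produces the same global feature.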
While generative AI is addressing some of the key content challenges, there remains a level of skepticism regarding the trustworthiness of GenAI data, even as it begins to reveal more accurate insights. However, when XR becomes the interface for AI, this trust can be significantly enhanced. For instance, there is a profound difference between discussing medical treatment through a textual bot interface versus interacting with a digital doctor\u0026rsquo;s avatar in a familiar, comfortable environment. Recent advancements, such as OpenAI’s GPT-4 and Google’s Project Astra, are pioneering spatially aware AI, with XR playing a crucial role in establishing trust in these technologies. A similar development is seen in Microsoft\u0026rsquo;s announcement of bringing volumetric apps to Meta Quest.\n As we move forward, the synergy between AI and XR will unlock new dimensions of immersive and intuitive experiences. Businesses and individuals alike must embrace these advancements to fully realize their potential. By leveraging Generative XR, we stand on the cusp of a new era where the boundaries between the digital and physical worlds continue to blur, creating unprecedented opportunities for innovation and connection.\nThis article was originally published at XROM and is republished here with minor updates.\n","id":31,"tag":"xr ; ar ; vr ; article ; event ; xrom ; thoughtworks ; AI ; 5G ; practices ; iot ; expert insights ; GenAI ; 3d ; technology ; optimization ; xr content ; convrse.ai ; genXR","title":"Generative XR – Space aware AI bringing new era of immersion","type":"post","url":"/post/generative-xr-space-aware-ai-bringing-new-era-of-immersion/"},{"content":"AI is pervasive, whether we sense it or not. Staying updated has become essential for survival. We are increasingly immersed in artificially generated worlds, with the internet rapidly evolving into what is known as the metaverse. In my book, \u0026ldquo;Exploring the Metaverse\u0026rdquo;, I delve into its various forms and interpretations, use cases and challenges, and the way forward.\nDesignersX, a company specializing in software development, invited me to speak about my book and discuss how AI and the metaverse will shape the future.\nThe interactive session was fantastic, covering a range of topics. Here are the key highlights:\n🤔 Recap - Forward Looking Reflection We began by recalling my previous session at DesignersX, Forward Looking Reflection. We had discussed the importance of reflection and how to move forward from there.\nIt was gratifying to see that DesignersX\u0026rsquo;s leadership took the collected insights seriously, not only growing their team but also investing in infrastructure and moving to a beautiful new office space. They have implemented interesting routines and processes, fostering a more connected way of working.\nIn the previous session, we also explored what it means to be a consultant, the essence of becoming an agile IT organization, and our definitions of success, growth, and satisfaction for both individuals and the organization.\nContinuous learning and sharing are crucial aspects of this journey.\n🪓 Sharpen the Axe Learning is essential for survival. I shared the story of the woodcutter to illustrate the importance of regularly sharpening our axe—enhancing our strengths and efficiency. Without continuous improvement, productivity declines despite our best efforts and intentions.\n Writing this book was my way of sharpening my axe. Find your way to sharpen the axe.\n📕 The Book - \u0026ldquo;Exploring the Metaverse\u0026rdquo; To stay aware of current trends, I wrote \u0026ldquo;Exploring the Metaverse\u0026rdquo;.
Although the term \u0026ldquo;metaverse\u0026rdquo; has been around for a while, its meaning is still not fully understood. It\u0026rsquo;s often defined to suit individual or organizational narratives. I aimed to demystify the metaverse, providing its building blocks, use cases, and the challenges it brings. The book also explores how AI is propelling us toward the future. Each concept is thoroughly covered in dedicated chapters.\nA detailed synopsis of the book is available here. The book can be purchased on all Amazon domains, including Amazon.in and Amazon.com. It is also available at BPB India and BPBOnline.com with a 20% discount using the code \u0026ldquo;bpbkuldeep.\u0026rdquo;\n💥 Collective Action - Ethical and Responsible Adaptation Learning is a collective effort that matures through continuous sharpening. My book emphasizes the importance of ethical and responsible adaptation of the metaverse and emerging technologies in general. While these technologies offer great use cases and convenience, it is crucial to use them wisely and purposefully. Every technological advancement comes with a cost, and we must be mindful of it.\nIndustries, academic institutions, government bodies, and individuals must understand the technology, create awareness, define standards and practices, and contribute to its betterment.\n🙏 Thanks DesignersX for hosting such a wonderful session It was wonderful to join such an engaging and attentive audience. I was happy to launch and introduce my book with this group.\nKeep learning, Keep Sharing.\n","id":32,"tag":"DesignersX ; guest ; lecture ; ar ; vr ; mr ; xr ; metaverse ; available now ; launch ; nft ; takeaways ; technology ; ai ; challenges ; usecases ; future ; book ; event ; speaker ; reflection ; technology ; genXR ; genAI","title":"Guest Lecture: AI and Metaverse reshaping the future","type":"event","url":"/event/guest-lecture-ai-and-metaverse-reshape-the-future/"},{"content":"In our day-to-day work, we often create methods, processes, approaches, or features to address specific needs and challenges. Sometimes, these solutions turn into significant innovations, even if we don\u0026rsquo;t immediately recognize them as such. This is also true in our professional work setup, where we may invent without fully realizing it.\nI\u0026rsquo;m thrilled to share that a new patent has been granted to Lenovo for their groundbreaking work in the ThinkReality XR product line, and I am honored to be a co-inventor on this work.\nPatent No : US 11,988,832 B2 This patent, titled \u0026ldquo;Concurrent Rendering of Canvases for Different Apps as Part of 3D Simulation\u0026rdquo;, introduces innovative methods for managing 2D applications simultaneously within a 3D environment on AR or VR devices. The patent encompasses various techniques for positioning and interacting with 2D apps in a 3D space, including programmatically transforming 3D inputs into 2D applications and vice versa.\nThis invention was developed for Lenovo\u0026rsquo;s award-winning mixed reality smart glasses, the ThinkReality A3.\n This innovation allows enterprises to use multiple phone applications on XR devices simultaneously and interact with them in new, innovative ways without altering the 2D phone apps. Part of this solution is also available in the ThinkReality VRX, which supports various enterprise XR features such as Device Enrollment to Intune and VMWare.\nInventors: Kuldeep Singh, Raju Kandaswamy, Andrew Hansen, Mayan Shay May-Raz, Cole Heiner
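To give a flavor of what transforming 3D inputs into a 2D app can involve, here is a simplified, hypothetical illustration of the general idea (not the patented method itself): it maps a 3D raycast hit on a virtual canvas quad to the 2D pixel coordinates that an unmodified phone app could consume as a touch event. All names, vectors, and dimensions below are made up for the example.

```python
# Illustrative sketch: map a 3D hit point on a virtual app canvas to 2D pixels.
import numpy as np

def hit_to_pixels(hit, origin, right, up, size_m, size_px):
    """hit/origin: 3D points; right/up: unit vectors spanning the canvas plane;
    size_m: canvas (width, height) in meters; size_px: app resolution in pixels."""
    local = np.asarray(hit, float) - np.asarray(origin, float)  # canvas-local offset
    u = np.dot(local, right) / size_m[0]   # 0..1 across the canvas width
    v = np.dot(local, up) / size_m[1]      # 0..1 across the canvas height
    if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
        return None                        # the ray missed the canvas
    # Flip v because screen y grows downward in 2D app coordinates.
    return int(u * size_px[0]), int((1.0 - v) * size_px[1])

# A 1m x 0.6m canvas showing a 1080x648 phone app, facing the user.
px = hit_to_pixels(
    hit=(0.25, 1.45, 2.0), origin=(0.0, 1.0, 2.0),
    right=(1.0, 0.0, 0.0), up=(0.0, 1.0, 0.0),
    size_m=(1.0, 0.6), size_px=(1080, 648),
)
print(px)  # (270, 162) -> inject as a touch event into the 2D app
```

The reverse mapping works the same way: a 2D pixel position plus the canvas origin and basis vectors reconstructs a 3D point, so the app can render feedback back into the scene.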
This browser does not support PDFs. Please download the PDF to view it: Download PDF.\n ☞ USPTO #17/818,346 ☞ JUSTIA #20240045207 ☞ Granted #11988832\n Also covered by NIT Kurukshetra Alumni Association\nRefer here for more of my patents\n","id":33,"tag":"about ; invention ; me ; profile ; xr ; ar ; vr ; mr ; patents ; Lenovo ; ThinkReality ; Android ; Metaverrse ; design ; user experience ; nit","title":"Patent Granted: Manage multiple 2D apps in a 3D layout","type":"post","url":"/post/patent-granted-xr-1/"},{"content":"44th Anniversary of Parents - Jun, 2024 Book in parent\u0026rsquo;s Hand - May, 2024 Vrindavan - Mar, 2024 House warming ceremony and RakshyaBandhan - Aug, 2023 Jim corbett trip - Jun, 2023 Lakshya\u0026rsquo;s visit to Thoughtworks Gurugram - May, 2023 16th Wedding anniversary - Feb, 2023 Father Retirement Speech - Jun, 2022 ☞ More Details Welcoming Aarav Remotely! - Apr, 2021 14th Wedding anniversary - Feb, 2021 Keep Moving - Jan, 2021 3rd Birthday Lovyansh - Jun, 2020 ☞ More Details ","id":34,"tag":"me ; profile ; thoughtworks ; change ; moments ; video ; motivational ; learnings ; life ; experience ; about","title":"Some moments","type":"about","url":"/about/moments/"},{"content":"My book \u0026lsquo;Exploring the Metaverse\u0026rsquo; is now available in stores globally, including Amazon and BPB Online. It features forewords and testimonials from a diverse group of industry leaders.\nOne such testimonial comes from Kanchan Ray, Chief Technology Officer at Nagarro. We collaborated on building and conceptualizing various cutting-edge proof-of-concepts using emerging technologies. His support and advice were always invaluable.\n🙏 I extend my heartfelt thanks to Kanchan for reviewing early versions of the book, sharing his valuable insights, and writing the following foreword:\n \u0026ldquo;It is with great pride that I introduce Kuldeep Singh\u0026rsquo;s upcoming book, Exploring the Metaverse: Redefining reality in the digital age. Having collaborated with Kuldeep on technological innovation, especially in Extended Reality, I have witnessed his profound insights and futuristic perspective. This book serves as a timely compass in our rapidly evolving digital landscape, expertly navigating the intricate layers of the metaverse. Kuldeep seamlessly combines technological expertise with a storyteller\u0026rsquo;s finesse, offering a comprehensive narrative and real-world use cases. The book is an invitation to a transformative journey, catering to both seasoned technology professionals and newcomers. Beyond speculation, it provides practical insights into the metaverse\u0026rsquo;s impact on industries, societies, and individuals. As we witness the digital revolution, Kuldeep\u0026rsquo;s guide explores augmented and virtual realities, blockchain, and artificial intelligence, unraveling the metaverse\u0026rsquo;s building blocks. This work is a testament to Kuldeep\u0026rsquo;s passion for innovation, demystifying complex technology topics and issuing a call to action to rethink our digital reality. I am confident that readers will find inspiration and enlightenment within these pages, echoing the privilege I\u0026rsquo;ve had collaborating with Kuldeep.\u0026rdquo; — Kanchan Ray, Chief Technology Officer, Nagarro\n 🛍️ Get Your Copy Today!
🇧🇪 Amazon Belgium 🇧🇷 Amazon Brazil 🇨🇦 Amazon Canada 🇫🇷 Amazon France 🇩🇪 Amazon Germany 🇮🇳 Amazon India 🇮🇹 Amazon Italy 🇯🇵 Amazon Japan 🇲🇽 Amazon Mexico 🇳🇱 Amazon Netherlands 🇵🇱 Amazon Poland 🇸🇦 Amazon Saudi Arabia 🇸🇬 Amazon Singapore 🇪🇸 Amazon Spain 🇸🇪 Amazon Sweden 🇹🇷 Amazon Turkey 🇦🇪 Amazon UAE 🇬🇧 Amazon UK 🇺🇸 Amazon US 👉 Rest of the world \n20% Off on BPB Online\n☞ BPB Online Worldwide ☞ BPB Online India ☞ Free Preview\n☞ Exploring the Metaverse - Synopsis\nMore about my books here\n","id":35,"tag":"xr ; ar ; vr ; metaverse ; book ; available now ; event ; buy ; launch ; meta ; blockchain ; NFT ; IOT ; AI ; nagarro ; event ; MR ; testimonial","title":"🙏 Exploring the Metaverse | Testimonial 3","type":"post","url":"/post/exploring-the-metaverse-testimonial-3/"},{"content":"The VRAR Association (VRARA) is an international organization designed to foster collaboration between XR, mixed reality, spatial computing, and artificial intelligence (AI) solution providers and end-users; it accelerates growth, fosters research and education, helps develop industry best practices, connects member organizations, and promotes the services of member companies.\nVRARA has various industry-focused committees, and I am invited to speak at an event by the Energy Committee.\nThanks to Hemanth Satyanarayana and Susan Spark for the invitation.\n Harnessing the Metaverse: Power, Pitfalls, and Responsibility I share an insightful exploration of the metaverse as a powerful force. This session delves into highlights of \u0026ldquo;Exploring the Metaverse\u0026rdquo; and how we can leverage its potential for positive change while acknowledging potential downsides. Kuldeep will explain the purpose of writing this book and share the way forward to make the metaverse a tool for good.\nGoogle Meet link - http://meet.google.com/yzb-ensu-mca\nAt - May 29 Wed 11:30am EST/15:30 GMT/9:00 PM IST\nExploring the Metaverse: I delve into the purpose behind penning \u0026ldquo;Exploring the Metaverse\u0026rdquo;, aiming to shed light on the often misunderstood concept of the metaverse and its associated technologies.\nThe book unfolds in five distinct parts,\n commencing with an exploration of the evolution of technology and human interactions, gradually leading the reader toward the concept of the metaverse as the next frontier of the internet. Subsequent sections delve into the technological advancements propelling us closer to this futuristic realm, including XR, AI, IoT, Cloud, 5G/6G, and Blockchain as the building blocks. A dedicated part covers the diverse applications of the metaverse across various business and technological domains. Another part of the book underscores the concerns of the metaverse around privacy, identity, and sustainability. It highlights the pressing need for standards and practices within the metaverse ecosystem, emphasizing the importance of collective initiatives to foster an ethical and responsible metaverse. ☞ Amazon.com ☞ Amazon.in ☞ BPB Online Worldwide ☞ BPB Online India ☞ Free Preview ☞ Exploring the Metaverse - Synopsis\nThe Session It was a nice session. Thanks to everyone for joining, and to Susan Spark for the introductions.\nFor those interested, here are the slides from my presentation: This browser does not support PDFs.
Please download the PDF to view it: Download PDF.\n Cross posting\n ","id":36,"tag":"event ; speaker ; xr ; practices ; webinar ; discussion ; vrara ; book ; metaverse ; ar ; vr ; iot ; web3 ; blockchain ; AI ; future ; thoughtworks","title":"Harnessing the Metaverse at VRARA Energy Committee Event","type":"event","url":"/event/exploring-metaverse-vrara-energy-committee/"},{"content":"XREvolve Summit serves as a comprehensive platform for exploring the latest advancements, applications, applied sciences, and progress within AR, VR, and the broader domains of the metaverse such as spatial computing and mixed reality. It is backed by the Gatherverse community, which advocates humanity-first standards in accessibility, education, equality, community development, safety \u0026amp; privacy, wellness, and ethics within the metaverse and emerging technologies.\nThis year\u0026rsquo;s focus is \u0026ldquo;XR: Interface of AI\u0026rdquo;, which opens up extraordinary and near-limitless possibilities for humanity.\nI am excited to participate in the Roundtable Panel discussion on \u0026ldquo;Evolving into Tomorrow, Today: Enter the Metaverse\u0026rdquo;.\nSession Details Date and Time - (May 22, 2024, 10:30 AM PST) Roundtable Panel: Evolving into Tomorrow, Today: Enter the Metaverse Moderator: Lynn Gitau Panelists: May Masson - VRARA Chapter President - Finland, Head of Developers Community - Immersal Paige Dansinger - Senior Director of Metaverse and community development BSDXR, and Virtual World Society Josephus KIlale M - Exam Invigilator - EDTECH, VICTVS GLOBAL Ganesh Raju - CEO, Akshaya.io Arome Ibrahim - Founder \u0026amp; Program Director, Immersive Tech Africa Kuldeep Singh - Me Reserve Your Seat! Schedule - https://gatherverse.org/xre/schedule/ Speakers - https://gatherverse.org/xre/speakers/ Reserve Your Seat : https://www.tickettailor.com/events/gatherverse/1170843 Live Streaming here Day 1 - May 21, 2024\n Day 2 - May 22, 2024 (My Session)\n My Book, Resonating Very Well The topic is very close to my book \u0026ldquo;Exploring the Metaverse\u0026rdquo;, which aims to shed light on the often misunderstood concept of the metaverse and its associated technologies.\n☞ Amazon.com ☞ Amazon.in ☞ BPB Online Worldwide ☞ BPB Online India ☞ Free Preview ☞ Exploring the Metaverse - Synopsis\n","id":37,"tag":"gatherverse ; event ; speaker ; xr ; talk ; takeaways ; standards ; policy ; metaverse ; ar ; vr ; iot ; web3 ; AI ; future ; opensource ; summit","title":"Speaker - Gatherverse XREvolve Summit 2024","type":"event","url":"/event/gatherverse-xrevolve-summit/"},{"content":"My book \u0026lsquo;Exploring the Metaverse\u0026rsquo; is now available in stores globally, including Amazon and BPB Online. It features forewords and testimonials from a diverse group of industry leaders.\nOne such testimonial comes from Dr. Rebecca J Parsons, former Chief Technology Officer at Thoughtworks. It has been a privilege to receive guidance from Rebecca at numerous technology strategy events, where her insights on responsible technology usage and our collective responsibility towards sustainability and technological adaptation have been invaluable.\n🙏 I extend my heartfelt thanks to Rebecca for reviewing early versions of the book, sharing her valuable insights, and writing the following foreword:\n \u0026ldquo;Our industry is ripe with hype and promises for the next new thing that will solve all our problems, make everything easier, engage our customers better, and much more.
However, these promises do not often result in the large-scale transformation of how we deliver value to our customers. It is often overlooked that many foundational technologies do end up resulting in important changes to how we deliver systems. We continue to see advances in the underlying technologies for the metaverse, with the most prominent being virtual reality and their virtual worlds. This book serves as a guidepost to some of those technologies and their applications within our systems. Kuldeep’s approach to this topic is wide ranging, where he begins with an analysis of the technology revolutions that have come before metaverse, beginning with the first industrial revolution. He tackles the complex issue of what exactly is the metaverse, or more precisely to me, a metaverse. The lack of an agreed definition plagues discussions of the metaverse, and Kuldeep tackles this by stating his own and justifying it. The exploration of the supporting technologies, such as AI and VR hardware, grounds the subsequent sections where he explores a variety of applications of the technology to both more and less obvious scenarios. In the later sections, he addresses the concerns with utilizing features of metaverse, including privacy, safety, security, and sustainability. All technologies come with a cost and our job as responsible technologists is to evaluate them, and do what we can to mitigate anticipated costs. We should also work to expand our horizons and uncover costs and risks that aren’t immediately obvious. These technologies will become common as we understand more about deriving value from them, through the use cases explored in the earlier sections of this book. We can be responsible technologists and still experiment with deriving value from these technologies. Such experimentation is essential to advance our capabilities in the underlying metaverse technologies. Kuldeep’s experience of working with these technologies grounds the section on the practicalities. He appropriately highlights the critical role standards should play in allowing interoperability. He understands and communicates to the reader what it takes to make these technologies deliver value. While the science fiction version of the metaverse may not come to pass anytime soon, technology will continue changing how we approach designing systems to deliver value to our customers. Kuldeep’s book provides a broad roadmap to not only the technologies and their uses, but also the practical issues that need to be addressed. This roadmap guides us to our own roadmaps for experimentation in understanding how metaverse technologies will impact our organizations today and in the future.\u0026rdquo; — Dr. Rebecca J Parsons Chief Technology Officer Emerita, Thoughtworks\n 🛍️ Get Your Copy Today!
🇧🇪 Amazon Belgium 🇧🇷 Amazon Brazil 🇨🇦 Amazon Canada 🇫🇷 Amazon France 🇩🇪 Amazon Germany 🇮🇳 Amazon India 🇮🇹 Amazon Italy 🇯🇵 Amazon Japan 🇲🇽 Amazon Mexico 🇳🇱 Amazon Netherlands 🇵🇱 Amazon Poland 🇸🇦 Amazon Saudi Arabia 🇸🇬 Amazon Singapore 🇪🇸 Amazon Spain 🇸🇪 Amazon Sweden 🇹🇷 Amazon Turkey 🇦🇪 Amazon UAE 🇬🇧 Amazon UK 🇺🇸 Amazon US 👉 Rest of the world \n20% Off on BPB Online\n☞ BPB Online Worldwide ☞ BPB Online India ☞ Free Preview\n☞ Exploring the Metaverse - Synopsis\nMore about my books here\n","id":38,"tag":"xr ; ar ; vr ; metaverse ; book ; available now ; event ; buy ; launch ; meta ; blockchain ; NFT ; IOT ; AI ; thoughtworks ; event ; MR ; testimonial","title":"🙏 Exploring the Metaverse | Testimonial 2","type":"post","url":"/post/exploring-the-metaverse-testimonial-2/"},{"content":"Metaverse India Policy and Standards (MIPS) is a pioneering initiative led by the Experiential Technology Innovation Centre (XTIC), serving as India\u0026rsquo;s premier research and product innovation hub for Virtual Reality, Augmented Reality, Mixed Reality, and Haptics, hosted at IIT Madras.\nIt\u0026rsquo;s an honor to be a member of MIPS and showcase my book Exploring the Metaverse: Redefining Reality in the Digital Age, which aligns closely with the initiative\u0026rsquo;s objectives.\nI extend my gratitude to Thriveni Palanivelu for orchestrating a seamless event. Here\u0026rsquo;s a recap of the event:\nIntroducing MIPS Members The meeting commenced with a round of introductions, revealing a remarkable diversity within the group. We were joined by esteemed academicians from premier institutes, technologists representing renowned enterprises, founders of emerging tech startups, legal experts from the Supreme Court, distinguished individuals with experience in law enforcement as Director Generals of Police, policy makers, and more. It was truly inspiring to witness such a talented pool of individuals from across India coming together in one cohesive group!\nIntroduction to MIPS and XTIC Prof. M Manivannan began by tracing the evolutionary journey of XTIC at IIT Madras and its expanding influence beyond, highlighting various initiatives and celebrating remarkable achievements in research and innovation. He emphasized XTIC\u0026rsquo;s collaborative approach, bridging the gap between industry and academia.\nOne such initiative was CAVE (Consortium for AR VR Engineering), marking a milestone as India\u0026rsquo;s inaugural consortium for XR innovations, fostering collaboration between Indian academia and industries. Prof. Manivannan articulated a grand vision for CAVE as part of XTIC\u0026rsquo;s broader mission: to establish India as the XR Corridor for the world by 2047. More details on this vision can be found here.\nTo further engage with these initiatives, join CAVE or explore the XTIC Chronicle Newsletter and the XR In India White Paper:\n Join CAVE XTIC Chronicle - Newsletter XR In India - White Paper After Prof.
Mani\u0026rsquo;s insightful presentation on XTIC\u0026rsquo;s journey, he graciously handed over the floor to me to present my book.\nBook Review: Exploring the Metaverse In my presentation, I delved into the purpose behind penning \u0026ldquo;Exploring the Metaverse\u0026rdquo;, aiming to shed light on the often misunderstood concept of the metaverse and its associated technologies.\nThe book unfolds in five distinct parts,\n commencing with an exploration of the evolution of technology and human interactions, gradually leading the reader toward the concept of the metaverse as the next frontier of the internet. Subsequent sections delve into the technological advancements propelling us closer to this futuristic realm, including XR, AI, IoT, Cloud, 5G/6G, and Blockchain as the building blocks. A dedicated part covers the diverse applications of the metaverse across various business and technological domains. Another part underscores the concerns of the metaverse around privacy, identity, and sustainability. The final part highlights the pressing need for standards and practices within the metaverse ecosystem, emphasizing the importance of collective initiatives to foster an ethical and responsible metaverse. It was heartening to observe the alignment between the MIPS initiative and the call to action outlined in my book, highlighting the collective responsibility of academia, industry, governing bodies, and individuals to ensure a safe transition into the future. The diversity of MIPS members aptly reflected this collective endeavor.\nFor those interested, here are the slides from my presentation: This browser does not support PDFs. Please download the PDF to view it: Download PDF.\n ☞ Amazon.com ☞ Amazon.in\nAs an added incentive, BPB has graciously extended a 20% discount to the XTIC initiative. Take advantage of this offer and delve into the realm of the metaverse!\n☞ BPB Online Worldwide ☞ BPB Online India ☞ Free Preview\n☞ Exploring the Metaverse - Synopsis\nOpen Discussions and Next Steps In our open discussions, we delved into various aspects of the MIPS initiative, with members contributing valuable insights on execution, operations, and expected outcomes. It\u0026rsquo;s great to note that MIPS has garnered a strong start, attracting like-minded individuals eager to contribute. Moving forward, we will consolidate the inputs gathered and chart the course of action for the future. Additionally, we aim to broaden the scope of the group to enhance its reach and impact.\n","id":39,"tag":"thoughtworks ; event ; speaker ; takeaways ; xr ; community ; takeaways ; practices ; standards ; policy ; metaverse ; ar ; vr ; iot ; web3 ; blockchain ; AI ; future ; india ; meity ; IIT ; XTIC ; opensource","title":"MIPS kick off event, XTIC, IIT Madras","type":"event","url":"/event/metaverse-india-policy-and-standards/"},{"content":"My book \u0026lsquo;Exploring the Metaverse\u0026rsquo; was featured on Industry4.0. It is available in stores, including Amazon and BPB Online.\n The metaverse, a loosely defined term, has gone from a virtual game world to the next internet and sparks conversations on technology’s potential. Despite eye-rolling at terms like metaverse and eXtended Reality (XR), immersive technologies are silently becoming part of our daily life through social media, virtual collaboration, and digitized experiences. We seamlessly embrace digital tools, scanning codes, engaging in virtual trials, and altering reality through digital filters.
In this evolving landscape, computer vision and AI consistently redefine our perception of reality.\nStrategic investments are crucial for technological evolution, but a delicate balance is needed. Even though businesses hesitate to invest without a clear outcome, an initial investment is often required to ascertain those very outcomes. Technology adoption, following the Diffusion of Innovations theory, varies — some invest early, others wait. Hype can drive uninformed investments, emphasizing the need to separate hype from reality. Identifying promising opportunities in the metaverse is vital, demanding careful consideration and awareness of potential pitfalls.\nUnderstanding our responsibility to adapt to new technologies is crucial in a dynamic era of ever-evolving technology and human interactions. With this recognition and a sense of responsibility, the author embarked on the journey of writing this book, aiming to provide insights that distinguish between exaggerated expectations and the genuine opportunities of the metaverse, and to guide readers toward a balanced and informed perspective on its promises and challenges. This book moves across 16 chapters, neatly organized into five parts.\nPart 1 — Introduction: Unveiling the Metaverse Chapter 1: Exploring the Metaverse Origin — explores the evolution of technology and human interactions, from industrial revolutions to the digital web, and traces the origin of the metaverse. Chapter 2: Metaverse: Various Forms and Interpretations — examines the expert perspectives and the loosely defined nature of the metaverse and tries to define it with its characteristics. The chapter also addresses the myths and realities of the evolving metaverse landscape. Part 2 — Metaverse: A Result of Technological Evolutions Chapter 3: Understanding XR: Metaverse Foundation — explores the crucial role of XR technologies in enabling the metaverse. It introduces the different forms extending reality (AR, VR, MR, and more) and their immersive capabilities. Chapter 4: AI Empowering the Metaverse — explores the crucial role of Artificial Intelligence (AI) in enabling intelligent and seamless interactions within the metaverse. Chapter 5: IoT, Cloud, and Next-gen Networks — covers technological advancements in the Internet of Things (IoT), cloud-based solutions, and next-generation networks, which are instrumental to realizing the full potential of the metaverse. Chapter 6: Decentralization and the Role of Blockchain — discusses the need for decentralized architectures to make the metaverse open, interoperable, and accessible. It covers the evolution of Blockchain and its role in shaping the metaverse economy. Part 3 — Metaverse: An Opportunity to Extend the Beliefs Chapter 7: Gaming Redefined: The Metaverse Revolution — discusses how the metaverse would redefine the gaming and entertainment industry. It covers immersive experiences in confined spaces like museums, gaming studios, amusement parks, and the location-independent collaborative internet. It also covers filmmaking and visualization in the metaverse. Chapter 8: Connecting and Engaging in the Metaverse — covers the transformative use cases of the metaverse in connecting people and engaging the world, from easing communication barriers to redefining physical and virtual travel and tourism. Chapter 9: Revolutionizing Fitness and Healthcare — covers the metaverse experiences for fitness and sports and how it would revolutionize healthcare by enabling us to see inside the body more deeply and naturally.
Chapter 10: Exploring the Metaverse Economy — uncovers the metaverse’s potential to redefine traditional notions of commerce, property, and value. It reimagines digital commerce, digital property, and real estate, backed by associated digital value in the metaverse economy. Chapter 11: Skilling and Reskilling in the Enterprise Metaverse — discusses the metaverse’s potential to revolutionize the education industry, enterprise training, onboarding, and the shaping of the enterprise metaverse. Part 4 — Metaverse: The Concerning Part Chapter 12: Identity Preservation and Privacy Protection — discusses safeguarding the metaverse identity and protecting privacy in the metaverse. It addresses the need to mitigate these risks and ensure trust in metaverse environments. Chapter 13: Metaverse and Sustainability — presents an understanding of sustainability, the challenges introduced by the metaverse, and the sustainability considerations for shaping up the metaverse. Part 5 — Shaping the Metaverse: Standards and Practices Chapter 14: Getting Started with Metaverse Development — lays the foundation for metaverse development by exploring the metaverse solutions ecosystem, essential development tools, techniques, and infrastructure. Chapter 15: Metaverse Practices, Standards, and Initiatives — explores the essential practices for creating engaging metaverse experiences. It also sheds light on the emerging standards and collective initiatives shaping the metaverse landscape. Chapter 16: Metaverse: A Way Forward — wraps up the book by giving us all a plan on how to use and build the metaverse responsibly. It tells us that we can’t avoid the natural evolution and changes happening, so it’s better to get ready and adapt to them. The book advocates that ethical adoption is a collective responsibility of academia, industry, governing bodies, and people, and it carries forewords and endorsements from a diverse group of leaders across domains:\n Vishal Shah — General Manager of XR(ARVR) and Metaverse, Lenovo Dr. Rebecca J Parsons — Chief Technology Officer Emerita, Thoughtworks Arjun Ram Meghwal — Minister of State (I/C) for Law and Justice, Minister of State for Parliamentary Affairs and Culture, Government of India Kanchan Ray — Chief Technology Officer, Nagarro Vanya Seth — Head of Technology, Thoughtworks India and Middle East Venkatesh Ponniah — Global Head of Emerging Tech, Thoughtworks Prof.
M Manivannan — Head of Experiential Technologies Innovation Center (XTIC), IIT Madras Ravindra Dastikop — Assistant Professor, Author Metaverse Glossary: Your Gateway to the Future Hemanth Kumar Satyanarayana — Founder Imaginate XR, MIT-TR35 Young Innovator, TEDx Speaker Dinker Charak — Principal Product Manager, Thoughtworks, Author #ProMA, Product Management Untangled Mayan Shay May Raz — AR Lead at Amazon Sumeet Gayathri Moghe — Product Manager, Thoughtworks, Author of The Async-First Playbook Pradeep Khanna — Executive Director VRAR Association CEO Global Mindset, Co-Founder INSquare, Adjunct Professor Raju Kandaswamy — Principal Consultant, Thoughtworks Find the book here ☞ Amazon.com ☞ Amazon.in ☞ BPB Online Worldwide ☞ BPB Online India ☞ Free Preview\n📕 Links for Amazon\u0026rsquo;s other regions here 📕\nMore about my books here\n","id":40,"tag":"xr ; ar ; vr ; metaverse ; book ; available now ; event ; buy ; launch ; meta ; blockchain ; NFT ; IOT ; AI ; lenovo ; thoughtworks ; event ; mr ; Industry4.0","title":"Exploring the Metaverse — A Synopsis","type":"post","url":"/post/exploring-the-metaverse-a-synopsis/"},{"content":"XTIC (Experiential Technology Innovation Centre) is India’s first Research \u0026amp; Product Innovation centre for Virtual Reality, Augmented Reality, Mixed Reality and Haptics at IIT Madras. It has published a white-paper and skill gap analysis for Experiential Technologies in India.\nWhitepaper - XR in India\nIt was great to be a contributor to this research, which is authored by 19 industry leaders from various enterprises, institutes, and government bodies. Excerpts from this research were also published in The Hindu Businessline.\nHave a look at the white-paper here:\nThis browser does not support PDFs. Please download the PDF to view it: Download PDF.\n More details and a high-quality version of the white paper here.\n","id":41,"tag":"xr ; ar ; vr ; metaverse ; IIT ; XTIC ; miety ; blockchain ; NFT ; IOT ; AI ; openxr ; thoughtworks ; opensource ; future ; whitepaper","title":"XR in India — A Whitepaper by XTIC, IIT Madras","type":"post","url":"/post/xr-in-india-a-whitepaper-by-xtic-iit-madras/"},{"content":"My book \u0026lsquo;Exploring the Metaverse\u0026rsquo; is now available in stores globally, including Amazon and BPB Online. It features forewords and testimonials from a diverse group of industry leaders.\nOne such testimonial comes from Vishal Shah, General Manager of XR and Metaverse at Lenovo. I\u0026rsquo;ve had the pleasure of hearing Vishal speak at events like CES and AWE, where he shared invaluable insights on the metaverse and XR technology. Meeting him in person at Lenovo Tech World 2023 was a highlight, where we discussed this book, and he graciously agreed to contribute to its exploration of the metaverse.\n🙏 I extend my heartfelt thanks to Vishal for reviewing early versions of the book, sharing his valuable insights, and writing the following foreword:\n \u0026ldquo;The concept of the metaverse has been around us for decades, an enduring quest to seamlessly integrate artificial reality into our physical existence. Our entire journey of digital transformation and modernization revolves around embracing an alternative reality through trusting digital images and videos, forging our social lives in the realm of digital media.
As we navigate the diverse landscapes of enterprise and consumer metaverse use cases, each with its unique concerns, it\u0026rsquo;s evident that this phenomenon is still evolving, awaiting a comprehensive definition, and striving to reach the stature of the next internet. The critical question is, how are we adapting to this imminent reality, and are we actively contributing to its evolution? I commend Kuldeep for dedicating his efforts to craft the book Exploring the Metaverse. It\u0026rsquo;s truly inspiring to see Kuldeep openly dissect the loosely defined definitions, building blocks, characteristics, and the metaverse\u0026rsquo;s journey towards becoming the next internet. He delves into our own evolutionary trajectory, examining how we\u0026rsquo;ve welcomed technology into our lives. By analyzing the current state of technology and identifying the driving forces behind the metaverse, he paints a vivid picture of our proximity to the next stage of the internet. This book brilliantly explores a number of use cases where the metaverse will revolutionize not only entertainment and gaming but also our approach to health and fitness, interpersonal connections, livelihoods, lifestyle, and education. Kuldeep fearlessly addresses the challenges associated with metaverse adoption, highlighting concerns about identity preservation, privacy protection, sustainability, and ethical considerations. He places a strong emphasis on the development of metaverse standards and practices, advocating for shared and open development—a sentiment closely aligned with Lenovo\u0026rsquo;s commitment to shaping an Open Ecosystem in the ThinkReality XR product lines and the OpenXR ecosystem. Kuldeep calls for collective metaverse development, urging enterprises, educational institutions, governing bodies, and individuals to come together and collaboratively build the future. In this call to action, he emphasizes the crucial role each entity plays in this collective effort, encouraging safe, ethical, and responsible technology adaptation. I extend my congratulations to Kuldeep for producing this exceptional work, and I wish him the best of luck in the future. This book is poised to serve as an indispensable guide to the metaverse.\u0026rdquo; — Vishal Shah General Manager of XR(ARVR) and Metaverse, Lenovo\n 🛍️ Get Your Copy Today! 🇧🇪 Amazon Belgium 🇧🇷 Amazon Brazil 🇨🇦 Amazon Canada 🇫🇷 Amazon France 🇩🇪 Amazon Germany 🇮🇳 Amazon India 🇮🇹 Amazon Italy 🇯🇵 Amazon Japan 🇲🇽 Amazon Mexico 🇳🇱 Amazon Netherlands 🇵🇱 Amazon Poland 🇸🇦 Amazon Saudi Arabia 🇸🇬 Amazon Singapore 🇪🇸 Amazon Spain 🇸🇪 Amazon Sweden 🇹🇷 Amazon Turkey 🇦🇪 Amazon UAE 🇬🇧 Amazon UK 🇺🇸 Amazon US 👉 Rest of the world \n20% Off on BPB Online\n☞ BPB Online Worldwide ☞ BPB Online India ☞ Free Preview\n☞ Exploring the Metaverse - Synopsis\nMore about my books here\n","id":42,"tag":"xr ; ar ; vr ; metaverse ; book ; available now ; event ; buy ; launch ; meta ; blockchain ; NFT ; IOT ; AI ; lenovo ; thoughtworks ; event ; mr ; testimonial","title":"🙏 Exploring the Metaverse | Testimonial 1","type":"post","url":"/post/exploring-the-metaverse-testimonial-1/"},{"content":"The CareerMaker meetup, hosted by the Microsoft Azure Developer Community and managed by Reskill, stands as a flagship event in empowering aspiring professionals. 
Recently, I had the privilege of delivering a special keynote address at Microsoft Gurugram, where I had the opportunity to guide students from various institutes and share insights from my experience in product management.\nIt was an honor to accept this invitation, as engaging with the young, fresh minds of our nation is not only an opportunity to learn from them but also a responsibility to impart valuable knowledge. Special thanks to Rohit Sardana for extending the invitation and Lakshit Pant for coordination.\n It proved to be quite the adventure for me. At one point, I found myself on the verge of not reaching the venue — a tale I\u0026rsquo;ll delve into later in this article. But first, let\u0026rsquo;s explore the key highlights:\nBuilding a Career in the Cloud Industry A big shoutout to the organizers for their flexibility in keeping the stage engaging and seamlessly adjusting the session order. Many thanks to Brijender Ahuja for sharing his valuable insights in the revised order. Brijender, a seasoned cloud expert with 12 years of IT experience, shared the importance of cloud, the skills needed, certifications, and career paths. He emphasized the importance of mastering core software engineering principles like programming, clean coding, data structures, and networking. While exploring AI and emerging tech is vital, Brijender stressed the need to solidify these foundational skills.\n In his words, AI, XR, and emerging tech are like a cherry on a cake, and we need to prepare the cake well to put a cherry on it.\n Building a Career in Product Management Stepping onto the stage set by Brijender, I found myself before a captivated audience. Drawing from my two decades of experience in software development, I embarked on a reflective journey from my roots as a techie to becoming an author, profoundly influenced by books such as the Async-Agile Playbook, Atomic Habits, Deep Work, and others that shaped my path. This journey of a reader led me to author my own book, \u0026ldquo;Exploring the Metaverse\u0026rdquo;.\nBut why am I talking about product management? It\u0026rsquo;s because, much like many others, I found myself an Accidental Product Manager, a revelation catalyzed by my friend, Mr. Dinker Charak. Dinker, the author of product management masterpieces like #ProMa and Product Management Untangled, served as a guiding light in this realization.\nSharing insights from Dinker\u0026rsquo;s works, we explored the essence of product and its management:\n \u0026ldquo;A product is the result of a process or action on a set of asks, possessing an associated value that can be exchanged with other products. Through further processes, additional products can be generated.\u0026rdquo;\n Delving deeper, we contemplated the evolution of products—from material to machines, from digital to virtual, where AI enables them with heightened sensory experiences. It\u0026rsquo;s evident that our world is poised for unprecedented change, challenging us to envision possibilities beyond our current reality.\nAs products evolve, so too does their management. Product management covers orchestrating the ask, the process, the product itself, and its value. Over time, software product development has witnessed a paradigm shift through Agile methodologies and the adoption of Product Thinking.\nToday, the need for effective product management transcends organizational boundaries, extending its influence across service and product sectors alike.
We discussed the diverse skill sets required for proficient product development and explored potential career paths. Whether one holds a formal title in product management or not, the principles of product management can be applied to enhance one's role.\nAttached are the slides I presented:\nThis browser does not support PDFs. Please download the PDF to view it: Download PDF.\n The Surprise As the anticipation built throughout the session, I had a surprise in store for everyone. A heartfelt thank you to the publisher of my book, who generously offers a 20% discount exclusively for CareerMaker attendees!\nContinuing the momentum, Lakshit delivered an inspiring and motivating talk, shedding light on the upcoming events in the series. He emphasized that it\u0026rsquo;s up to each individual to carve their own path and become the driver of their destiny. In a world filled with both opportunities and challenges, he encouraged everyone to navigate wisely and find their desired destination.\nA big thank you to the participants for making it a wonderful session, and cheers to all! And some nice goodies too!\nThe Unexpected Adventure Wait, now the adventure part!\nMy morning began smoothly, cruising to Microsoft Gurugram with FM radio as my companion. Despite Gurgaon-Delhi\u0026rsquo;s usual traffic, I was ahead of schedule. Then, a missed underpass turn (don\u0026rsquo;t we all have those?) threw a wrench in the plan. No worries, I thought, there was still time.\nSuddenly, a strange sound came from the engine. The car sputtered and lurched to a stop in the middle of a flyover. Stuck in rush hour traffic, calling a tow truck meant a long wait in the Delhi heat. With the event looming, I informed Lakshit to please manage the show and move my session later, as my chances of making it in time looked slim. Thankfully, the Delhi Police came to my rescue, assuring me they\u0026rsquo;d deliver my car to a mechanic.\n The mechanic\u0026rsquo;s verdict? Major engine damage. With a heavy heart, I locked the car and hailed a cab to the venue. This experience got me thinking: getting a taxi is so convenient, why do we even own cars?\nArriving at Microsoft, I was ushered into a session for a much older audience than I expected. Confused, I called Lakshit, who was still waiting for me. It turns out I\u0026rsquo;d stumbled into the wrong room! Finally, I joined the hall where Brijender was taking the session for over 200 eager students. I then took my session, and despite the car drama, the talk went well; we discussed various things and addressed queries from the students. I kept skipping calls from the mechanic during the session, who wanted to inform me that I had left a window open (oops!).\nSoon after the session, I rushed to the car, had it towed to the service center, and found that the car, sadly, was a goner – the repairs would cost more than it was worth.\nIt was a stark reminder of life\u0026rsquo;s impermanence: just as a car\u0026rsquo;s value can decline, so can ours. But like any good product, we can add value by sharing, caring, and taking care of ourselves and others. Keep doing your part!\n","id":43,"tag":"microsoft ; thoughtworks ; event ; azure ; speaker ; product development ; talk ; product management ; community ; learnings ; reskill ; career ; students ; speaker ; lecture ; meetup","title":"An Adventure to Remember: Making it to CareerMaker Meetup","type":"event","url":"/event/careermaker-meetup-azure-developer-community/"},{"content":"The era of isolated development and secret-keeping innovation is fading.
In the past, companies and individuals could thrive by becoming subject matter experts within their own walls. However, the rise of generative AI, fueled by shared knowledge, has changed the game. AI can now imagine and generate solutions faster than ever. The future belongs to those who embrace openness, collaboration, and shared learning.\n Expertise alone isn’t enough. Today, success hinges on building strong networks, fostering partnerships, and collaborating within communities.\n 📕 In my book, Exploring the Metaverse: Redefining Reality in the Digital Age, available on Amazon and BPB Online, I advocate for this collaborative approach. The book delves into hundreds of potential metaverse use cases, along with the challenges that require collective solutions.\n🤝 XR collaboration in action! In the spirit of collaboration, I shared the potential of AI in addressing adoption challenges for XR content, drawing from my conversation with the Convrse.AI team at the Thoughtworks Gurugram Center. Recently, I had the privilege of collaborating with the team from Digital Jalebi — Experiential Design Studio, who visited our center and provided valuable insights on XR use cases, success stories, and key considerations. Many thanks to the DJ team for their enriching visit.\n🤔 XR training: Where imagination meets reality XR devices are reaching new heights of realism, with natural hand and eye tracking enhancing the immersive experience. This opens doors for transformative training and onboarding, allowing learners to experience scenarios impossible to recreate in the real world.\nImagine mastering fire safety procedures in a simulated burning building, or perfecting CPR techniques on a virtual patient. The team showcased how XR can recreate these situations with remarkable accuracy, fostering deeper understanding and muscle memory compared to traditional methods.\n🤷‍♂️ Beyond talking points: XR makes car features come alive Similar to the limitations of traditional fire safety training, salespeople often rely on verbal descriptions for complex automotive features like ADAS (Advanced Driver-Assistance Systems) and airbags. However, XR allows customers to experience these features firsthand in a safe, simulated environment. Check it out here\n🥽 VisionPro: Redefining XR design and accelerating adoption Our recent discussions explored how VisionPro will revolutionize XR design paradigms, significantly accelerating adoption rates. We delved into the exciting potential of XR converging with technologies like AR, 3D printing, IoT, Wearables, and autonomous vehicles.\nTo unlock this potential, fostering an open ecosystem for XR engineering practices is crucial. Similar to Arium, the open-source automation testing framework for XR applications, we need collaborative efforts to establish open standards and tools.\n⛑️ Drive to a safe and meaningful future! Ethical considerations shouldn’t be an afterthought. 📖 In my book, I delve into the crucial role of ethical tech adoption, a topic often neglected during the initial development phase.\nOnce again, thank you, Digital Jalebi team, for coming forward, sharing your learnings and the interesting XR use cases you have built, and discussing our possible synergies for the future!
Let’s continue putting in our efforts toward safe and meaningful technology adoption.\nMore about my books here. This article is republished from XR Publication\n","id":44,"tag":"xr ; ar ; vr ; metaverse ; book ; collaboration ; event ; partnership ; usecases ; takeaways ; learnings ; sharing ; NFT ; IOT ; AI ; GenAI ; Thoughtworks ; optimization","title":"🤝 Why collaboration is essential for building a successful XR future?","type":"post","url":"/post/why-collaboration-is-essential-for-building-a-successful-xr-future/"},{"content":"Is the metaverse hype or reality? My new book explores what lies ahead The concept of the Metaverse keeps evolving, leaving many confused. 📖 In my book, Exploring the Metaverse: Redefining Reality in the Digital Age, I leverage my years of experience to:\n Unravel the Metaverse: Demystify the term and explore its true potential. Separate Myth from Reality: Address misconceptions surrounding the metaverse. Prepare for the Future: Equip you with insights to navigate the evolving digital landscape. Are we rushing towards the technology hype without fully understanding the present potential? The book delves into this crucial question, exploring the role of AI and its impact on our journey to a truly immersive future.\nThe book also prompts thought-provoking questions:\n Will we transcend our screens as computing power transforms everyday objects into interactive experiences?\n Will high-speed networks lead us beyond video calls and into a fully immersive internet experience?\n The Metaverse is on the horizon, bringing both exciting possibilities and challenges. My book helps you understand what to expect and prepare for this new world.\n🛍️ Get Your Copy Today! ☞ Amazon.com ☞ Amazon.in ☞ eBook ☞ BPB Online ☞ Free Preview\nFurther Teasers ☞ Exploring the Metaverse - Synopsis ☞ Testimonial 1\n Please do share your feedback and book reviews.\nMore about my books here\n","id":45,"tag":"xr ; ar ; vr ; metaverse ; book ; available now ; event ; buy ; launch ; meta ; blockchain ; NFT ; IOT ; AI","title":"📕 Exploring the Metaverse | Available Now","type":"post","url":"/post/exploring-the-metaverse-available-now/"},{"content":" 📕 Exploring the Metaverse Redefining Reality in the Digital Age \u0026ldquo;Exploring the Metaverse\u0026rdquo; is a comprehensive guide to the metaverse, the future state of the internet. Prepare yourself for what\u0026rsquo;s ahead by understanding the opportunities and challenges of the metaverse. This book calls us to action, emphasizing that its evolution is in our hands. You can find a detailed synopsis of the book here.\nAvailable globally on Amazon and BPB Online as both paperback and eBook. Access a free preview of the book here.\nGet your copy: Click here to know discounts and check availability\n 📕 My Thoughtworkings The guiding thoughts that work for me ☞ Book launch - Five thoughtful years ☞ Download\nIn celebration of Thoughtworks\u0026rsquo; 30th anniversary, I have launched this book featuring guiding thoughts from a diverse set of 30 thoughtworkers. These thoughts have resonated with me and shaped my journey within the organization and in life.\nThis ebook was launched on my fifth anniversary at Thoughtworks.
I believe this compilation of thoughts will not only serve as a personal reflection but also as a valuable resource for others on their own professional journeys.\nThis collection reflects my personal observations, presented in no particular order; find more about the book here.\n☞ Book launch - Five thoughtful years ☞ Download\n 📕 Jagjivan Living Larger than Life ☞ जगजीवन - जीवन से बढ़कर जीना ☞ Amazon - Coming in 2024. ☞ Flipkart - Coming in 2024.\nThe book \u0026ldquo;Jagjivan: Living Larger than Life\u0026rdquo; offers life lessons drawn from the life of Shri Jagguram, who lived larger than life. He has been a role model for me and many others. This book is a collection of stories from his life, which I have heard from him and others; it sets out guiding principles for living a happy and fulfilling life.\n☞ जगजीवन - जीवन से बढ़कर जीना ☞ Amazon - Coming in 2024. ☞ Flipkart - Coming in 2024.\n 📕 Product Management Untangled Art and Science of Product Management ☞ Amazon ☞ Flipkart ☞ notionpress.com ☞ more books from dinker ☞ My takeaways\nThe book Product Management Untangled, authored by Dinker Charak, offers a thorough exploration of product development, diving deep into the essence of effective product management. This book is invaluable for both seasoned product professionals and those who find themselves accidentally stepping into the role of Product Manager, much like myself. Dinker has popularized the term \u0026ldquo;ProMa\u0026rdquo; in his other book #ProMa to distinguish it from project management.\nHaving had the opportunity to review this book, I am pleased to provide my endorsement, which is also included within its pages. Product Management Untangled is essential reading for anyone seeking a true understanding of product management.\n☞ Amazon ☞ Flipkart ☞ notionpress.com ☞ more books from dinker ☞ My takeaways\n 📕 Metaverse Glossary Your gateway to the future ☞ Amazon ☞ Flipkart ☞ EvincePub Publishing ☞ Book Launch\nMetaverse is a very vast topic. I appreciate author Mr. Ravindra Dastikop for taking a step forward, collecting 300+ terms, and coming up with the book “Metaverse Glossary”; it covers a variety of terms from history, business, user experience design, development, tools, and techniques. People can easily traverse through it and speak the language of the metaverse. I have written the foreword for the same. ☞ Amazon ☞ Flipkart ☞ EvincePub Publishing ☞ Book Launch\n 📕 The Async-First Playbook Remote Collaboration Techniques for Agile Software Teams ☞ Amazon ☞ Barnes \u0026amp; Noble ☞ O\u0026rsquo;Reilly ☞ asyncagile.org\nIn \u0026ldquo;The Async-First Playbook\u0026rdquo;, Sumeet Gayathri Moghe presents a comprehensive guide to integrating remote-native, asynchronous practices into conventional agile methodologies. This book is a must-read for those currently engaged in remote work or considering it, especially in the realm of software development. The techniques discussed not only enhance remote work efficiency but also offer valuable insights for synchronous collaboration. These practices, poised to shape the next Agile Manifesto, have the potential to revolutionize how we approach work in a world increasingly defined by flexibility and remote capabilities. I am honored to have contributed the foreword to this transformative book.
☞ Amazon ☞ Barnes \u0026amp; Noble ☞ O\u0026rsquo;Reilly ☞ asyncagile.org\n","id":46,"tag":"me ; profile ; books ; about ; metaverse ; technology ; thoughtworks ; learnings ; thoughtworkings ; asyc-agile ; async-first ; agile ; life ; selfhelp ; motivational ; takeaways ; product management ; proma","title":"Books","type":"about","url":"/about/books/"},{"content":"eXtended Reality (XR/AR/VR) technology is changing the reality of almost every industry today by enhancing user experiences. XR adoption in the retail industry is also quite natural; let\u0026rsquo;s understand how it is shaping the future of retail.\nIn today’s retail landscape, technology is reshaping customer experiences, with XR leading the charge. Extended Reality (XR), including VR, AR, and MR, offers immersive experiences blending the physical and virtual worlds. From head-mounted displays to smart glasses, smartphones, and beyond, XR is changing how we interact with retail environments.\nNow that Apple VisionPro must have already ignited the XR or spatial computing discussion around you, let’s explore the various use cases of XR in retail and how it is transforming the way we shop.\nMulti-Dimensional Marketing and Advertising XR can be used to deliver the brand’s values to consumers in engaging experiences, reinforcing consumer expectations, and making them feel more connected. Here are a few examples of XR-enabled marketing:\n XR In-App Advertisement: Imagine scrolling through your favorite XR app or playing a virtual game and encountering targeted advertisements seamlessly integrated into the virtual environment. XR in-app ads offer a new level of engagement, capturing the attention of users immersed in virtual worlds. Holographic Displays and Smart Mirrors: Retailers can leverage holographic displays and smart mirrors powered by XR to create captivating advertisements and interactive experiences. These displays can respond to user gestures and proximity, providing personalized recommendations and information about products. Example below 3D Signage and Interactive Billboards: Traditional signage is given a futuristic upgrade with XR, as retailers can showcase products in 3D and offer interactive displays that engage customers on a deeper level. This immersive approach to marketing enhances brand visibility and product appeal. AR/VR Hotspots: XR hotspots at locations within stores or across the city allow customers to interact with virtual elements overlaid on the physical location. By scanning codes, tapping on XR-enabled displays, or simply through world anchors, shoppers can access additional product information, promotions, and social interactions. AR Packaging: Product packaging can be a great marketing tool, and AR can bring packaging to life by overlaying XR content on it. \nExtended Online Shopping During Covid times, we anticipated XR making its way into shopping experiences, and now it is happening in multiple forms:\n Virtual Try-On / Try Before Buying — One of the most compelling use cases of XR in retail is virtual try-on. Customers can use XR applications to try on clothing, makeup, accessories, and even furniture in a virtual setting. This “try before you buy” experience enables customers to make informed decisions about fit, style, and color without physically trying on items. \n Product Insights: XR provides an immersive way for customers to explore products in detail. They can virtually dissect products, view internal components, and understand how they work.
This level of product insight enhances customer understanding and confidence in their purchase decisions.\nProduct customization and visualization at Thoughtworks\n Virtual Communal Shopping: Shopping becomes a social experience in XR with virtual communal shopping. Users can invite friends to join virtual shopping sessions, browse products together, and share feedback and recommendations in real-time. This collaborative approach enhances the social aspect of shopping.\n Shopping in Games: For gaming enthusiasts, XR offers integrated shopping experiences within virtual worlds. Players can shop for virtual items and accessories to enhance their gaming experience, creating new revenue streams for retailers within the gaming ecosystem.\n Web3 and Metaverse Shopping: With the advent of Web3 gaming and the metaverse environment, along with the rise of NFTs and decentralized gaming, shopping experiences are becoming integrated into these platforms.\n\n Extended In-Store Experience Indoor Navigation: Navigating large retail spaces can be daunting, especially for new visitors. XR-based indoor navigation provides interactive maps and directions within stores, guiding customers to specific products or departments. Language barriers are overcome with instructions displayed in the user’s preferred language.\nIndoor navigation at one of the Thoughtworks offices\n Offers and Promotions: Retailers can deliver personalized offers and promotions through XR experiences. Customers receive notifications and alerts on their XR devices, allowing them to explore exclusive deals and discounts as they move through the store.\n Virtual Try-On: In-store XR displays and smart mirrors enable customers to virtually try on products without physically handling them. From clothing and accessories to makeup and eyewear, shoppers can see themselves in different styles, colors, and sizes, enhancing their shopping experience.\n \n Social Collaboration: XR facilitates social interactions and collaboration among shoppers. Customers can leave comments, reviews, and recommendations directly in the XR environment, creating a community-driven shopping experience. Friends can virtually try on items together, share opinions, and make joint purchasing decisions. A concept by the XR community at Thoughtworks for collaboration during live shopping and virtual try-on\n Smart Assistant: Imagine having a virtual shopping assistant at your fingertips while browsing in-store. XR devices can serve as smart assistants, providing personalized product recommendations, answering questions, and offering detailed information about products. This hands-free assistance enhances customer engagement and satisfaction. Product Storytelling: With XR, retailers can narrate the story behind their products in a compelling and immersive way. Customers can learn about the history, craftsmanship, and unique features of products through interactive XR experiences. This storytelling approach adds depth and authenticity to the shopping journey. Thoughtworks’ Tellme More. Blurring Physical and Virtual Experience Virtual Store Experience: For those unable to visit physical stores, XR offers a virtual shopping experience that replicates the ambiance and layout of brick-and-mortar locations. Customers can browse aisles, explore products, and make purchases within a virtual store environment.
This immersive experience brings the store to the customer’s doorstep.\n Zero Store: The zero store concept redefines retail by offering customers a physical space without any physical products; they can explore products virtually and make purchases without needing to own XR devices. With a diverse catalog of items and the ability to host multiple product categories in the same space, the Zero Store provides a seamless blend of entertainment and shopping.\n Journey from Home: Planning a shopping trip becomes effortless with XR. Customers can create virtual shopping lists at home, complete with preferred products and promotions. When ready to shop, the XR app guides them through optimal paths in the city and stores, highlighting offers along the way. This curated shopping journey maximizes convenience and efficiency.\n Shared Experience: Imagine going to a physical store and inviting friends to join the location virtually. A shared physical-virtual experience would be interesting for collaborative shopping. Gucci has introduced live shopping to provide the store experience remotely.\nLive shopping\n Operations Warehouse Management: XR technologies revolutionize warehouse operations by providing inventory tracking and management. Using computer vision, warehouse staff can locate items, optimize storage, and fulfill orders efficiently. XR-based indoor navigation guides workers through the warehouse, reducing errors and improving productivity. Training and Onboarding: XR is a powerful tool for training retail staff and onboarding new employees. From store operations to customer service scenarios, XR simulations provide hands-on training in a safe and controlled environment. Employees can practice tasks, learn protocols, and familiarize themselves with products, enhancing their skills and confidence. Compliance and Audit: Ensuring operational compliance and efficiency is essential for retailers. XR solutions streamline workflow processes, guide staff through standard operating procedures, and provide on-demand assistance. Compliance checks and audits become seamless with XR, monitoring effectiveness and adherence to regulations. Dynamic Pricing: XR could help devise dynamic pricing and offers based on footfall and navigation patterns. Managing these with an XR-enabled solution is much easier than with physical tags. After-Sale Services Installation and Assembly: XR simplifies the installation and assembly process for customers. Whether assembling furniture, electronics, or appliances, XR provides step-by-step guides and interactive instructions. Customers can follow along in augmented reality, reducing errors and frustration. Repair and Service: When products require repair or servicing, XR tools offer valuable support. Customers can troubleshoot issues using XR-guided instructions, minimizing downtime and the need for expert assistance. Service technicians can remotely assess problems and provide visual guidance, improving efficiency and customer satisfaction. For an example, refer to Dell\u0026rsquo;s AR Assistant App here. Product Rating and Feedback: Gathering feedback and product ratings becomes more engaging with XR. Customers can leave spatial reviews and ratings in 3D environments, providing detailed feedback on their experiences. This immersive approach to feedback enhances the authenticity and usefulness of customer reviews. Alerts and Warnings: XR solutions can proactively alert customers about product usage and safety. Whether it’s proper handling instructions or maintenance reminders, XR-enabled alerts ensure customers use products safely and efficiently. This feature adds an extra layer of customer care and product longevity.
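Many of the experiences above, from virtual try-on to placing a product in a customer’s living room, rest on the same primitive: detecting a real surface and anchoring virtual content to it. As a flavor of how that works on the open web, here is a minimal sketch using the WebXR hit-test API; it assumes a WebXR-capable browser with the @types/webxr type definitions, and placeProductAt is a hypothetical hook into your own renderer:

async function startProductPlacement(placeProductAt: (pose: XRRigidTransform) => void) {
  // Request an AR session that can cast rays against real-world geometry.
  const session = await navigator.xr!.requestSession("immersive-ar", {
    requiredFeatures: ["hit-test"],
  });
  const refSpace = await session.requestReferenceSpace("local");
  const viewerSpace = await session.requestReferenceSpace("viewer");
  // Continuously hit-test along the viewer's gaze.
  const hitTestSource = (await session.requestHitTestSource!({ space: viewerSpace }))!;

  session.requestAnimationFrame(function onFrame(_time: number, frame: XRFrame) {
    const hits = frame.getHitTestResults(hitTestSource);
    if (hits.length > 0) {
      const pose = hits[0].getPose(refSpace);
      // Anchor the product model (a shoe, a sofa, a signage panel) at the surface.
      if (pose) placeProductAt(pose.transform);
    }
    session.requestAnimationFrame(onFrame);
  });
}

The same session-plus-hit-test loop, with different content behind placeProductAt, underpins the try-on mirrors, indoor navigation overlays, and in-store hotspots described earlier.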
Conclusion The integration of Extended Reality (XR) in retail is transforming the industry by providing immersive and engaging experiences for customers. From multi-dimensional marketing and virtual try-on to smart assistants and virtual store experiences, XR offers a multitude of benefits for retailers and shoppers alike.\nAs retailers embrace XR technologies, they enhance customer engagement, streamline operations, reduce returns, and create personalized shopping journeys. Whether in-store or online, XR blurs the lines between physical and virtual worlds, opening up new possibilities for the future of retail. By leveraging XR use cases in retail, businesses can stay ahead of the curve, offering innovative and interactive experiences that resonate with modern consumers. The future of retail with XR is bright, promising a more immersive, efficient, and customer-centric shopping experience for all.\nThis article is originally published on XR Practices Publication\n","id":47,"tag":"xr ; ar ; vr ; mr ; usecases ; thoughtworks ; Enterprise ; technology ; retail ; advertisement ; marketing ; metaverse","title":"The Future of Retail with Extended Reality (XR)","type":"post","url":"/post/the-future-of-retail-with-extended-reality-xr/"},{"content":"Product management is a unique blend of art and science, distinct from roles such as project management, program management, business analysis, or business development. Some projects or products may have dedicated product managers, but I believe these skills are valuable for various roles, even if not explicitly labeled as a product manager. Even technologists can benefit from these skills to enhance the value of their work.\nThis is where Dinker Charak\u0026rsquo;s book \u0026lsquo;Product Management Untangled\u0026rsquo; becomes essential. Dinker is a guiding force for product managers and an influential figure aiming to improve product development practices. In a world where \u0026lsquo;PM\u0026rsquo; is often associated with Project Management, Dinker has popularized the term \u0026rsquo;#ProMa\u0026rsquo; for Product Management.\nI had the privilege of getting early access to the book and writing a testimonial for it. 🙏 Thanks Dinker for including it as the very first page of the book.\n📖 Product Management Untangled: The Art and Science of Product Management It is a comprehensive guide to product management, providing a detailed overview that includes the history of the discipline, the various perspectives on the role of a manager, and the different types of product managers. The book also delves into the responsibilities inherent in being a product manager and offers insights on advancing one\u0026rsquo;s career in this field. It also guides organizations on how to hire for and shape up such roles.\nWhether you are looking to develop your skills as a product manager or to apply them within your role, this book provides valuable information. It covers detailed facts and figures from industry standards, making it an essential resource for anyone involved in product management.\nHere is the Table of Contents of the book.\n and my Early Praise for the book.
🛍️ Get your copy Amazon : https://www.amazon.in/dp/B0CYH51MLR Notion Press : https://notionpress.com/read/product-management-untangled Be quick and get 11% off via coupon THX2BFFS, while it lasts. Follow Dinker for more updates.\n✍️ About Dinker Dinker Charak is a prolific writer, speaker, and thought leader in the product management space. As a Principal Product Manager at Thoughtworks, he continues to drive the development of products that matter the most.\nDinker has authored various books including #ProMa and fiction works like The Neutrinos are Coming, The Evolutionist and more, and has sold over 5,000 copies of his books. Follow Dinker for updates on his insightful work :\n EEBO - Upcoming book on EEBO Metrics, which is a framework of performance indicators across the engineering lifecycle, right from software development to deployment in production to business outcomes. Product Life Cycle Untangled - his next interesting work, on how a product takes shape and evolves. Dinker\u0026rsquo;s Blog - a host of tools, techniques, and more\u0026hellip; LUSWIG - Hear him regularly on YouTube with other leaders like Sachin Dharmapurikar. 📚 My Books Dinker has also written a foreword for my upcoming book\nExploring the Metaverse: Redefining Reality in the Digital Age.\nMore about my books and the books I endorse are here.\n","id":48,"tag":"product management ; proma ; book ; coming soon ; available now ; book launch ; takeaways","title":"📕 Product Management Untangled | Available Now","type":"post","url":"/post/book-product-management-untangled/"},{"content":"Generative AI is revolutionizing content generation across various mediums, disrupting traditional methods in text, images, art, movies, and music creation. XR (Extended Reality) is also undergoing this transformation. In a previous article, Raju Kandaswamy and I discussed how AI aids in generating content for XR, meeting the dynamic needs of XR and the metaverse.\nWe highlighted content management as a key challenge in XR and metaverse adoption. While generative AI methods offer some solutions, the significant hurdle of content optimization remains. Without optimization, it\u0026rsquo;s difficult to utilize the generated content effectively. Traditional CAD or 3D design tools produce sophisticated content, but transforming this raw generated content or complex 3D models into usable forms for various device use cases requires specialization and expertise.\nThat\u0026rsquo;s where products like Convrse.AI come into the picture. I had the pleasure of meeting with Mr. Virant Singh, Founder and CEO of Convrse.AI, at Thoughtworks Gurugram. He showcased Convrse.AI\u0026rsquo;s impressive capability of optimizing 3D content with ease.\nI am amazed to see how smoothly Convrse.AI can identify areas for optimization. It pinpoints problem areas with weight distributions, provides AI-assisted suggestions for optimizations, and highlights unnecessary areas hidden behind surfaces or meshes that can be removed without visually impacting the scene. Additionally, it allows users to find similar-looking areas, apply rules to multiple areas simultaneously, and much more. All of this is achievable without the need for any additional tools — simply use it in your favorite browser!\nI feel tools like Convrse.AI address a significant challenge in XR adoption by bridging the skill gap in 3D design expertise. Many companies and entities may not have the resources to afford or recognize the need for in-house 3D designers.
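Convrse.AI’s own pipeline is proprietary, but the underlying idea of programmatic polygon reduction can be sketched with off-the-shelf tools. Below is a minimal, illustrative example using three.js’s SimplifyModifier; this is a stand-in of my own choosing, not how Convrse.AI works internally, model.glb is a hypothetical asset path, and import paths vary across three.js versions:

import * as THREE from "three";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";
import { SimplifyModifier } from "three/examples/jsm/modifiers/SimplifyModifier.js";

// Load a 3D asset and reduce its polygon count for lighter XR delivery.
const loader = new GLTFLoader();
loader.load("model.glb", (gltf) => {
  const modifier = new SimplifyModifier();
  gltf.scene.traverse((node) => {
    const mesh = node as THREE.Mesh;
    if (mesh.isMesh) {
      // SimplifyModifier takes the number of vertices to remove;
      // dropping roughly half is a ratio you would tune per device budget.
      const removeCount = Math.floor(mesh.geometry.attributes.position.count * 0.5);
      mesh.geometry = modifier.modify(mesh.geometry, removeCount);
    }
  });
  // gltf.scene is now lighter and ready to export or hand to the renderer.
});

A pass like this is deliberately crude; production tools add smarter heuristics, such as removing only the geometry that is never visible behind surfaces or meshes.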
While such companies excel at developing great software, their UX designers may still lack 3D art skills. Convrse.AI aims to ease this gap, allowing even novice developers or those in traditional roles to catch up easily with these types of optimizations.\nThis tool can also be seamlessly integrated through public or private hosted APIs into the XR CI pipeline. With support for texturing, baking, and AI-assisted enhancements, Convrse.AI has the potential to become a great success. All the best to the Convrse.AI team! 👍\nOnce again, 🙏 thank you Virant, for the insightful discussion on potential ideas and synergies with Thoughtworks. We delved into how Thoughtworks is guiding our clients in developing software in the most optimal ways possible.\nAs we finished our discussion, my son also visited the Thoughtworks office. He had the opportunity to experience XR and 3D printing, and he asked me if this kind of 3D optimization could speed up the 3D printing process compared to his previous attempts.\nAnd the answer is, ‘Why not!’ A less complex model would require less time for conversion, resizing, slicing, and overall printing.\nSo, the possibilities are truly endless. Keep optimizing, keep utilizing!\nThis article is also published at the XR Practices publication\n","id":49,"tag":"xr ; ar ; vr ; mr ; metaverse ; ai ; GenAI ; content management ; 3d ; optimization ; technology ; thoughtworks ; insights ; xr content ; collaboration ; convrse.ai","title":"🤝 Making the XR content usable with AI!","type":"post","url":"/post/making-the-xr-content-usable-with-ai/"},{"content":"In the world of tech buzzwords, “metaverse” has been a topic of discussion, both understood and misunderstood. It comes, goes, and comes again in new shapes and forms, definitions and understandings. The term has been around for three decades and is poised to continue its evolution.\nIn the past, it has been defined to suit the narratives of organizations and individuals alike. Some see it as a game, others as a virtual world, while some view it as a tool for global connection and collaboration. Apple’s VisionPro recently reignited the debate, shining a spotlight on another term: Spatial Computing. I delved deep into this concept and explored various perspectives in my upcoming book with BPB\n📖 Exploring the Metaverse: Redefining Reality in the Digital Age This book aims to dispel the myths and present the reality of the metaverse. It covers hundreds of use cases of metaverse technologies across consumer and enterprise spaces, ranging from gaming and entertainment to healthcare, fitness, education, and industrial applications.\nHowever, nothing in this world is without its costs. The metaverse comes with its concerns and challenges, and several chapters are dedicated to highlighting these issues and their implications for humanity and sustainability. There are also chapters dedicated to metaverse practices, standards, and initiatives aimed at evolving the metaverse ethically. Ultimately, it’s up to all of us to navigate these waters together.\n The metaverse is on its way, whether we are ready or not. Advancements in AI, GenAI, and other technologies are bringing us even closer to its larger goal: the Next Internet. It’s time to stay informed and get ready to embark on an exploration of the metaverse.
🚀\n Stay tuned for more updates on “Exploring The Metaverse.”\nLinkedIn :\n More about my books here\n☞ Exploring the Metaverse - Available Now ☞ Exploring the Metaverse - Synopsis\nThis article is originally published at XR Practices\n","id":50,"tag":"xr ; ar ; vr ; metaverse ; book ; coming soon ; event ; technology ; facebook ; meta ; blockchain ; NFT ; IOT","title":"📕 Coming Soon! Exploring the Metaverse","type":"post","url":"/post/exploring-the-metaverse-comming-soon/"},{"content":"🙋‍️As International Women\u0026rsquo;s Day is celebrated worldwide, efforts are underway to recognize women\u0026rsquo;s representation across diverse fields. In India, the spirit of \u0026lsquo;Nari Shakti\u0026rsquo; is alive as discussions focus on various schemes and initiatives. There is increasing advocacy for equal opportunities in underrepresented sectors. The promising shift towards inclusivity is shaping a future grounded in equity and equality.\n👩‍💻 Thoughtworks is synonymous with diversity and inclusion, a leading brand in this space. This ethos is seen on the ground, starting from my social interview round, and it has stayed with me since. I found myself more at ease, my senses heightened. Coming from a family where women traditionally held roles as homemakers, my wife broke that mold as the first working woman. Now, my sisters, cousins, and sisters-in-law are doctors, engineers, teachers, bankers and founders. Inclusivity was always a part of me, and Thoughtworks allowed me to truly \u0026lsquo;Be Me\u0026rsquo;. ‘Being me’ at Thoughtworks is quite embedded, as echoed by Deepali, Mohd Quazi and others. It inspired me to write a book about My Thoughtworkings, sharing guiding thoughts inspired by the ethos of Thoughtworks.\n👩‍🏫 The sentiment of \u0026lsquo;Being Me\u0026rsquo; was strongly echoed by our CMO, Julie Woods-Moss, during her recent visit to India, where she shared her professional journey with us. Having worked with her on various emerging technology initiatives, I\u0026rsquo;ve always been amazed by her blend of technical prowess, business acumen, and deep understanding of humanity in everything she does. Though this was our first face-to-face meeting, and only for a very short span of time, it left a powerful impact on me.\n🤽‍️ This is the essence of \u0026lsquo;Nari Shakti\u0026rsquo; or \u0026lsquo;Women Power\u0026rsquo;—moments like these. This conversation inspired me to write this article. Through Julie\u0026rsquo;s words, I gained a new understanding of \u0026lsquo;being inclusive, being me’.\n👩‍🍼 She shared that she learns from her kids that \u0026ldquo;being me\u0026rdquo; is not just about individuality; it\u0026rsquo;s about understanding others\u0026rsquo; \u0026ldquo;being me\u0026rdquo; as well. We truly become ourselves and feel accepted as \u0026ldquo;me\u0026rdquo; when we start accepting others for who they are, embracing their own unique \u0026ldquo;being me\u0026rdquo;. In the early days, she used to worry when people said she was too empathetic. Is being empathetic a bad thing? No, not at all. She later learned that being empathetic is perfectly fine, but expecting others to have the same level of empathy is not always realistic. We might strive for perfectionism, but expecting everyone else to meet those same standards is not fair.
It\u0026rsquo;s okay to want to support others and help them grow, but expecting them to follow our advice or behave exactly like us is not the right approach.\n👩‍💼 Consciously or unconsciously, we often find ourselves teaching and advising our kids and the world around us on what to do and how to be, without truly understanding their unique \u0026ldquo;being me\u0026rdquo;. I might be someone who loves waking up at 5 am (which I\u0026rsquo;m not 🙂), enjoys going to the gym, and values being disciplined and punctual - and that\u0026rsquo;s perfectly fine for me. However, expecting everyone else around me to follow the same routines, thinking it will benefit them as it does me, is not realistic.\n⛹️ In reality, it doesn\u0026rsquo;t work that way, and this mismatch in expectations can lead to concerns and conflicts, regardless of whether others around us comply or not. It\u0026rsquo;s natural to want to guide others based on our experiences, but it\u0026rsquo;s equally important to let them decide and follow what works best for them.\n🧘‍ ️I believe this thought will stay with me and become My Thoughtworkings. It doesn\u0026rsquo;t mean we shouldn\u0026rsquo;t coach or mentor others; instead, this realization will help us become better individuals. There\u0026rsquo;s such a sense of peace in understanding this concept, isn\u0026rsquo;t there? I wish I had grasped it earlier, but as they say, better late than never.\n🙏 In conclusion, I want to extend my heartfelt thanks to Julie for sharing such a powerful message on International Women\u0026rsquo;s Day. Her words have resonated deeply, and I wish power to her and all women, the \u0026lsquo;Nari Shakti\u0026rsquo;. Together, may we continue to empower and uplift each other. Thanks for such a warm meeting.\n","id":51,"tag":"inclusive ; reflection ; diversity ; D\u0026I ; organization ; takeaways ; event ; change ; motivational ; thoughtworks","title":"Being inclusive, being me","type":"post","url":"/post/being-inclusive-being-me/"},{"content":"The way we interact with each other, and the world around us, is changing fast. And with consumer expectations continuing to grow, enterprises are racing to find new ways to deliver immersive experiences. I am republishing this article from Thoughtworks Insights here.\n As we discuss in the evolving interactions lens of our latest Looking Glass report, extended reality (XR) technologies offer new opportunities to engage and entertain. But what exactly is XR? And how do you separate science fiction from business fact—to develop a robust XR strategy that serves your enterprise and your customers?\nRise of the machines XR spans augmented, virtual, and mixed reality—and everything in between. While it’s been around for decades, the rise of AI is ushering in a new era for these transformative technologies. From its roots in the gaming and entertainment sectors, XR is emerging as a key tool for marketers, sales teams, and product designers across industries.\nThink about the production lifecycle of a new car. With XR, teams can create a virtual simulation of a new model—reducing costly design iterations and accelerating time to market. Or let’s say you’re in the manufacturing business. Instead of inviting prospects to your factory floor, you can transport them into a virtual environment to test and view your solutions in situ. 
Whether you’re demonstrating intangible tech or selling high-end couture, the ability for customers to ‘try before they buy’ is a powerful way to seal the deal.\nDesigning for digital natives The opportunities for XR, though, go far beyond product development and sales. By next year, millennials and Gen Z will make up around 75% of the workforce—and companies will need to adapt their processes to suit these digital natives. Using XR in training and onboarding is one way in which organizations are already improving outputs and reducing costs.\nIt can also be particularly valuable in hard-to-recreate situations like fire evacuation drills or demonstrating a vehicle\u0026rsquo;s safety features. Doctors can complete elements of their training in a virtual environment. Experts can be on the ground, providing assistance in a disaster zone in minutes. Astronauts can ‘walk’ on Mars. The sky really is the limit.\nNavigating an emerging risk landscape As with all seismic leaps forward, great technological power comes with great responsibility. The ethical, security, and privacy implications of AI are something industries—and indeed governments—are beginning to grapple with. But there’s yet to be a clear focus on XR specifically, which, in pushing the boundaries between human and machine, has the potential to invade privacy and damage trust.\nWhat’s more, in the rush to innovate, sustainability initiatives can fall by the wayside. These are resource-hungry technologies and it’s critical that organizations come together to think about creating a responsible ecosystem. Investments need to be made in the right standards, practices, and green technologies, so that XR can be a collaborative development that benefits all.\nRead our 2024 Looking Glass report to dive into the reality behind this virtual world, and learn how you can harness emerging technologies to elevate experiences.\n","id":52,"tag":"xr ; ar ; vr ; mr ; thoughtworks ; ai ; emerging-tech ; ai ; gen-ai ; future ; iot ; technology ; metaverse ; looking-glass","title":"Immersive is the new interactive","type":"post","url":"/post/thoughtworks-immersive-is-the-new-interactive/"},{"content":"Thoughtworks unveils its latest tech trends report, \u0026lsquo;Looking Glass\u0026rsquo;, a comprehensive analysis of emerging technologies designed to empower organizations and leaders in making informed decisions. The 2024 edition of the report, covering over 100 trends, is segmented through five lenses, each offering insights, signals, and strategic recommendations for adoption, analysis, and anticipation.\nKey takeaways: Here are my key takeaways:\n AI\u0026rsquo;s ubiquitous reach - AI is going into every facet of technology, from conversational interfaces to AI-assisted software development and software-defined vehicles. The surge in AI-based investments necessitates establishing harmonious collaboration between humans and AI, emphasizing the need for standardized practices in AI usage. Read more on Lens - AI everywhere. Rise of AI and data platforms - The rise of AI and data platforms signifies a shift towards realizing tangible value from data. Emerging open data sharing standards and evolving infrastructure are indicative of a data-driven future. Read more on Lens - Data and AI platforms. Evolving human-computer interactions - Interactions between humans and computers are evolving towards greater naturalness and intuitiveness. Advancements in XR, GenAI, and computer vision create opportunities for enhanced engagement. 
Enterprise XR stands to benefit from cost and time savings, improved training and onboarding, while consumers explore new dimensions of spatial computing. Read more on Lens - Evolving interactions. Blurring real and virtual realities - The lines between the real and virtual worlds are increasingly blurring, driven by the growing acceptance of artificial or technologically defined systems. This convergence spans autonomous vehicles, drones, robots, and digital twins. Read more on Lens - Physical-digital convergence. Responsible and ethical tech imperative - In an era dominated by AI, XR, GenAI, and the metaverse, the integration of responsible and ethical tech practices becomes paramount. Establishing standards and practices is crucial for the sustained success of technology. Read more on Lens - Responsible tech. For in-depth insights: For a more comprehensive understanding of these trends and recommendations, explore the Thoughtworks Looking Glass and download the full report there. Stay ahead of the curve with Thoughtworks as we navigate the ever-evolving landscape of technology together.\n","id":53,"tag":"xr ; ar ; vr ; mr ; thoughtworks ; ai ; takeaways ; emerging-tech ; avtech ; ai ; gen-ai ; future ; iot ; technology ; metaverse ; looking-glass","title":"Thoughtworks' Looking Glass 2024 - Takeaways","type":"post","url":"/post/thougthworks-looking-glass-2024/"},{"content":"As we say goodbye to 2023 and welcome 2024, it\u0026rsquo;s a time to think about the good and tough parts of the year. Not everything goes the way we want, but we can learn from both the good and bad experiences. I see 2023 as a year of making things better, like fixing the base of a building. I\u0026rsquo;ve picked my top five observations on foundational changes from this year, and some might continue into 2024. Now, it\u0026rsquo;s exciting to see them happen and see how they make a difference.\n🔄 Navigating restructuring and contractions In 2023, industry giants like Google, Amazon, and Microsoft underwent significant contractions and restructuring, and the overall industry faced a substantial loss of 2.5 lakh (250,000) jobs in a year, 50% higher than the previous year. These measures, projected as responses to the prevailing macro-economic situation, also involved cost-cutting in operational expenses, perks, and salary increments. This trend of optimization is expected to persist into 2024.\n However, true optimization extends beyond cost reduction—it involves strategic investment in the right areas. In 2024, organizations are likely to shift towards recognizing this crucial aspect of optimization: aligning investments for long-term success. Those quick to realize this need and adapt will likely thrive, while those resistant to change may face challenges in navigating the landscape beyond 2024.\n 🤖 Shaping the generation AI AI isn\u0026rsquo;t new, but in 2023, a whole new era started—Generation AI. Everything, from products to giants like Google, felt its impact. Microsoft went all-in on the OpenAI episode, \u0026ldquo;.ai\u0026rdquo; domains gained popularity over the traditional \u0026ldquo;.com,\u0026rdquo; and nearly every digital thing now has AI or is assisted by AI, including this article.\nWhat started as generative AI hype is now a reality, profoundly altering the landscape of digital work. Take a look at posts on LinkedIn and other platforms—crafted with emojis, bullets, refined English, grammar, and captivating graphics. 
Even the emails we receive are nicely composed, and we know that it is not because every content editor suddenly acquired these skills; GenAI took their place. Skills that were crucial before might not carry the same weight now, as AI steps in either to replace or enhance them:\n This change is substantial, and in 2024, it will manifest in the day-to-day, challenging individuals to adapt. Those dismissing it as mere hype or a passing trend risk being left behind permanently. Embrace the change, accept, learn, utilize, and challenge established norms.\n 👩‍💻 Reimagining how we work: No going back As 2023 concludes, the notion of reverting to traditional work norms is challenged. Organizations, amidst contractions, attempted a return to the office and may push this further in 2024, while others sought innovative alternatives. Schools adapting to online modes during disruptions exemplify a shift away from pre-COVID practices. Likewise, businesses are integrating remote work as a staple, acknowledging a generation reveling in workation. Flexi work is considered essential for a better DEI strategy.\n The question arises: Is a return to the pre-COVID norm even feasible, or should we embrace a more pragmatic, profound, and productive approach?\n In 2023, conversations stirred around these sentiments, challenging conventional agile methods. Sumeet Moghe\u0026rsquo;s Async-First Playbook and asyncagile.org stand as proof. The Async-First Manifesto prioritizes effective, intentional work with room for experimentation, discarding outdated practices linked to fixed locations and times.\nThis signifies a fundamental paradigm shift that helps us focus on the right work, no matter whether we work from the office or elsewhere. The tussle between office-first, hybrid, and remote-first will continue in 2024, but the real win is a better focus on deep work, steering clear of superficial tasks.\n Clear your calendar, embrace the power of GenAI, and start expressing yourself better. “Don’t be busy in 2024, but go deep!”\n 💰 Measuring tech beyond funding In 2023, the fall of Byju\u0026rsquo;s, once hailed as an edtech success, debunked the myth that a startup\u0026rsquo;s funding dictates the triumph or failure of technology. This setback didn\u0026rsquo;t signal the failure of online education; instead, it ushered in a new era marked by diverse and accessible learning methods. Traditional educational institutions also adapted, offering an array of courses that don\u0026rsquo;t necessitate physical presence, providing individuals with more choices to acquire skills.\nSimilarly, people raise their eyebrows when they hear XR and metaverse, especially with Meta\u0026rsquo;s shift from metaverse emphasis to AI. Despite skepticism, the interconnection between XR, metaverse, and AI remains evident.\n As we approach 2024, expect the unexpected—ideas previously deemed impractical will become commonplace. Widespread acceptance of technologies will manifest in people unknowingly incorporating tech like XR and others into their daily lives, similar to what they are already experiencing - virtual trials, playful camera features, scanning and artificially generating the environment, or simply changing backgrounds on images or while on video calls.\n More voices will join the conversation on technology, discussing use cases, concerns, and navigating beyond the allure of hype or funding. As part of this sprint, I\u0026rsquo;m working on my book, Exploring the Metaverse: Redefining Reality in the Digital Age. 
📚.\n🌐 Embracing open and responsible tech ecosystem A notable trend that unfolded in 2023 was the emphasis on fostering an open and experimentation-friendly environment, exemplified by emerging initiatives like OpenXR, OpenBCI, OpenAI and hundreds of AI models on Hugging Face. Organizations and individuals globally shifted their focus towards open-sourcing; 2023 saw a sharp rise in first-time contributors. A surge of commitment to responsible tech and sustainable development emerged. Notably, products began including green nudges, such as showing carbon emissions alongside flight listings, greener software, and carbon-aware software architecture, solutions, and infrastructure.\n The drive towards ethical and responsible development is anticipated to gain further momentum in 2024. Industries will shift from merely discussing Diversity, Equity, and Inclusion (DEI) to integrating these principles into their core business strategies—moving beyond rhetoric to tangible action. This evolution is poised to create collaborative spaces, where like-minded producers and consumers converge, co-building and aligning their efforts towards ethical practices and sustainability.\n Expect to witness a surge in such partnerships in the coming years. 🌱\nConclusion 2023 has been a year marked by challenges, but within those challenges lie valuable lessons and opportunities for growth. It\u0026rsquo;s a year that shattered myths about foundations and traditions, urging us to think beyond the established norms. In this dynamic landscape, nothing remains fixed, not our beliefs, practices, or ways of working. The key is to evolve continuously to meet the demands of the time.\nAs we transition into 2024, it presents a unique opportunity to witness the impact of the changes set in motion in 2023. It\u0026rsquo;s a chance to course correct, align with evolving needs, and embrace the transformative potential that lies ahead. Let\u0026rsquo;s navigate the future with adaptability, openness, and a commitment to continuous improvement. 🌟🚀\n Embrace Change, Seek Growth\n ","id":54,"tag":"software ; reflection ; process ; organization ; takeaways ; agile ; technology ; success ; change ; motivational ; development ; anticipation ; AI ; async-agile ; open-source ; metaverse ; XR ; sustainability ; DEI","title":"Foundational shifts: Reflecting on 2023 and anticipating ahead","type":"post","url":"/post/2023-reflections-2024-anticipation/"},{"content":"Welcome to Internet 3.0, the transformative era where reality takes on new dimensions, driven by eXtending Reality (XR) at its core, along with AI, Cloud, IoT, and more. This evolution will be the product of collaborative efforts involving academia, government, industry, and individuals. I recently had the privilege of participating as a panelist at an event hosted by the pioneers shaping the future of the internet. In this article, I share my insights and a summary of this remarkable event.\nFICCI XROS Fellowship Program The Federation of Indian Chambers of Commerce \u0026amp; Industry (FICCI) is the voice of India\u0026rsquo;s business and industry for policy change. The FICCI XROS Fellowship Program is curated to empower Indian XR developers through stipends, mentorship, and open-source project contributions. 
With a focus on AR, VR, MR, 3D modeling, and development, the program provides a platform for 100 shortlisted developers (out of 10,000+ applicants) to collaborate on projects with industry mentors.\nThe initiative aims to cultivate digital public goods and advance careers in XR technology. It is supported by the National e-Governance Division (NeGD) under the Ministry of Electronics and IT, Govt. of India, as the technical partner, along with the Office of the Principal Scientific Adviser to the Government of India (PSA) as the knowledge partner. It is implemented by Reskilll and well supported by Meta.\nThe Event | XROS Fellowship Summit The XROS Fellowship Summit is the graduation day for the selected 100 developers, awarding the top 10, along with keynotes, panel discussions, and networking. Here is how the event went, and my takeaways from it.\nXROS Fellowship Journey Dr. Ashish Kulkarni, Chair of FICCI, AVGC – XR Forum and Founder of Punneryug Artvision Pvt. Ltd., initiated the event with compelling insights into the XROS Fellowship journey. He highlighted the program as a testament to the broader scope of XR beyond gaming and entertainment, emphasizing its reach beyond high-end global cities. Of an impressive 10,000 applicants, notably, 65% hailed from smaller Indian cities.\n Dr. Kulkarni discussed the program\u0026rsquo;s role in shaping XR policies and converging science, art, and technology, aligning with AVGC (Animation, Visual Effects, Gaming, and Comics) policies.\nOpen Source and Open Science Prof. Ajay Kumar Sood, Principal Scientific Adviser, Govt. of India, shed light on the pivotal role of the open-source and open science ecosystem in India\u0026rsquo;s digital journey. Emphasizing its potential for co-creation and nation-building, he highlighted initiatives such as One Nation One Subscription, i-STEM and the patent library, Patestate.\n Dr. Sood showcased how the open science model is fostering contributions in biodiversity, astronomy, and more, even from non-specialized individuals—a sentiment reminiscent of Thoughtworks\u0026rsquo; involvement in projects like the Thirty Meter Telescope and E4R collaborations.\nEncouraging a departure from traditions, Dr. Sood touched upon transformative initiatives like XTIC and the establishment of a global metaverse corridor, aligning with the vision of \u0026ldquo;Vasudhaiva Kutumbakam\u0026rdquo;.\nOpen Innovation, AI and Metaverse, Digital Public Goods Mr. Shivnath Thukral, Director and Head of Public Policy - India at Meta, delved into the transformative power of open innovation and its role in fostering diversity and inclusion. Highlighting the belief that people are the driving force, he emphasized technology as the ultimate democratization tool. Thukral outlined Meta\u0026rsquo;s commitment to open research and innovation, aligning with programs such as Manthan and contributing significantly to ecosystem development.\nContinuing the discussion, Mr. Sunil Abraham, Public Policy Director at Meta, showcased Meta\u0026rsquo;s global initiatives, including participation in pledges like the Open Covid Pledge and Low Carbon Patent Pledge. Notable contributions to the open-source community, such as PyTorch, React, and more, were highlighted. Abraham delved into recent MetaAI projects like the Segment Anything Model, ImageBind and Massively Multilingual Speech, with over 650 models openly available on GitHub and Hugging Face. This collaborative and open approach is steering us toward the next frontier of the internet—the metaverse.\n
Mr. Abhishek Singh, IAS, President \u0026amp; CEO, National e-Governance Division (NeGD), Ministry of Electronics and Information Technology (MeitY), Government of India, also spoke on enabling digital sovereignty through the creation of digital public goods.\nPanel Discussions - Open Source, XR, and Internet 3.0 I had the privilege of participating as a panelist in a discussion skillfully moderated by Dr. M Manivannan, Professor and Head of XTIC at IIT Madras. Dr. Manivannan eloquently introduced the concept of the immersive internet, and I emphasized that openness is a key characteristic of the next internet. In this evolving landscape, where reality loses its traditional meaning, and the boundaries between physical and artificial realities blur, the responsibility of shaping Internet 3.0 lies not only with creators but also with consumers and governing bodies. Together, we are architects of this emerging metaverse.\nAs the XR Practice Leader, I highlighted the need for maturing development practices in XR, touching on tools like Arium and Selenium for automation testing. I emphasized Thoughtworks\u0026rsquo; community-driven culture, keeping us focused on open source as a collective responsibility — where we are not just consumers but active producers, jointly solving challenges.\nMr. Sahil Ahuja, CTO of GMetri Technologies Private Limited, shed light on WebXR within the OpenXR ecosystem, managing concerns related to interoperability and standardization. Mr. Milind Manoj, CEO of Pupilmesh, focused on open-source XR hardware development, addressing privacy and security concerns at the foundational layers. Niloshima Srivastava, Engineering Manager at Tata 1mg, shared her experience in building solutions at the scale of India, highlighting how open source played a crucial role and addressing associated challenges. The discussion was well concluded by Dr. Manivannan.\nPanel Discussion - Role of XR Technologies in Today\u0026rsquo;s Businesses A captivating panel discussion unfolded, featuring esteemed panelists Mr. Ajit Padmanabh, Mr. Manas Bairagi, Dr. Padma Priya PV, and Dr. Rashmi Thakur, who delved into the pivotal role of XR in today\u0026rsquo;s businesses. The engaging session was skillfully moderated by Mr. Dhiraj Gyani.\n Felicitation of Top 10 and Graduation Ceremony Recognizing the top 10 participants is undoubtedly a challenging task, and I extend my gratitude to the program mentors and evaluators for their dedicated efforts. It\u0026rsquo;s evident that every participant, not just the top 10, will play a significant role in shaping the future of XR in India.\nCongratulations to all for reaching this milestone!\nShowcase, Lunch and Networking The showcase featured compelling projects by XROS Fellowship participants, guided by esteemed partners. Along with a delicious lunch, it provided a fantastic opportunity for networking and collaboration. Engaging discussions unfolded with leaders from industry, government, academia, and emerging talents.\nA special appreciation goes to the organizers and the team, Rohit Sardana and Garima Singh, for orchestrating this enriching event. Heartfelt thanks to Vanya for the nomination as a panelist. 
It was truly an inspiring and collaborative experience.\nKeep learning, Keep sharing, Keep growing!\n","id":55,"tag":"xr ; ar ; vr ; mr ; event ; speaker ; open source ; talk ; metaverse ; ficci ; university ; students ; startup ; panelist ; learnings ; meity ; summit ; xros","title":"Panelist at FICCI XROS Fellowship Summit","type":"event","url":"/event/ficci-xros-fellowship-summit/"},{"content":"I\u0026rsquo;ve reached a milestone: 100 articles and 50 events at thinkuldeep.com. This experience has improved my writing, boosted my confidence, and fueled my passion for self-expression. I\u0026rsquo;m thankful to readers, authors, speakers, and mentors who inspired me.\nI want to share common obstacles I overcame, hoping to inspire others on their creative journeys.\n1. I am not a great writer/speaker, I can\u0026rsquo;t express well Blocker #1 is doubting our abilities; it is natural when we haven\u0026rsquo;t explored our potential. The phrase \u0026ldquo;I am not\u0026rdquo; can be a blocker; we often underestimate ourselves. As I mention in my post on uncovering human potential, there\u0026rsquo;s untapped potential within us.\nTo improve self-expression, start with writing. It offers time for reflection, as discussed in my article on effective communication.\nHere are some compelling reasons why writing can be a game-changer:\n A learning opportunity: Writing serves as a remarkable tool for learning and self-improvement. Sharing your insights: It\u0026rsquo;s a powerful means of sharing your thoughts and experiences with a broader audience. Connecting with others: Writing fosters connections with people who resonate with your words. Building your brand: It\u0026rsquo;s a cornerstone for building a personal brand and showcasing your expertise. Expanding your network: Writing can help you expand your professional network. Advancing your career: It can be a catalyst for career growth and personal development. Entrepreneurial ventures: For entrepreneurs, it\u0026rsquo;s a valuable asset for business growth. Enhancing life: Ultimately, it contributes to building a more fulfilling life. Most professionals have basic reading, writing, and speaking skills. Start by jotting down your thoughts—no need for a novel. It\u0026rsquo;s like responding to emails or general conversation, a habit discussed in my post on the importance of habits.\nReframe your mindset with, \u0026ldquo;I am a writer or a speaker, and I can express well.\u0026rdquo; This belief makes writing and expressing a natural part of your journey.\n2. What I know is trivial to share with others. Blocker #2 is a common hurdle for many aspiring writers and speakers: the belief that what they know is too trivial to share with others. It\u0026rsquo;s essential to recognize that this is a limiting mindset that can hold you back from realizing your potential.\nHere are some strategies to overcome this blocker:\n Value your perspective: Your experiences, even seemingly ordinary, are unique and valuable to others. Find your niche: Focus on a specific area of expertise to boost confidence in your knowledge. For example, I started mostly with technology topics, such as Java, XR/metaverse, and IoT. Be your audience: What\u0026rsquo;s common to you can be new and valuable to your audience. It is better to start with yourself as the audience. Share personal stories: Humanize your content by sharing your practical experiences. I shared change stories, takeaways, and my thoughtworkings that became motivational. 
Collaborate and learn: Engage with peers to gain confidence and fresh insights. I collaborated with the community as a speaker, mentor, and judge at various hackathons and guest lectures. Keep learning: Continuous learning enhances what you have to share. Seek feedback: Start small, gather feedback, and refine your message. Embrace vulnerability: Sharing your challenges can resonate with others. Shift mindset: Recognize that your insights can impact others positively. Set realistic goals: Begin with achievable goals and build your confidence. Remember, the world is full of people eager to learn, grow, and connect. By overcoming the belief that your knowledge is trivial, you open the door to sharing your unique perspective and making a positive impact on others. Your insights, experiences, and wisdom have value, and it\u0026rsquo;s worth sharing them with the world.\n3. I don\u0026rsquo;t have time Blocker #3, the belief that you don\u0026rsquo;t have time, is a challenge many of us face in our busy lives. However, it\u0026rsquo;s crucial to recognize that time management is about making choices and setting priorities. Here\u0026rsquo;s how you can overcome this blocker:\n Prioritize passion: Allocate time for what matters to you, just as you would for work. Identify time wasters: Cut down on unproductive habits to free up valuable hours. Delegate and outsource: Hand off tasks effectively when possible to focus on creativity. Plan effectively: Organize your day for efficiency and reduced stress. Prevent, don\u0026rsquo;t firefight: Proactive problem-solving saves time. Focus on what matters: Distinguish important from urgent tasks. Improve communication: Clearer communication reduces wasted time. Time blocking: Allocate dedicated time for focused and deep work. Reflect and adjust: Regularly assess and refine your schedule. Efficient time management helps you pursue your creative passions effectively. It\u0026rsquo;s about maximizing the time you have, not creating more of it.\n4. I have many ideas but where to start Blocker #4: Having many ideas but not knowing where to start can be overwhelming. Here are strategies to overcome it:\n Create a playground: Establish a space for experimentation; I set up thinkuldeep.com as my playground. Build on small foundations: Start small and expand on your initial ideas gradually. Begin with a small audience: Start with a supportive group before targeting a larger audience. Start a blog, YouTube channel, or a podcast: Choose a medium you\u0026rsquo;re comfortable with and share your ideas. Don\u0026rsquo;t overthink publicity: Don\u0026rsquo;t worry too much about how your work will be received initially. Submit your work: Share your content on other platforms for publishing or speaking to reach a broader audience. Embrace imperfection: Accept that early work may not be perfect; it\u0026rsquo;s a stepping stone. Stay consistent: Maintain regular content creation to see gradual progress. Remember, every journey begins with a single step. Start small and build your ideas into meaningful creations over time.\n5. I don\u0026rsquo;t like reading or listening, but want to quickly learn writing and speaking. While there are no shortcuts to mastering writing and speaking, you must be a good reader and listener first! I have read 30+ books and listened to many more in recent years to make this creative journey possible.\nYou can make the process more engaging and efficient:\n Choose your interests: Focus on topics that genuinely interest you. 
Take notes: Capture key points and ideas to reinforce learning. Reflect and summarize: Think about what you\u0026rsquo;ve learned and summarize it as takeaways. Practice regularly: Write and speak often to build your skills. Explore different styles: Analyze and experiment with diverse writing and speaking styles. Stay patient and persistent: Growth takes time and effort; don\u0026rsquo;t give up. Leverage technology: In the GenAI era, don\u0026rsquo;t worry about grammar, vocabulary, and pronunciation, but don\u0026rsquo;t overuse these tools; keep your creativity. Stay informed: Keep up with industry trends and best practices. Remember, it\u0026rsquo;s a journey worth taking for better communication skills.\nIn this article, I reflect on my journey of self-expression through writing and speaking. I celebrate the achievement of 100 articles and 50 events on my platform, thinkuldeep.com. Throughout this journey, I have grown in confidence and skill, thanks to the support of my audience and mentors. I share insights on overcoming common creative obstacles and aim to inspire others on similar paths.\nNext, I am on a path to becoming a professional author, working on a book, Exploring the Metaverse: Redefining Reality in the Digital Age, which is soon to become a reality.\n","id":56,"tag":"communication ; motivational ; consulting ; blogs ; learnings ; takeaways ; talks ; tips ; writing ; reading ; listening ; general ; selfhelp ; change ; habit ; leadership","title":"thinkuldeep.com | 100 Articles and 50 Events","type":"post","url":"/post/100-articles-50-events/"},{"content":"The XConf XConf is Thoughtworks\u0026rsquo; annual technology event created by technologists. It is designed for technologists who care deeply about software and its impact on the world.\nLast year, I presented on the topic of AI Assisted Content Pipeline for XR. This year, we took it a step further by hosting an immersive experience booth. We featured cutting-edge AR/VR devices and engaged in in-depth discussions about various industrial applications. It was truly enriching to interact with participants, share our insights, and absorb their perspectives. The event was brimming with informative talks, hands-on workshops, and captivating experience zones focused on AI, ML, Software Defined Vehicles, and XR.\nNow, let me summarize the key takeaways and highlights from this remarkable event.\nTop tech trends Jyothi Pijari set the stage for the event, introducing its overarching theme: \u0026ldquo;Bringing Together Business x Technology.\u0026rdquo; The event then gained momentum as Bharani Subramaniam and Vanya Seth delivered an engaging session on intriguing tech trends.\nThe session covered diverse tech trends, from product development challenges to cost-effective observability strategies like tail sampling. It stressed the importance of secure data storage with automatic encryption, recognizing data\u0026rsquo;s role in product development and purchasing decisions. 
The session also highlighted the significance of developer experience in measuring productivity and explored emerging trends like zero trust security, web assembly, synthetic data, and self-hosted LLMs in the tech landscape.\nKishore Nallan, CTO \u0026amp; Co-Founder, Typesense \u0026amp; Wreally, continued the discussion with a very informative session, \u0026ldquo;The unreasonable effectiveness of showing up every day\u0026rdquo;.\nTech tracks The event offered attendees the opportunity to explore three distinct tech tracks: Emerging Tech, Digital Transformation, and Data \u0026amp; AI, each featuring fascinating sessions.\nArijit Debnath captivated the audience with a session on Spatial Interface Design, skillfully weaving in the intriguing story of Flatland to illustrate his points.\n The event boasted numerous other exceptional sessions, promising a wealth of knowledge and insights for all attendees.\nExperience zones The event went above and beyond by showcasing immersive experience zones where we demonstrated cutting-edge AR/VR devices and their industrial applications.\nLenovo\u0026rsquo;s ThinkReality Enterprise XR platform took center stage, along with ThinkReality XR devices like A3 and VRX, garnering significant attention within the tech community. These devices seamlessly integrated with OpenXR using Qualcomm\u0026rsquo;s Snapdragon Spaces SDK. The developer community was particularly impressed with ThinkReality VRX\u0026rsquo;s passthrough feature and wider field of view.\nThe experience zones didn\u0026rsquo;t stop there – they also featured devices from Microsoft and Meta, in addition to standard mobile-based XR and webXR experiences. These zones provided a valuable opportunity for participants to engage with us, share their insights, and gain a deeper understanding of these cutting-edge technologies.\nFurthermore, the Software Defined Vehicles experience zone proved to be a major attraction, allowing participants to get a firsthand look at the future of mobility.\nAshish Patil showcased the capability to host ML models on edge devices. More work from him can be found at ashishware.com\nAuthor\u0026rsquo;s Corner One noteworthy addition this year was the \u0026ldquo;Author\u0026rsquo;s Corner\u0026rdquo;. The booth proudly featured books authored by ThoughtWorkers, offering attendees a unique opportunity to engage with the authors themselves. 
Visitors could have their books signed and take advantage of special discounted offers.\n It was a delightful experience that brought together literature enthusiasts and the authors behind these insightful works.\nWorkshops The event hosted fantastic workshops that provided invaluable learning opportunities.\nRaju Kandaswamy led a workshop on \u0026ldquo;Unleashing the Power of Generative AI with Self-Hosted LLMs\u0026rdquo; while Harish Kumar Balaji, Janani Ramanathan, and Gnana Ganesh conducted a workshop on \u0026ldquo;Reinforcement Learning for the Real World: A Gamified Approach with Deepracer.\u0026rdquo;\nThe hands-on experiences in these workshops left participants with a profound sense of knowledge and a rewarding learning experience.\nFull of life The event was brimming with vitality, excitement, and enjoyment, complemented by delectable food and snacks.\n Wishing everyone continued joy in their learning journeys!\n","id":57,"tag":"xr ; ar ; vr ; mr ; thoughtworks ; event ; takeaways ; xconf ; avtech ; ai ; gen-ai ; conference","title":"XConf-2023 - Thoughtworks India - Takeaways","type":"event","url":"/event/thoughtworks-xconf-2023/"},{"content":"Communication is a puzzle - will others understand us as we hope? What\u0026rsquo;s the secret to effective communication? In this article, I share my top five lessons from books, personal exploration, and experts in the field.\nI\u0026rsquo;m still learning this art, and it\u0026rsquo;s ever-evolving. I\u0026rsquo;ve read a couple of books and followed communication pros; they shed light on how crucial communication is, especially in our modern ways of working. But it\u0026rsquo;s clear even they\u0026rsquo;re always learning. Communication is complex, influenced by what we say, how we say it, who we\u0026rsquo;re talking to, and the context. The listener\u0026rsquo;s side matters too, with their interests and personality in play.\nLet\u0026rsquo;s dive into the top five takeaways that simplify effective communication.\n1. Communication: a two-way street Communication isn\u0026rsquo;t just about talking or writing; it\u0026rsquo;s a partnership between the person delivering the message and the one receiving it. Think of it like a dance, where both partners need to move together. The place where the communication happens is also important.\nEver noticed that what works with one person might not work with another, or something that didn\u0026rsquo;t make sense before suddenly clicks? Well, the book HBR\u0026rsquo;s 10 Must Reads on Communication explains that a person\u0026rsquo;s characteristics, covering how they act and make decisions, can affect how they understand what you\u0026rsquo;re saying. So, it\u0026rsquo;s not just about what you say; it\u0026rsquo;s about who says it, how they say it, and where they say it. In other words, for your message to be effective, it has to match what the other person expects.\nEffective communication isn\u0026rsquo;t the job of just one person. It\u0026rsquo;s a team effort between the person sharing the information, the person getting it, and the situation where it happens. To make communication work better, it\u0026rsquo;s important to share who you are, and be clear about what you want to say and what you expect. Before you start talking formally, it\u0026rsquo;s a good idea to ask the other person what they expect. 
This way, you both can set some ground rules and plan how to talk better together.\n2. Communication: more than just face-to-face In the past, people used to travel a lot to meet face-to-face, thinking it was the best way to communicate. But as time passed, we got better tools for collaboration. These tools made it easy to talk to each other right away, even if we were far apart. This became the usual way to communicate.\nBut something else has been happening too. Asynchronous communication has also grown, evolving onward from email. This means we can talk to each other without worrying about where we are, what time it is, or even what language we speak. We can take our time to think before we reply, instead of saying the first thing that comes to mind.\nSumeet Moghe, The Async Guy (TAG), talks about how we should embrace async communication as the way of the future. It\u0026rsquo;s about being flexible and using async as our first choice for communication, only using real-time talk when we really need it. He even came up with a set of rules, called an async-first manifesto, for making software in a more flexible way. Don\u0026rsquo;t miss The Async-First Playbook, a must-read book.\n3. Communication: not a one-size-fits-all The book Smart Brevity: The Power of Saying More with Less takes a deep dive into how we can be concise and courageous in our written and spoken words; it also covers how to make our meetings more effective. We can easily find similar sources from experts that emphasize the importance of making our language inclusive, clear, and to the point. Sumeet Moghe, who continually offers communication tips for improving our work methods, recently shared brevity tips for effective audio and video communication.\nAlong with these, remember one thing: one approach doesn\u0026rsquo;t suit everyone. Effective communication needs to be personalized. Some folks prefer a quick and to-the-point summary, while others enjoy storytelling and prefer not to reveal everything right away. Business communications and chats with friends and family, they\u0026rsquo;re not the same. Not everyone wants to read something in just a minute, and similarly, not everyone gains knowledge solely from short videos. Some folks love well-crafted books, while others prefer audio and video content.\nIn essence, communication isn\u0026rsquo;t a one-size-fits-all concept. It\u0026rsquo;s about finding the style that works best for you and the parties involved.\n4. Communication: a learnable skill Communication is not an inborn talent; it\u0026rsquo;s a skill that anyone can develop. Like any skill, it gets better with practice and by forming good habits. To become a proficient communicator, it\u0026rsquo;s essential to establish some smart guidelines for keeping things concise and regularly practice reading and writing.\nDeveloping these skills requires discipline. A valuable resource in this journey is the book Atomic Habits, see my takeaways here. It offers insights into how to build habits effectively, which can be particularly useful for honing your communication skills.\nBecoming an effective communicator involves one key action: communicate, and then communicate some more. To progress, you need to measure the effectiveness of different communication methods, refine your brevity, and consciously work towards improvement. With diligence and dedication, you\u0026rsquo;ll surely make significant strides in your communication prowess.\n5. 
Communication: the unveiling of assumptions Lastly, it\u0026rsquo;s crucial to recognize that assumptions often lie at the heart of communication breakdowns. We tend to make numerous assumptions about how we convey information and how it\u0026rsquo;s interpreted by others. Conversely, those on the receiving end also harbor their own assumptions and biases, both about us and the message we\u0026rsquo;re delivering. What makes this particularly challenging is that these assumptions are frequently left unspoken, leaving communication to rely on guesswork.\nTo foster effective communication, it\u0026rsquo;s paramount to bring these assumptions to the forefront before diving into the actual exchange of information. This proactive step can eliminate misunderstandings and pave the way for more meaningful and accurate communication.\nIn conclusion, as we navigate the complex terrain of effective communication, it\u0026rsquo;s essential to remember that it\u0026rsquo;s not just about the words we use but also about understanding the perspectives and preferences of those we communicate with. By embracing flexibility, curiosity, and a commitment to continuous learning, we can become proficient communicators capable of fostering meaningful connections and achieving our goals.\n","id":58,"tag":"communication ; motivational ; consulting ; effective ; insights ; learnings ; takeaways ; communication skills ; tips ; writing ; reading ; listening ; general ; selfhelp ; change ; habit ; bookreview ; leadership","title":"Effective communication: five key insights","type":"post","url":"/post/effective-communication-5-insights/"},{"content":"🤖 Rapid Prototyping Community Rapid Prototyping is an internal community @Thoughtworks, focused on building a culture of rapid prototyping and innovation. They organise various learning events, and I was invited to speak at one of their events.\nI talked about the importance of holistic product development and how modern products go beyond software. I also discussed the impact of technological evolution on products, and how to stay ahead of the curve.\n🌟 Holistic product development: Beyond software horizons In this enlightening talk, we delved into the expansive world of holistic product development, extending far beyond the boundaries of software. Here are the key highlights:\n Evolving product development: We traced the evolution of product development through various industrial revolutions and the current digital transformation. While the term \u0026ldquo;digital\u0026rdquo; is now synonymous with modern products, we emphasized that product development encompasses much more.\n Navigating the ecosystem: Beyond code, we explored the intricate ecosystem surrounding product development. This ecosystem integrates aspects such as business, technology, people, processes, and the environment, enabling the creation of responsible and sustainable products.\n Practical insights and pitfalls: Drawing from our experience working across diverse products and teams, we shared practical insights and common pitfalls in the product development journey.\n Future trends: The way we communicate and collaborate will go beyond traditional boundaries, devices, tools and techniques. We discussed emerging tools and techniques to stay prepared for what lies ahead.\n 📚 Resources Download PDF.\n Keep sharing! 
Learning matures only by sharing it with others.\n","id":59,"tag":"xr ; thoughtworks ; event ; speaker ; product development ; talk ; product management ; community ; learnings","title":"Speaker at Rapid Prototyping Community - Products beyond software horizons!","type":"event","url":"/event/products-beyond-software-horizons/"},{"content":"We touched upon Generative AI while talking about the AI assisted content pipeline for XR at Thoughtworks XConf 2022. Generative AI is capable of producing outputs that closely resemble those created by humans. It is evolving in multiple forms; one of them is ChatGPT, a useful tool buried beneath the hype. It was touted as a Google killer, and it disrupted the way we search, the way we communicate with machines, and the way machines can respond. Focus is shifting toward building systems/tools to generate the right inputs, called prompts, to get acceptable responses from these Generative AI systems.\nChatGPT and its rival Google BARD are based on Large Language Models (LLMs). While they aim to revolutionize the text-based communication space, there have been advancements in text-to-image latent diffusion models (LDMs), enabling another disruption in the 3D content generation space. For example, DALL-E from OpenAI and Stable Diffusion from Stability AI open up great possibilities in hitherto untapped areas.\nThese LDMs are capable of converting natural language inputs into anything from photorealistic images to ultra-realistic art. The natural language inputs are referred to as “prompts”, and composing the nomenclature that maps to the model\u0026rsquo;s latent space is what we call “prompt engineering”.\nLDM\u0026rsquo;s building blocks An LDM\u0026rsquo;s building blocks are training (or learning) and inference.\nLearning In the learning phase, noise generated with a mathematical model, such as a Gaussian, is blended step by step into an image paired with its latent text description, and a neural network is trained on this process. This is known as a forward pass. In this process, the neural network models what a latent space description may look like in the form of a very noisy image representation. Figure 1: Forward pass → Photo blur to noise\nInference In the inference phase, the learning is applied by a reverse pass. In the reverse pass, random RGB noise is generated using sampling techniques such as Euler or LPMS. Then the generated noise is denoised step by step. In each step, the AI will try to bring the latent space description provided in the prompt into the image by denoising. Typically, within 10-15 steps the image will have most of the features described in the prompt. Every additional step will bring more clarity and detail into the image. Figure 2: Reverse pass → Noise to photo for the prompt “a red rose”\nKnowing that, let’s see how we can generate 3D models with text inputs.\nCreating 3D content with generative AI Creating 3D models using LDMs involves four fundamental steps. Here, we discuss how you can optimize each step to generate the output you need.\nFigure 3: Workflow for creating 3D content with LDMs\nPrompt Engineering for Photography Creating anything with generative AI begins with the text input, called a prompt. Writing clear and specific prompts will generate better outputs. To do that in LDMs, it’s important to bring best practices from photography to the prompt engineering nomenclature.\nIn LDMs like Stable Diffusion, the latent space description is formed by tokens; the sketch below shows one such tokenized prompt in action, before we catalogue the token types.
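As a minimal, hedged sketch of the reverse pass in practice, here is prompt-driven image generation with the open-source diffusers library. The checkpoint, scheduler, and parameter values are illustrative assumptions rather than the exact setup behind the figures in this article; the sampler, steps, and CFG scale it exposes are explained under “Photo generation” below.

```python
# Hedged sketch: text-to-image with Hugging Face diffusers.
# The checkpoint, scheduler, and parameter values are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any Stable Diffusion 1.5-class checkpoint
    torch_dtype=torch.float16,
)
# Swap in Euler sampling, one of the techniques named above.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

prompt = ("an Indian girl jumping in the middle of a flower garden, "
          "daylight, sun rays, ultra high definition, DSLR photo")
image = pipe(
    prompt,
    num_inference_steps=30,  # each denoising step adds clarity and detail
    guidance_scale=7.5,      # CFG scale: lower values give the AI more freedom
).images[0]
image.save("flower_garden.png")
```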
A token can be a simple English word, a name or any number of technical parameters. Here are some tokens you can use to produce good-looking images.\nPhotography tokens Here is an indicative list of tokens with examples:\n \u0026lt;subject/object description with pose\u0026gt; - An “Indian girl jumping” \u0026lt;subject alignment\u0026gt; - An Indian girl jumping, “centered” \u0026lt;detailing\u0026gt; - An Indian girl jumping in the middle “of a flower garden” \u0026lt;lighting\u0026gt; - An Indian girl jumping in the middle of a flower garden “daylight, sun rays” \u0026lt;resolution\u0026gt; - An Indian …, “Ultra high definition” \u0026lt;camera angle\u0026gt; - An Indian …, “Aerial Shot” \u0026lt;camera type\u0026gt; - An Indian …, “DSLR photo” \u0026lt;lens parameters\u0026gt; - An Indian …, “f/1.4” \u0026lt;composition\u0026gt; - An Indian … “at the edge of a lake” \u0026lt;fashion\u0026gt; - An Indian … “south indian clothing” \u0026lt;subject size\u0026gt; - An Indian … “close-up shot” \u0026lt;studio setting\u0026gt; - An Indian … “Studio lighting” \u0026lt;background\u0026gt; - An Indian … “, background rocky mountain range” Art tokens When used with the photography tokens, the following tokens can shape the artistic direction of the output: \u0026lt;art medium\u0026gt;, \u0026lt;color palette\u0026gt;, \u0026lt;artist\u0026gt;, \u0026lt;stroke\u0026gt;, \u0026lt;mood\u0026gt;, \u0026lt;costume\u0026gt;, \u0026lt;make/material\u0026gt;\nPhoto generation In addition to tokens and nomenclature, the generative AI model’s inference (of input) is influenced by the following four parameters:\n The text prompt The sampling method used for noise generation (Euler, LPMS or DDIM) The number of steps (ranges from 20 to 200) The CFG (classifier free guidance) scale. This is a scale that sets the extent to which the AI will adhere to a given text prompt. The lower the value, the more “freedom” the AI has to bring in elements further from the prompt Here’s what the AI produced for the following prompt:\n 3d tiny isometric of a modern western living room in a (((cutaway box))), minimalist style, centered, warm colors, concept art, black background, 3d rendering, high resolution, sunlight, contrast, cinematic 8k, architectural rendering, trending on ArtStation, trending on CGSociety\n Photo generated by DreamShaper (Stable diffusion 1.5) 3D synthesis Once we have the photograph, the next step is to create continuous volumetric scenes from sparse sets of photographs/views. The following are a few approaches.\nNeRF It is a state-of-the-art technique for synthesizing novel views of a scene from a sparse set of input images. We used Google’s Dream Fusion to initiate a Neural Radiance Fields (NeRF) model with a single photograph. Here’s a continuous rendering of a scene from a NeRF constructed from a few photographs/views, with depth occlusion even in a complex scene.\nDream Fusion takes NeRF as a building block and uses a mathematical process called probability density distillation to perfect the initial 3D model formed by a single photograph. It then uses gradient descent to adjust the 3D model until it fits the 2D image as closely as possible when viewed from random angles. Probability density distillation leverages the knowledge learned from the 2D image model to improve the creation of the 3D model. The output from this step is a point cloud.\nMonocular depth sensing Another approach is constructing 3D models using monocular depth sensing models. The first step in this approach is depth estimation using the monocular photo. The next step is to use a 3D point cloud encoder to predict and correct the depth-shift to construct a realistic 3D scene shape. This approach is faster and can be executed even on low-end hardware, but the constructed 3D model may be incomplete.
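To make the depth estimation step concrete, here is a hedged sketch using the openly available MiDaS model via torch.hub. The DPT_Large variant and the file names are assumptions for illustration, and the point cloud encoder described above is a separate step not shown here.

```python
# Hedged sketch: single-photo depth estimation with MiDaS via torch.hub.
# Model variant and file names are illustrative assumptions.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large")  # downloads the model
midas.eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

img = cv2.cvtColor(cv2.imread("photo.png"), cv2.COLOR_BGR2RGB)
batch = midas_transforms.dpt_transform(img)  # resize and normalize for the model

with torch.no_grad():
    prediction = midas(batch)  # relative (inverse) depth map
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze()

# The depth map can now be back-projected into a rough point cloud; a learned
# point cloud encoder, as described above, would correct the depth-shift.
cv2.imwrite("depth.png", (depth / depth.max() * 255).byte().cpu().numpy())
```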
3D mesh model The NeRF model constructed in the previous step has a point cloud model of the scene. This point cloud is meshed using the Marching Cubes or Poisson meshing technique to produce the mesh model. The texture for the mesh model is generated from the RGB color values of the point cloud.\n3D mesh model based on the photograph generated above\nIn a noise-to-photo generation model like Stable Diffusion, users have little control over the generation process. Despite keeping parameters identical, any given prompt can generate a new variant of the image each time you try. To enhance an existing photograph without losing its shape and geometry details, ControlNet models are helpful.\nControlNet uses typical image processing outcomes such as Canny, Hough, HED and depth to preserve the shape and geometry information during the generation process. It can boost the productivity of content designers in iterating over combinations of styles, materials, lighting, etc.\nVariants produced using a ControlNet model preserving the shape, geometry and pose
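To sketch how such a shape-preserving generation might look in code, here is a hedged example of a Canny-edge ControlNet with the diffusers library. The checkpoint names, file names, and prompt are illustrative assumptions, not the exact pipeline behind the variants shown above.

```python
# Hedged sketch: shape-preserving variants with a Canny ControlNet (diffusers).
# Checkpoints, file names, and the prompt are illustrative assumptions.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

# 1) Extract Canny edges from the source image to lock its shape and geometry.
source = np.array(Image.open("living_room.png").convert("RGB"))
edges = cv2.Canny(source, 100, 200)
control = Image.fromarray(np.stack([edges] * 3, axis=-1))

# 2) Restyle the scene while the edge map constrains the layout.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

variant = pipe(
    "a modern western living room, warm colors, 8k architectural rendering",
    image=control,            # the Canny map preserves shape and geometry
    num_inference_steps=30,
).images[0]
variant.save("living_room_variant.png")
```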
Latent diffusion models and generative AI can transform not only XR applications but also prove useful in segments like media/entertainment, building simulation environments for autonomous vehicles and so on.\nConclusion The realm of 3D creation is undergoing a remarkable transformation. The paradigm shift is granting unrestricted opportunities for individualism while simultaneously streamlining the creative process, eliminating challenges centered on time, budgets and laborious manual work.\nWe expect generative AI-led 3D modeling to forge ahead and explore innovative functionalities. 3D authoring could be an effortless experience where people, irrespective of budgets and industry, could unleash their creative potential.\nA variation of this article is originally published at Thoughtworks Insights, co-authored by Raju Kandaswamy\n","id":60,"tag":"xr ; ar ; vr ; mr ; metaverse ; evolution ; ai ; content management ; technology ; thoughtworks ; insights ; xr content ; xconf","title":"Generative AI for 3D content in XR and beyond","type":"post","url":"/post/generative-ai-for-3d-content-in-xr-and-beyond/"},{"content":"Thoughtworks is celebrating its 30th year in the industry, and I have completed five years here. To celebrate this, I have compiled thoughts from thoughtworkers and published them as an ebook. I am grateful for the wealth of knowledge and experiences shared by my colleagues. These thoughts have resonated with me and shaped my journey within the organization and life.\nI am excited to launch the ebook with this post. I believe this compilation of thoughts will not only serve as a personal reflection but also as a valuable resource for others on their own professional journeys. May these guiding thoughts continue to inspire and empower individuals to make a positive impact in the world of technology.\n📖 \u0026ldquo;My thoughtworkings\u0026rdquo; - How did it start? 📖 It all began with a LinkedIn post, and over the past 2 months, I have followed the trail of 30 guiding thoughts. Looking back at my five years here, I recall the captivating conversations with my colleagues. Some first-time interactions still linger in my mind, while others have grown even stronger over time through our collaboration.\n These 30 individuals are not the only contributors to my journey; they represent the essence of a promise kept for \u0026ldquo;30\u0026rdquo;.\n🌟 Five categories of thoughtworkings 🌟 30 thoughts from 30 thoughtworkers, and each thought is a gem.\nAs I celebrate my own five-year journey, I categorize these inspiring thoughts into five themes as follows:\n 🙇🏽 People process: Build a common ground through open communication, minimize emotions to enhance decision-making, and trust the teams while enabling their growth. Lead from both the front and back, empowering others through effective delegation. By following the processes that make sense, we embrace every journey\u0026rsquo;s first step with a beginner\u0026rsquo;s mindset, seeking new opportunities and unexplored territories. These insights cultivate a positive team culture and effective leadership, paving the way for success. 🚸 Organization navigation: Leave the baggage behind, embrace a fresh start. Being hands-on and authentic, we navigate the journey with flexibility, storytelling, and executive presence. Master the smooth transitions, and maintain a light and responsible approach towards the technology we produce and consume. These insights empower us to navigate the organizational landscape effectively and create a positive impact on the teams and projects we work on. 🎭 Personal branding: Be our own brand and consistently maintain it. Communicate our aspirations loudly and clearly, showcase the confidence that serves as the best make-up. By embracing the \u0026ldquo;nothing special\u0026rdquo; mindset, we recognize the power of being a \u0026ldquo;शून्य\u0026rdquo; (Zero) and continuously striving for more. Encouraging ourselves to never settle, always seeking to go beyond boundaries. These guiding thoughts empower us to build a strong and authentic personal brand, making a lasting impact in our professional and personal endeavors. 🤖 Engineering excellence: Embrace the importance of tests as our first line of defense, rather than a last resort. Understand that tools are a medium, not the ultimate goal, and true innovations start from within our own organization. Value the significance of validating hypotheses and focus on creating products that truly matter. By measuring what matters most, we ensure our efforts align with our goals. Creating an environment conducive to deep work empowers us to deliver our best work efficiently. These guiding thoughts drive us towards engineering excellence and continuous improvement, making a significant impact in our technological pursuits. 📈 Scale: When it comes to scale, the remarkable story of India\u0026rsquo;s scale in technology adoption comes at the top. Break the self-made boundaries that limit our growth and take bold steps to enter the challenging domains. Create effective pitches ensuring attention, benefits, courage, and defense. Embrace fail-fast and learn-fast to iterate and improve rapidly. These thoughts collectively fuel our journey towards greater scale and impact in the world of technology and beyond. These thoughtworkings are invaluable lessons that continue to inspire and empower me, as well as others on their own professional journeys. Read more in the book below.\n📚 \u0026ldquo;My thoughtworkings\u0026rdquo; - Free and Open for all! 🌐 Excited to share my ebook filled with empowering thoughts from thoughtworkers. 
It\u0026rsquo;s available for everyone to access and enjoy! Dive into the world of insights and inspiration. Happy reading! 🎉\nDownload PDF.\n This collection reflects my personal observations and is presented in no particular order. It continues from my impactful change stories and my reflections on thoughtful years after accepting the change. These five years have changed me a lot. The list of my learnings is huge, but for this 30th anniversary of Thoughtworks, I limited the number to \u0026ldquo;30\u0026rdquo; thoughtworkers from various backgrounds and experience levels - operations, business, delivery, technology, product, management and more. I hope you enjoy reading these thoughts as much as I enjoyed putting them together.\nHere is a physical copy of the book, placed in the Gurugram office library.\n Refer here to know more about the books I have authored, foreworded or reviewed.\n","id":61,"tag":"books ; learnings ; about ; general ; thoughtworks ; life ; change ; reflection ; takeaway ; motivational ; thoughtworkings ; journey","title":"My thoughtworkings from five thoughtful years","type":"post","url":"/post/five-thoughtful-years/"},{"content":"Lean2Lead is a network of women at work with the will to lead and succeed in their respective areas of work. The group brings together 400+ women from different industries to exchange ideas, connect and coach, build networks with industry leaders, and inspire each other to build strategies for success in leadership and executive roles.\nI am speaking at their tech session series on 15 July 2023, 3-4 PM. It is a member-only event.\n The Twisted Realities There is already a wealth of examples of XR (AR, VR, Metaverse) being used in different industries and use cases.\n Introduction to XR (AR/VR/MR/Metaverse/Spatial Computing) Use cases of XR Concerns and Challenges The way ahead Event Wrap The event titled \u0026ldquo;Twisted Reality\u0026rdquo; curated by the organizers truly captured the essence of XR (Extended Reality). The intriguing title itself hinted at the fusion of technology and the human experience. We stepped into the fascinating world of Generative AI, starting with the creation of the title image using MidJourney. This saved us from committing the crime of not talking about AI :)\nSetting the stage To set the stage, I playfully introduced myself as the real Kuldeep Singh, only to reveal that the version of me on the Zoom screen was an artificial replica. This realization raises an intriguing point: what we see and hear during live conferences is nothing more than a digital representation, and we have become so immersed in this technology that we hardly notice the distinction between reality and its extension.\nReality has NO Reality Then we tried to understand reality through what we see, hear, sense or feel, along with the physics and science of things in the environment. We concluded that reality has no reality, backed by these two gentlemen -\n \u0026quot;Reality is merely an illusion, albeit a very persistent one\u0026quot; - Albert Einstein\n “Everything we call real is made of things that cannot be regarded as real” - Niels Bohr\n eXtending the Reality The event shed light on the fact that our perception of reality has already been altered and augmented by technology. 
It challenged us to reflect on the deep integration of digital experiences in our lives and the implications it holds for our future.\nFind the slide deck here.\nDownload PDF.\n Thanks Everyone References Title picture credit - Midjourney prompt\n /imagine generate a detailed picture of twisted reality, where show home upside down and tilted, and people walking on the walls, also show unreal things like people with animal faces or animal with people face \u0026ndash;ar 16:9\n ","id":62,"tag":"xr ; lead2lead ; talk ; event ; speaker ; ar ; vr ; ai ; mr ; metaverse ; introduction ; overview ; technology","title":"Speaker - Lean2Lead tech track - Twisted realities","type":"event","url":"/event/lean2lead_xr_2023/"},{"content":"Extended Reality (XR) provides experiences where the boundaries of real and fake blur, and users believe in the new reality presented to them. In recent years, technology has evolved to extend reality through XR (AR/VR) faster than ever, and much more evolution is still to happen. I was invited to join the Progress Review Committee as a judge for Meta and MeitY\u0026rsquo;s Startup Hub program. It was great talking to the top XR startups of India. I have spoken for this program earlier as well.\nMeta-MeitY Startup Program India is home to one of the most vibrant startup ecosystems, and the Ministry of Electronics \u0026amp; Information Technology (MeitY), Government of India is leading and facilitating a gamut of innovation and IPR-related activities across the country towards the expansion of this ecosystem. The XR Startup Program is a collaboration between Meta and MeitY Startup Hub (MSH). The program aims to accelerate India\u0026rsquo;s contribution towards building the foundations of the metaverse and nurturing the development of Extended Reality (XR) technologies in India. The Foundation for Innovation and Technology Transfer (FITT) at IIT Delhi is a techno-commercial organization from academia, counted amongst the most successful such organizations. FITT is an implementation partner for Meta and MeitY\u0026rsquo;s XR Startup Program.\nThe metaverse is being touted as the next internet; read my earlier articles to understand it more. At this stage we mainly understand it as a tech evolution with XR (Extended Reality) at its center. In some more articles I also touched upon the use cases and challenges, and talked about the need to build partnerships among industry, academia, government, public/private institutions, and people to solve the challenges of the future. This program and the event were a perfect example of that partnership.\nProgress Review The shortlisted startups for this program embarked on a transformative journey, including a rigorous 5-day bootcamp and various challenges. Among them, a select few have even been shortlisted for grants to further their implementation.\nIn light of these achievements, a progress review session was organized to evaluate the direction and goals of these top startups.\nIt is truly inspiring to witness the emergence of XR products in fields such as product design, development, healthcare, education, and beyond. 
The impact of these advancements is far-reaching and holds great potential for the future.\n","id":63,"tag":"xr ; ar ; vr ; mr ; thoughtworks ; event ; judge ; sustainability ; metaverse ; iit ; university ; startup ; learnings ; meity","title":"Progress Review - Meta-MeitY XR Startup Hub, IIT Delhi","type":"event","url":"/event/msh-progress-review-1/"},{"content":"As technology continues to advance, electronics are getting smaller and wires are disappearing. Even electric poles, once a familiar sight on road trips, are being phased out. With the internet now ubiquitous, physical storage devices are disappearing too, and we rely more on cloud-based solutions that seem to happen over the air.\n ‘It\u0026rsquo;s no wonder we look up when we think of clouds ☁️, but we should actually look down to the earth, in the mines!\u0026rsquo; - Nagarjun Kandukuru, Global Head of Emerging Services at Thoughtworks.\n Recently, the Green Software Foundation (GSF) and Thoughtworks hosted a meetup at the Gurugram Engineering Centre, and I am sharing my takeaways from this meetup, where the likes of Nagarjun Kandukuru (we call him Nāg) opened our eyes. Last year we focused on measuring the impacts, and I talked about Sustainability and XR; this year we shared various contributions in this direction, along with a few others from industry and academia. 🦹 It\u0026rsquo;s a crime not to talk about ChatGPT! Nāg warned us that if we start a tech session without talking about ChatGPT or Generative AI, then it\u0026rsquo;s a crime. I do feel the same in today\u0026rsquo;s time. 🙇 Sorry, I started this article with the cloud, so let me correct that here: while writing this article I am using ChatGPT for brevity, and I will burn 2-3 liters of water during this; read here why. These days, kids are talking to Alexa, and we are socially(-media) connected to the world all the time, our virtual identities prevailing over our real selves. As a technologist, I used to say cloud and AI are behind all this, along with some people building it, until I learned of Anatomy of AI by researchers like Kate Crawford. It describes Amazon Echo as an anatomical map of human labor, data and planetary resources.\nIn the above image, we don\u0026rsquo;t see the reality of this without zooming in multiple times. The state of our awareness of sustainability is no different from this. We need to ZOOM-IN more and more to get it.\nNāg further said that we may be looking up in the air when we say we have moved all our computing and storage needs over to the cloud. If I can add my personal story here: I once paid a huge sum to recover a corrupted disk carrying my personal data, and now I have moved all my data over to the cloud for a nominal fee. However, this doesn\u0026rsquo;t mean that there is actually no physical thing behind the cloud. The cloud is very much physical, and almost no electronics are made without Coltan, a mineral that is predominantly mined under terrible conditions in Congo.\nIn fact, 80% of Coltan comes from mines in Congo, with similar stories in other parts of the world for the Lithium that powers batteries. That\u0026rsquo;s why Nāg said, \u0026ldquo;We should look down to the earth, in the mines, when we think of cloud!\u0026rdquo;\nThe point is that we need to acknowledge the physical realities that underpin our reliance on technology and work towards a more sustainable approach. It\u0026rsquo;s important to remember that there is something physical burning behind cloud and AI, and even more so with language models like ChatGPT and other similar technologies. 
The picture below tells it all.\nIn fact, scientists have already issued a final warning on the climate crisis, as reported in The Guardian. We need to act now, before it\u0026rsquo;s too late. Nāg emphasized that the current efforts from the industry to address this issue are not primarily driven by a sense of doing the right thing, but rather motivated by the 3Rs: Risk, Reputation, and Rewards.\nInformation Technology (IT) has a significant role to play in reducing its impact on the environment. One approach is to use IT to improve ecosystems so that they consume fewer resources. Another approach is to optimize IT itself, and both need to be followed. Currently, IT consumes almost 6% of the world\u0026rsquo;s electricity, and this figure is growing rapidly. Look at the Optimal Airport, a client story of Thoughtworks, and read more here on sustainable solutions from Thoughtworks.\n⚖️ Measure, Measure and Measure As the saying goes, \u0026ldquo;We can improve what we can measure.\u0026rdquo;\n Last year, Razin talked about Software Carbon Intensity and Carbon Aware SDKs. We have taken a leap from there, with Mehak and Jayant discussing our work at Terrascope - a smart carbon measurement SaaS platform developed for a Singapore-based climate change venture.\nThoughtworks believes that the future is green and that the rise of responsible tech is the way to go. This project is a step towards that.\n💚 Sustainable Product and Customer Experience “What you see is what you act”\n Industry is now heavily focused on building customer experience products that encourage people to adopt sustainable habits. Aditya and Anuja, our CX experts, explained it well by employing a behavioral science approach. They highlighted how different types of green nudges can trigger behaviors that motivate us to make better and more informed decisions regarding the environmental impact of our actions. Read The Little Book of Green Nudges. To me, it\u0026rsquo;s like building sustainable atomic habits as described in my earlier article; the experience we get from the environment plays a big role.\nFor instance, Google search displays flights with their corresponding emissions ratings, which can help people make informed decisions about their travel plans.\nGreat takeaway! It\u0026rsquo;s important to be mindful of the environmental impact of our choices and habits, and the use of nudges can be a powerful tool in motivating us to make more sustainable decisions. By incorporating sustainability considerations into the design of products and services, and by providing visibility into the environmental impact of our choices, we can make a positive impact on the planet. Building sustainable habits is not just good for the environment; it also contributes to our own well-being and the well-being of future generations.\nIn addition, Kunal Malik talked about building sustainable products in his session, emphasizing the importance of considering sustainability from the very beginning of product development. Kunal shared insights on carbon credits and how they can be monetized. So, it\u0026rsquo;s time to define your product goals, design, and metrics around sustainability.\n🤝Partnerships for Sustainable Solutions Partnerships are an effective way to find sustainable solutions that benefit all stakeholders involved. By working together, governments, academia, industry, and individuals can collectively develop solutions that are economically, socially, and environmentally sustainable. 
These partnerships can involve sharing knowledge, resources, and best practices, and can lead to more innovative and effective solutions than working in isolation.\nIn the same spirit, hearing from academician Dr. Anubha Jain, Director, School of Computer Science \u0026amp; IT, IIS University, in her talk on Green IoT was a great insight into the future of technology. Her presentation emphasized that sustainability should be a priority across all aspects of technology, from software and hardware to networks, physical devices, and virtual environments. As an academic partner, Dr. Jain\u0026rsquo;s insights and research can help inform industry\u0026rsquo;s efforts to develop sustainable solutions. - LinkedIn\nDuring the summit, Diljeet Saluja, our delivery expert, emphasized the importance of collaboration with government and authorities for the adoption and co-development of solutions like Terrascope. We also discussed how ESG scores, which are increasingly being discussed at length, need more work and implementation in India. As seen in recent news such as the RBI\u0026rsquo;s ESG push catching Indian banks off guard, there is a growing need for companies to incorporate ESG practices into their operations.\n☘️ It is just a start “The journey towards sustainability is a long one, but it\u0026rsquo;s important to take that first step. Let\u0026rsquo;s challenge ourselves to make conscious decisions, support sustainable products and practices, and create a better future for ourselves and the planet.”\n 🙋 With this, I come to the end of the article. It\u0026rsquo;s clear that learning never ends, and after attending the Sustainability meetup, I will be more mindful of the decisions I make regarding technology, products, and everything else in my life. Sustainability will be a key parameter going forward, and I\u0026rsquo;m excited to play my part in building a better future. I want to extend my gratitude to the organizers, Akshit Batra and team, for making the meetup happen. Their efforts and dedication have enabled a platform to share knowledge and insights and to inspire a community of people who care about building a sustainable future. Keep up the great work!\n","id":64,"tag":"xr ; ai ; chatgpt ; generative ai ; iot ; thoughtworks ; event ; green software ; meetup ; takeaways ; sustainability ; metaverse ; change ; habit","title":"Greening Up: Key takeaways from Green Software Foundation Meetup","type":"event","url":"/event/green-software-foundation-meetup-2023/"},{"content":"In recent years, XR technology has revolutionized the way we decorate our physical environments. By utilizing AI and computer vision, XR can scan and track real-world surroundings, recognizing and remembering images and objects to enable the addition of virtual content. This technology can store and recall virtual decorations, making them visible every time someone enters the environment. While image matching is not a new concept, and XR platforms like Vuforia have already implemented various mobile XR examples, we experimented with smart glasses from Lenovo, allowing users to customize their surroundings and view the world in a unique way. 
In this article, we will explore the potential of XR technology for virtual decoration, not only in personal settings but also in various industries, including conferences, events, shopping, and trade fairs.\n🎍Decorate your world XR enthusiasts (Sourav Das, Sai Charan Abbireddy) have open-sourced an app, DecoAR, built while experimenting with decoration using the XR image-tracking capability. You just need to take a few pictures of the environment you want to decorate, and apply the virtual decoration in the form of digital pictures, videos, 3D models or more. Enjoy it in XR!\nHere are the steps a developer can follow to decorate the world. We welcome the developer community to extend it further and build concepts that allow end users to change the content dynamically. Currently it supports the ThinkReality A3, and needs the Spaces SDK for Unity to run the application.\nProject Setup Follow the steps below to set up the project.\n Check out the project from the git repo: git clone git@github.com:xrpractice/DecoAR.git Download the “Spaces SDK for Unity” from https://spaces.qualcomm.com/download-sdk/ and unzip it. Navigate to the unzipped files and copy SnapdragonSpaces_Package_0_11_1.tgz from the “Unity package” folder to the “Packages” folder in the project. Open the project in Unity 3D (Unity 2021.3.9f1 onward, with Android build support). Go to Assets\u0026gt; Scenes and open the DecoAR scene. DecoAR Scene Let’s understand the DecoAR scene. It contains ARSessionOrigin and ARSession components to provide AR capability. They are linked to AR Foundation by attaching ARInputManager and ARTrackedImageManager to ARSession and ARSessionOrigin respectively.\nWe also have an ImageTrackingController, which helps keep track of the currently tracked image.\nRefer to the detailed documentation from Spaces here.\nMap the Area to Decorate Identify the area and objects you want to decorate - for example, the butterfly picture below, placed near a meeting room in my office, which I want to decorate virtually.\nHere are the steps to track this picture on the wall.\n Take out your phone, click a picture, and put it in the Assets/Resources folder. Change its properties: Alpha Is Transparency: Checked, Wrap Mode: Clamp, and Read/Write: Checked. Picture Credit : https://www.itl.cat/pngfile/big/185-1854278_ultra-hd-butterfly-wallpaper-hd.jpg\n Next, we need to add this image for tracking. Go to Assets \u0026gt; Image Libraries \u0026gt; ReferenceImageLibrary and click “Add Image” at the bottom. Drag and drop the image from the Resources folder into the newly added row.\n Then measure the picture’s length and height, specify the size in meters in the library, and keep “Keep texture at Runtime” checked. eXtending the Decoration This is where we define what will happen when the images we configured above are tracked.\n Locate the prefab called PrefabInstatiater under the Assets/Resources/Prefabs folder. In the Tracked Image Action Manager, we have: Range Between Multi Instances — specifies the range in meters within which the same action is instantiated on an already tracked image. Images With Action — specifies the actions on tracked images. Expand Images With Action; it has three fields: ImageName — the tracked image on which the current action needs to be performed. Action — the action to perform; it supports Image, Video, Audio and 3D Model. SourcePath — the source path relative to the “Resources” folder, without any extension. The action and the source should be of the same type. Under the hood, this boils down to reacting to AR Foundation’s tracked-image events, as sketched below. 
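To make that concrete, here is a minimal C# sketch of how such an action manager can react to AR Foundation's tracked-image events. The class and field names are illustrative assumptions for this article, not the actual DecoAR source.

using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Illustrative sketch only - not the actual DecoAR implementation.
public class TrackedImageActionSketch : MonoBehaviour
{
    [SerializeField] ARTrackedImageManager trackedImageManager;
    [SerializeField] GameObject decorationPrefab; // content to render on the image

    void OnEnable()  { trackedImageManager.trackedImagesChanged += OnTrackedImagesChanged; }
    void OnDisable() { trackedImageManager.trackedImagesChanged -= OnTrackedImagesChanged; }

    void OnTrackedImagesChanged(ARTrackedImagesChangedEventArgs args)
    {
        foreach (var trackedImage in args.added)
        {
            // The reference image name (e.g. "Butterfly") identifies which
            // configured action and source to apply on this image.
            Debug.Log("Tracked image: " + trackedImage.referenceImage.name);
            Instantiate(decorationPrefab, trackedImage.transform);
        }
        foreach (var trackedImage in args.updated)
        {
            // Show the decoration only while the image is actively tracked.
            trackedImage.gameObject.SetActive(
                trackedImage.trackingState == TrackingState.Tracking);
        }
    }
}

The actual Tracked Image Action Manager generalizes this idea: instead of a single prefab, it looks up the Action and SourcePath configured for the reference image's name and loads that media from the Resources folder.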
For example, on tracking the Butterfly image, we render the same butterfly image on top of it. Similarly, we can track and configure other images. That’s it - the configuration is all done.\n🥽 Experience in XR Now build the app and install it on the ThinkReality A3. Run the DecoAR app, and it will start tracking the images as soon as it opens.\nHere is a quick demo.\n ☑️ Conclusion Tracking objects and images in the environment is now a prevalent feature in upcoming XR devices. ThinkReality XR devices also offer this capability with their Spaces SDK. There are numerous use cases that can be built on these features, and we have showcased one of them. However, these features require high computing power and device resources. Imagine the possibilities for evolution in these cases with the support of 5G/6G internet and edge computing. This has the potential to make the real world more engaging and interconnected, ultimately realizing the Real World Metaverse vision of Niantic and the likes of it.\nIt was originally published at the XR Practices Medium Publication\n","id":65,"tag":"xr ; ar ; vr ; mr ; metaverse ; lenovo ; ai ; image-processing ; technology ; thoughtworks ; qualcomm ; spaces ; decoration","title":"Exploring the Potential of XR Technology for Virtual Decoration","type":"post","url":"/post/xr-virtual-decoration/"},{"content":"The TWinnoFest 3.0 The third edition of the innovation fest, named TWinnoFest, a month-long hackathon, was recently held by Innovation Labs at Thoughtworks. This year\u0026rsquo;s event was conducted remotely, with participants spread across different cities. I was fortunate enough to be invited to judge the finale of the event.\nThe sheer variety and quality of the ideas presented were truly astounding. It was incredibly difficult to compare them, as each one was unique and innovative in its own way. What struck me most was how these ideas reflected the Thoughtworks culture - using technology to solve real-world challenges faced by society.\nSome of the ideas presented included tools for improving communication for people with disabilities, AI-powered cost optimization for cloud infrastructure, asynchronous working tools, a smart parking solution, and navigational problem-solving tools, among others. Overall, it was an inspiring event, showcasing the best of what the tech industry has to offer.\n","id":66,"tag":"xr ; ar ; vr ; mr ; thoughtworks ; chatbot ; event ; iot ; ai ; generative AI ; judge ; hackathon","title":"Thoughtworks TWinnoFest 3.0","type":"event","url":"/event/thougthworks-twinnofest3/"},{"content":"Recently, I had the opportunity to contribute to an event called \u0026ldquo;GTA Ghaziabad - Let’s Fork Together\u0026rdquo;. The event was hosted by a community of working professionals and undergraduate engineers from nearby colleges in NCR, and took place at my home office, Thoughtworks Gurgaon. Participating in this event gave me another energy boost, and in this article, I would like to share my experience and give you a glimpse of what the event was all about! WHY IT MATTERS: The evolution of technology in the future hinges on the extent to which industries, academia, governing bodies, and individuals can work together to define its direction. By fostering a culture of community, we can collectively share and learn, creating a win-win situation for all contributors. 
This principle aligns with the purpose of Thoughtworks, and it resonates with me and many others.\nShivam Gulati, a Thoughtworker and co-organizer of the event, contacted me to speak and even secured an office space for the event.\n More than 60 people joined this in-person event to spend quality time with each other.\n🗣️ Speaker Sessions I delivered a session on \u0026ldquo;Technology Defined Interactions,\u0026rdquo; exploring how technological advancements are leading to new methods of interacting with machines. The current state of human-technology interaction is not far off from what was envisioned in science fiction. Today, we have access to advanced touch and gesture-based interactions, wearable technology, and sensors. During the session, I emphasized the importance of staying ahead of the curve to remain competitive in the future. Read here.\nDuring my session, I also delved into the topic of sustainability and responsible tech, which is close to my heart. I highlighted the Green Software Foundation (GSF), an organization supported by industry giants like Thoughtworks, Accenture, and Intel, which is creating a trusted ecosystem for green software. To make improvements in this area, we need to start by measuring our impact at every level of the product we build. For example, if we want to reduce carbon emissions, we first need to measure them. With the increasing resource consumption associated with the new GenAI era, it is critical to consider the costs to the environment and society. We are consuming half a liter of water just by asking a few ChatGPT questions.\nMoreover, I discussed the potential for technology advancements and the ongoing pandemic to create more societal isolation, making certain sections of society feel alienated from the rest. Thus, it is essential to keep diversity and inclusion in mind. To learn more about these topics, please visit here.\nThe event featured an impressive lineup of speakers following my session. Harpreet Singh Seera, a senior mobile developer from Frontier Wallet, presented on multi-platform app development using Flutter, while Saransh Chopra discussed the importance of open-source contributions and went into detail on the Google Summer of Code (GSoC) program. Their presentations highlighted the mutual benefit of open-source for both developers and the community. Dibyasom Puhan also gave an informative talk on machine learning on the edge with GCP and showed some fantastic demos. There were a couple more, and the discussions on DevOps and automation that arose during the event were also insightful.\nI asked the audience a few questions, such as how many of them had used the XR feature on their phone and how many had used ChatGPT. To my surprise, 100% of the audience responded positively to these questions. This was a testament to the tech-savviness of the audience and their openness to new technologies.\nWe also had a discussion about the impact of AI on job replacement. We agreed that those who are not using AI in their work may eventually be replaced by those who do, and that this new generation, which is more aware of technology, may replace existing knowledge workers if they do not reskill themselves.\n😋 Networking and Fun We had a number of side conversations on how Thoughtworks values open-source, and talked about Bahmni, Selenium, books and more. 
Not just that - many thoughtworkers believe that the async way of working used in open-source development is becoming a normal way of working: async-first.\n In-person events are an opportunity for networking and fun. I am sure the participants liked the vibrant Gurgaon office and collected memories visiting it. I liked the delicious lunch served by the event organizers, especially the “arora lemon” 😀\nI had interesting conversations with Shubham Shakti and Sarthak Jain. Like this one, they organize various events and hackathons; Sarthak unveiled the next one coming up - The Great India Hackathon - Jabalpur. We discussed various synergies on how Thoughtworks and thoughtworkers may collaborate further.\nKudos to all the organizers - Aishwarya Suresh, Shubham Jain, Animesh Pathak, Nishant Mishra and the team - for making this happen. This office was built during the COVID time, and this was the best use of the podium since it was constructed 👇 the happy faces.\nKeep it up! Learning matures only by sharing it with others.\n","id":67,"tag":"xr ; thoughtworks ; event ; speaker ; green software ; talk ; sustainability ; metaverse ; gta ; university ; students ; startup ; takeaways ; learnings","title":"Community Events are the Energy Boosters!","type":"event","url":"/event/gta_lets_fork_together/"},{"content":" “Change is the only constant; it doesn\u0026rsquo;t matter whether we like it or not, it takes place” - shared and reflected in my change stories.\n Over the years, I have been following my theory of change: Accept-Observe-Absorb. It helped me traverse the change year 2020, and understand the role of “Will” and how to cultivate it. I shared how just one change, “working from my native home”, brought so many other changes in lifestyle and behavior. I advocated taking advantage of such situations, when our brain is ready to accept change easily. It changed me into a new person - the 5 am person who took frequent 5 km river walks and regularly cycled 30-40 km to have morning tea with my cousin, returning before starting work. I believed that motivation was the only thing needed to make a change, and that the change would persist.\nAfter a good 9 months, we moved back to my work city, Gurugram (aka Gurgaon), and here I tried my best to continue a similar routine, but settling back into an old environment rolled me back in time: there was no river to walk by, no cousin waiting for morning tea. I found alternative cycle paths in the nearby mountains and joined the Strava app and community. Despite that, it all lasted only a few months; winter came, and it never actively resumed in summer as planned. Focus shifted to other activities - a new house, father\u0026rsquo;s retirement - and my identity rolled back.\nFig 1. Social reflection\nAnother reflection is on my contributions to thinkuldeep.com, which I set up with the intention of uninterrupted sharing and caring. There is a spike in my contributions in the year 2020, and then a spike in events in the year 2022, mostly due to increased external influences. Contribution is also proportional to the number of books I read. Overall, the performance is not consistent, and it is very much driven by external factors of motivation, situations, and opportunities. 
All these years I wondered:\n How can we explain the loss of motivation that once brought about significant change but eventually faded away over time?\n Why is it that some tasks seem to be accomplished with ease, while others require a significant amount of effort without yielding the desired results?\n I found the answers in the book Atomic Habits by James Clear. Thanks to my colleague Sumeet Moghe for sharing the book with me; one can find his inspiring work at www.asyncagile.org and www.sumeetmoghe.com. The book does refine my change theory: change is very much a momentary thing that can be triggered by various factors, but a change that persists is a side effect of habits.\nI am sharing my key takeaways from the book in the rest of the article. I recommend reading the book. The author presents tons of examples and facts that back what he explains in it.\n🎯 Chase the habits, not goals Habits are our repeatable behaviors that we follow knowingly or unknowingly - e.g. 🛏️ making or not making up our bed just after waking up, 🧹 cleaning or not cleaning up our table each time it gets dirty, or something like what my wife does when we go on a trip: she counts all the bags and the people with her 😀. Collectively, our habits define our identity. If we want to become a person who keeps his or her home clean, then we need to chase our habits. Cleaning the home once, out of some motivation or because guests are coming, would not make us that person; what matters is repeating the habit of cleaning up and avoiding the habit of leaving clutter behind.\n The way money multiplies through compound interest, the frequency of compounding matters more than the rate of interest.\n Habits have the same effect: repeating them every day gives enormous results over time. The author focuses on the 1% rule for continuous improvement. Just 1% improvement each day can make us 37 times better over the year (1.01^365 ≈ 37.8), and just 1% worse a day will take us to nearly zero in the same year (0.99^365 ≈ 0.03). We never succeed or fail in our life by some overnight transformation; rather, our success or failure is the result of repeating our habits. Habits can compound for us or against us depending on what we follow (good habits vs bad habits).\nAn atomic habit is the smallest behavior that can be repeated effortlessly; it becomes so automatic that we don\u0026rsquo;t realize we are doing it - the tiny change that can lead to that 1% improvement. Achieving a goal is just a momentary thing; instead, we need to focus on systems for building atomic habits. This is what I missed: I focused more on the goals (5-10 km walks, 30-40 km cycling) and momentarily achieved them, but continuing them was hard; these goals restricted my happiness and long-term thinking. I never thought of building the foundation that continuously drives us forward. James describes these scenarios very well in the book. 
Reading a book is one thing, and becoming a reader is another.\nFiguring out the answer to a powerful question like\n “I am X; what does a person like X do?” will tell us what habits we need to have in us to become X.\n X can be any identity that we want to be: reader, writer, athlete, painter, doctor and so on.\n🧠 Understand how our brain learns James explains the technique called reinforcement learning, where the brain keeps scanning the environment, tries different responses, and learns from the results as rewards or penalties.\nMy friend Raju Kandaswamy and I described it in detail in the article on “Training Autonomous Vehicle”, where a driver (called an agent in technical terms) was given a set of rules for rewards and penalties, and that driver was supposed to learn to park a car.\nAfter millions of hits and trials (small steps), the agent finally learns to park the car in an empty slot without hitting the other cars. To our surprise, the driver automatically learns to reverse the car and take the optimal paths. It was amazing to see this simulation visually: the agent was really making very small steps, initially making a lot of mistakes, and then gradually learning to avoid penalties and follow the rewards. This is exactly how atomic habits stick and the brain learns to repeat them safely.\n🏋️ Create a good habit The book covers very simple laws for creating good habits. To my surprise, I learned that creating a good habit does not always need motivation; the environment also has a role to play in it. Did you notice the small items stacked near the billing counter queue of the supermarket? People end up picking up these small items even if they were not on their buying list. Similarly, James shared an instance from a hospital where water intake and buying increased against soda, just by making the availability of water more obvious than that of soda - and this was achieved without a single message or motivation about good health to the consumers.\nThe brain needs a cue to trigger a particular behavior, and making the cue obvious and visible in the environment is the first law of creating a good habit. The author describes ways to link the new behavior with an existing obvious behavior, and it is better to map the new behavior to a time and location. Any ambiguity at this stage will lead to missed behavior. The next thing is making the behavior change attractive, such that it is irresistible. Dopamine plays its role in the brain; it causes the craving and desire to do a thing. Pairing an action that you want to do with a tempting action you already do helps here. Imitating the behavior of close family, friends, communities or powerful influencers makes the behavior attractive; e.g. I started imitating cycling habits from my cousins. I tried to make my behavior obvious and visible via social media, and to make it attractive too.\n But that\u0026rsquo;s not enough; naturally, our brain is designed to be lazy - we tend to do tasks that are easy, or that look easy to do.\n For habit formation, repetition of a behavior is more important than the time spent on a habit.\n The idea is to reduce friction for good behavior. The author describes the 2-minute rule, which I think can break any friction in habit formation. I never needed to maintain a streak of 50 km cycling or 5-10 km walks; what I needed was to just show up daily - even 2 minutes of cycling or walking would have made that behavior automatic and effortless. 
Technology can come to the rescue, automating a behavior and reducing friction; including tech is a one-time action.\nDeepa Malik, the first woman Paralympian of India, has explained how they used technology to measure heartbeat and other vital information while practicing, increasing India’s performance at the Paralympics multi-fold. On the advice of Sumeet, I gifted a Kindle to myself. This is a one-time action to continue the journey of a reader, writer and influencer. Such actions can help reduce friction and can lock in our future behavior.\nWe talked about rewards and penalties in reinforcement learning: behavior that is rewarded will be repeated, and behavior that is punished will be avoided. In a way, the behavior that satisfies becomes a habit. However, here the brain looks for immediate satisfaction, while the rewards of good habits are always delayed. For example, healthy eating may not immediately make you lose weight, but eating pizza gives an immediate reward.\n The human brain prioritizes immediate rewards over delayed rewards.\n Immediate rewards can be achieved by getting a sense of achievement. So track the habit: showing up as per plan is also an achievement and gives immediate satisfaction. Try to never miss showing up, and if you miss, never miss twice. Recover fast from habit breakdowns.\n🚭 Break a bad habit The author explains that breaking a bad habit is just the inverse of creating a good habit. However, when I saw it the first time in the book, I thought it was easier said than done. The author explains it quite well with a number of examples.\nTo break a bad habit, we need to make it invisible in the environment, so it does not become an obvious choice - e.g. get rid of the TV in the bedroom to sleep early, and move mobile charging points out of the bedroom. Make the bad habit unattractive, unimportant, and difficult by increasing the friction between you and the bad habit; use technology to restrict choices to the beneficial ones. Make the bad habit unsatisfying - a punishable offense in your habit contract with yourself. Make it painful.\nPersonally, I feel this is harder than creating new habits. I am a strong believer in strength-based leadership, where we work on strengths, and over time we overcome weaknesses automatically. I have not found many points to work upon here, so at this stage I am not thinking too much about this part, but will focus more on adding good habits.\n🤞 Genes, Luck and Falling in Love with Boredom After reading the laws of creating good habits and breaking bad ones, I thought the book was over; I never expected the author to really talk about “talent”, “genes”, “luck” or “motivation”, as these look like opposites of what was covered in the first half of the book. The author covers these with good facts and figures, and describes them as advanced tactics that can turn good into great.\nWe need to choose the habits that best suit us, not the ones that are most popular. Genes and natural talent, personality, and physical capability do play an important role. When we pick the right habit, progress is easier; when we pick a wrong habit, life is a struggle. Understand what comes naturally to us. Working on our strengths gives us better results.\nMotivation is also important: if we keep doing very simple tasks, then we get easily bored, and if we keep doing hard or complex tasks beyond our capability, then we will keep failing. The author explains this with the Goldilocks Rule. Working at a natural balance is key to satisfaction. 
No habit can be interesting forever; as it becomes automatic, it may become boring as well. We need to fall in love with boredom. Consistency is critical. The author describes a technique of variable rewards to reduce boredom.\n🫵 Review habits and break the holdbacks This part of the book also surprised me, where the author talks about the downside of a habit. But it is an important piece, similar to what I explained in an earlier article: success can hold us back from getting further success. Once a habit becomes automatic, we achieve mastery of that habit, stop paying attention to little errors, and stop improving further. Ideally, one habit should lead to the addition or improvement of the next habit, and so it continues.\nIf we stick to a habit, we tend to lock ourselves into previous patterns of thinking; whereas, as we started this article, “change is the only constant” and the world is changing, so a periodic review of and reflection on our habits is required.\nKeep upgrading your habits, aligned to the identity you want to be! Keep changing.\nNote: This content is not generated by ChatGPT.\n","id":68,"tag":"general ; selfhelp ; covid19 ; motivational ; life ; change ; habit ; takeaways ; bookreview ; leadership","title":"Habit Matters - The change is a side effect","type":"post","url":"/post/habitmatters/"},{"content":"Agility Today is a 2-day annual UnConference to meet the agility needs of industry, gathered over the past few years via meetups, conferences and webinars. This UnConference intends to give you the best of Agile Psyche (Mindset, Culture and Practice) and Geekery (Engineering, Tools and DevOps). This year, Agility Today Fest 2023 features a couple of programs.\nI am speaking at the Agile Engineering Lab (Technical Agility \u0026amp; Engineering Excellence) on the topic of Scaling content generation for XR using AI.\nSummary of my Topic There is already a wealth of examples of XR (AR, VR, Metaverse) being used in different industries and use cases. However and wherever XR is implemented, a key challenge is content creation and management. Nothing, after all, can be extended without new content. XR depends on dynamism — we need to be able to create content that supports and facilitates such dynamism at scale.\nAI may come to the rescue here; we are already seeing generative AI challenging the existing norms, e.g. ChatGPT. A similar disruption is getting ready for the content generation and management space.\nIn this session, we will discuss how AI can help in the content generation space and deliver with an AI-assisted pipeline.\nBook your seat here: https://app.agilevirgin.in/event/agileengineeringlab/gurugram2023\nSave the date: 26th Mar 2023 Event Wrap 🍻 It was a wonderful event\n ","id":69,"tag":"xr ; thoughtworks ; agilitytoday ; talk ; event ; speaker ; agile ; ar ; vr ; ai ; mr ; metaverse ; evolution ; content management ; technology","title":"Speaker - Agility Today Fest 2023 - Generative AI and XR","type":"event","url":"/event/agiletoday_2023_ael/"},{"content":"Lenovo, the global technology leader, hosted the India edition of its flagship event, Tech World, on March 16th, 2023. Industry leaders came together to peep into the future of smarter technology that is empowering a changing world, and to witness innovative products and solutions.\nI got a chance to join the event, meet and listen to the industry leaders, experience Lenovo’s innovative product line, and see breakthroughs shaping up in India. 
I was thrilled to hear about sustainability, responsible tech, and diversity and inclusion - topics close to my heart and to Thoughtworks. Proud to see people trying the ThinkReality XR products that I am associated with.\nI am summarizing my takeaways from the event in this article.\nFuture Trends and India Delivering The event started with keynotes by Shailendra Katyal, MD, Lenovo India, on transforming India through smarter technology. Amar Babu, President, Lenovo Asia Pacific, took the dialog further by covering how smarter technology is empowering a changing world, and the tech trends that are shaping the future.\nHe explained how Lenovo is emerging in the non-PC business as well. Lenovo is focused on sustainable innovation, such as using recyclable materials for making parts and their packaging, and reusing parts as much as possible. India would be a factory for the world, and play a key role in future tech buildup.\nThe India dialogs were well continued by Mr. Amitabh Kant, G20 Sherpa and ex-CEO of NITI Aayog (National Institution for Transforming India) - the Govt of India’s premier policy think tank - and, prior to that, Secretary, Department of Industrial Policy \u0026amp; Promotion. He explained how India is delivering on its promises for the next Techade, and how important people’s data is for bringing innovation to the grassroots. The era of innovation dominated only by Big Tech is fading, and more and more startups and ecosystems are growing.\nIndia has focused on building India Stack, a platform for building solutions, backed by 1.3B identities, with open and trusted access for makers. The Stack mushroomed a number of startups and enterprises to build solutions and compete to provide best-in-class services to the citizens. It enabled 2.2bn vaccination doses to be administered, all paperless, and 500m insurances to go cashless. It enabled initiatives like Digilocker - a digital document store, Diksha - national digital infrastructure for teachers and educators, and Swayam - a self-service portal to provide the best teaching resources to all. India has implemented FASTag at an unparalleled scale, and is going for camera-aided, GPS-based toll collection soon.\nIndia’s 98,000 startups with 110 unicorns are already telling India’s story of innovation. What India is building is not just for the 1.4B people of India, but for the 5B people of the world who will travel the journey from poor to middle class.\n “Mr. Kant focused on going green, and going digitally green, and explained the number of initiatives India is betting upon. He said that without this, the Earth may survive but humans may become extinct”\n Rajeev Chandrasekhar, Minister of State - Information Technology, Government of India, talked about making India digitally empowered. He explained how India’s financial sector is still sustainable and more prudent, when asked about the SVB bank fallout in the US.\nIndia’s way of innovation is already considered open-source-based, resource-constrained, cost-conscious and so on. India is now focusing more on research and analysis. Proud to hear how we enable ourselves and the world.\nAnother panel, of Sumir Bhatia (President - ISG, Lenovo APAC), Santosh Vishwanathan (MD, Intel) and Prateek Pashine (President, Reliance Jio), continued the discussion on innovation-defined growth in the 5G era.\nNew way of working The pandemic induced a new normal in the way of working. For the technology industry, remote or distributed working was not new, but it became widely accepted as the new normal, enabled further with tech. 
At times, it was considered the only way of working in the future, and it was widely said that there would be no return to the previous normal. But in this so-called post-pandemic era, is the normal shifting again? Back to the office? Are we taking a U-turn?\nA panel discussion on the Changing Paradigm with New IT in the Modern Workplace covered these questions well, with Diwa Dayal (MD, SentinelOne) and Harnath Babu (CIO, KPMG), moderated by Shalil Gupta (Mint/VC Circle). The topic is very interesting to me, and I have been exploring it with the Async-Agile/Async-First pioneer Sumeet Moghe; one can follow his work at asyncagile.org. There has been a drastic difference in people who joined during the pandemic: it is said that they don’t speak the same language and don’t understand the culture of the organization, and companies are pushing to go back to the office. It reminded me of Sumeet’s article “No, that’s not culture”.\n The biggest things to consider when we compare pre- and post-pandemic times are the changes in the 3Ws: Work, Workplace and Workforce. If these 3Ws have changed in multiple ways, there is no point going back to the previous normal. There will be a mix of things continuing, and location- and time-independent ways of working will continue to evolve.\n There will, of course, be uncertainty and challenges in this way of working - like how much physical-to-digital or digital-to-physical transformation is good, moonlighting, security, privacy and more.\nMr. Dayal talked about good software being built, but not secure software, and how security is like a cat-and-mouse game. With growing tech adoption, we are also exposed to more vulnerabilities; people now have easily available tools that can hack systems in minutes. AI tools can help exploit security, and at the same time AI can find solutions much before attacks can happen.\nMetaverse is not dying, rather it is evolving further The metaverse is the hot topic these days, and in the recent macroeconomic situation the first thing being discussed is “Is the metaverse dying, or dead, after so much hype?”. The panel discussion on “Reinventing Customer Experiences through Metaverse” covered these questions and concerns quite well.\nMr. Rajesh Uppal (Executive Director \u0026amp; CIO, Maruti Suzuki India Ltd) talked about Maruti Suzuki seeing a tech future in the metaverse for marketing and customer experience enhancement, training and onboarding, shaping employee experiences, and shop-floor visualization. He talked about Nexaverse and Arenaverse from Maruti Suzuki. It was well demonstrated how a customer can join the metaverse and take a test drive with dealers explaining the car features; they can experience the rain sensor, airbag features, other sensors, fog lights, and other safety features which are normally not easy to experience.\n Mr. Uppal explained that at this stage the motivation for the metaverse buildup is the wow factor! However, he feels the benefits and ROI it brings to the table will be hard to ignore in the future.\n Similarly, Rucha Nanavati (CIO, Mahindra Group) talked about techMverse, and showcased how they launched the XUV400 car virtually. Rajen Vagadia (VP and President, Qualcomm India \u0026amp; SAARC) talked about the need for an open ecosystem and interoperability for metaverse adoption, and their plan for OpenXR-based Snapdragon Spaces. He shared that XR devices are shaping up faster, and judging the state of the metaverse at this stage is too early.\nIt was a pleasure meeting Mr. Vishal Shah (General Manager, XR (AR/VR) and Metaverse, Lenovo) in person. 
He explained the metaverse as the “Internet in 3D”, and talked about Lenovo’s plan for catering to the enterprise metaverse by managing the complexity of 3D computing and 3D content through the ThinkReality XR products. He also shared that Lenovo is launching the ThinkReality VRX device in India soon. I got a chance to discuss various synergies between us, and had an interesting conversation on ONDC and how it may be the next disruption in India’s digital commerce after UPI. Vishal shared how Lenovo is building a partner ecosystem for XR, and that, looking at India’s growing startup ecosystem, the next metaverse unicorn could well be from India.\nShow and Tell, Meet and Greet The showcase area was full of Lenovo’s upcoming products, from PCs, gaming laptops, Yoga and foldable devices to XR devices like the ThinkReality A3 and ThinkReality VRX. It was nice meeting Martand Srivastava and Jose Zacharia, who showcased a number of enterprise XR use cases at the XR booth.\nThere were a few partner solutions as well - for example, EdTech solutions from PhiBonacci Solutions presented by Ravindra Dubey, who showed an interesting demo that redefines the way we look at books. He shared how they have built assets over the years that can make any educational material valuable and interesting.\nI also met a few more friends there who have traveled to a good depth in XR and metaverse technologies. Hemanth Satyanarayana, Founder and CEO at Imaginate | Collaborate in XR, shared how they recently turned the Vizag experience into a metaverse. I must say the metaverse experience is better than my physical visit to the place; I had never imagined seeing the city in this way. An interesting thing Hemanth shared is that they turned this whole experience into reality in just a few days, after scanning from drones and bringing it into their Atom product.\n Sachidanand Swami, Founder, Invoxel Technologies, has enhanced experiences at various museums in India. He shared how he recently showcased their XR solution to the Hon’ble President of India, Smt. Droupadi Murmu, and other Government of India officials at the Digital Pavilion in the recently concluded \u0026ldquo;Rashtriya Sanskriti Mahotsav\u0026rdquo;.\nWe met a number of other industry leaders and enthusiasts over different social events, and over delicious lunch and dinner.\nDiversity and Inclusion in Tech Last but not least, the topic of diversity and inclusion has become an important part of me after joining Thoughtworks. I witnessed an eye-opening panel discussion, “Fostering Inclusiveness in Technology”, by none other than Deepa Malik, the first woman Paralympian of India. She explained how tech has brought equity into her life as a sportsperson and in general: thanks to technology, she now feels safer and more comfortable at home; she doesn’t need to move to the door every time the bell rings - she just sees the person on her phone and opens the door from the phone, and controls the fan from a remote. She used technology to measure heartbeat and other vital information while practicing, and shared how they used tech with athletes and increased India’s performance at the Paralympics. I had never imagined that a nice-to-have tech for me could be that useful for someone.\nI interacted with Deepa and shared how we maintain D\u0026amp;I at Thoughtworks, and talked about Wapsi, WomenInTech, NetworkOfWomen and other initiatives to improve the representation of women and minority groups. Thoughtworks has been recognized in multiple forms for its efforts. 
Deepa sincerely appreciated the efforts we are putting into it.\nAnother panelist, Prateek Madhav, Co-Founder \u0026amp; CEO, AssisTech Foundation (ATF), explained that around 1bn people worldwide live with some kind of disability. The number in India is not well known - maybe 70m people. He also focused on disabilities that are non-visible and hard to recognize, e.g. learning disabilities; the ability to work has been reducing since the pandemic, but this is not being recognized. Seen another way, 5-7% of GDP is left on the table if we don’t include the specially abled in the solutions we are building. This number could easily be the 3rd largest economy in itself; it already tells the story of why inclusion is so important for businesses.\n “In her closing remarks, Deepa asked makers and solution designers to think of D\u0026amp;I-first solutions, and make them accessible. Digital means Digit-All: build Digit-All solutions, and work on Digit-All transformation.”\n ","id":70,"tag":"xr ; ar ; vr ; mr ; thoughtworks ; event ; conference ; sustainability ; metaverse ; lenovo ; ai ; startup ; takeaways ; learnings ; 5G ; async-first ; async-agile ; work from home","title":"Takeaways from Lenovo Techworld India Edition","type":"event","url":"/event/lenovo-techworld-india-2023/"},{"content":"MOZOHACK is a 24-hour hackathon organized by SRMKZILLA. SRMKZILLA is a Mozilla campus club of the SRM Institute of Science and Technology, Chennai (India). Inspired by the Mozilla Learning Network, it provides students an open platform to analyze and enhance their technical and non-technical skills, such as working on open-source projects, developing websites, and running technical fests and workshops. The club is conducting the seventh edition of its annual flagship event – MOZOFEST'23 – where they celebrate the student community through technical and non-technical events. MOZOHACK 4.0 is part of this event. The hackathon will attract young developers from across the nation to build their innovative ideas as a team.\nI am invited to join the event as a judge and share my experience with students.\n Excited to meet the young minds, the organizers and the esteemed jury members (Srishti Gupta, )\nRegistration and Dates The event is from 24th-25th Feb 2023; please register as mentioned on the MOZOHACK link.\nEvent Wrap Stay tuned!\n","id":71,"tag":"xr ; ai ; judge ; hackathon ; event ; ar ; innovation ; ml ; machine learning ; cv ; sustainability ; university ; students ; mozilla ; srm","title":"Judge at MOZOHACK 4.0 by the Mozilla Students Club","type":"event","url":"/event/judge-mozohack4-srmkzilla-2023/"},{"content":"Extended Reality provides experiences where the boundaries of real and fake blur, and users believe in the new reality presented to them. In recent years, technology has evolved to extend reality through XR (AR/VR) faster than ever, and much more evolution is still to happen. I was invited to Meta and MeitY\u0026rsquo;s Startup Hub program as an industry expert. Here I reflect on the day I spent with a few of the top startups of India, who want to solve some of the key challenges using XR and related technology.\nMeta-MeitY Startup Program India is home to one of the most vibrant startup ecosystems, and the Ministry of Electronics \u0026amp; Information Technology (MeitY), Government of India is leading and facilitating a gamut of innovation and IPR-related activities across the country towards the expansion of this ecosystem. The XR Startup Program is a collaboration between Meta and MeitY Startup Hub (MSH). 
The program aims to accelerate India’s contribution towards building the foundations of the metaverse and nurturing the development of Extended Reality (XR) technologies in India. The Foundation for Innovation and Technology Transfer (FITT) at IIT Delhi, a techno-commercial organization from academia, is counted amongst the most successful such organizations. FITT is an implementation partner for the Meta-MeitY XR Startup Program.\nThe metaverse is being touted as the next internet; read my earlier articles to understand it more. At this stage we mainly understand it as a tech evolution with XR (Extended Reality) at its center. In other articles I have also touched upon the use cases and challenges, and talked about the need to build partnerships between industry, academia, government, public/private institutions, and people to solve the challenges of the future. This program and the event were a perfect example of that partnership.\nThe Event | XR Bootcamp The shortlisted startups for this program come to a 5-day bootcamp, where they further enhance their ideas, build MVPs, find the way forward, and meet industry experts, partners, investors and more.\nThe event was kicked off by Dr. Anil Wani, Managing Director, and Col. Naveen Gopal (Retd.), COO, FITT at IIT Delhi, with keynotes from Radhav from Meta.\nI took the first industry expert session of the event. I was amazed by the talented participants, especially seeing people from non-tech backgrounds - doctors, physiotherapists, dentists, researchers, professors - talking about the possibilities of deep tech: a very diverse group, capable of bringing out the best, carrying a number of interesting products and ideas on healthcare, manufacturing, game optimization, edutech, training, retail, events, haptics etc., and ready to tap into XR. I can already see some future pioneers emerging from this group.\nInterestingly, this was the first time I was not explaining “what is XR”, as the room was full of XR enthusiasts, so I jumped directly to the following 3 topics:\n Evolution in User Interactions - The way we interact with technology today is not too far from what was predicted by futuristic, sci-fi pop culture. We are now living in a world where sophisticated touch and gesture based interactions are collaborating with state-of-the-art wearables and sensors. We talked about why it is important to stay ahead of the times to be in the race for the future. Read more about it here. I also talked about best practices for user experience design and development for XR. XR Content Management - There are already a wealth of examples of XR being used in different industries. However and wherever XR is implemented, a key challenge is creating content and managing it. Nothing, after all, can be extended without new content. XR depends on dynamism — we need to be able to create content that supports and facilitates such dynamism. Read more about the same here. Sustainability and responsible tech - This is a topic very close to my heart: as a techie, as a citizen of the country and as part of society, how can we use tech responsibly? I discussed the Green Software Foundation (GSF), which is building a trusted ecosystem of people, standards, tooling and best practices for green software. It is backed by organizations like Thoughtworks, Accenture and Intel. We can improve only what we can measure. E.g.: if we want to reduce carbon emissions from what we are building, then we first need to measure them, at every level in the product we build.
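On that note of measurement, below is a minimal sketch of how the carbon emissions of a piece of compute-heavy work could be measured. It assumes the open-source codecarbon package; the package choice and the heavy_computation function are my own illustrative assumptions, not tooling the session prescribed.

```python
# A minimal sketch (assumed tooling: the open-source codecarbon package)
# of measuring the carbon emissions of a piece of compute-heavy work.
from codecarbon import EmissionsTracker

def heavy_computation() -> int:
    # Hypothetical stand-in for any workload whose footprint we want to measure
    return sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker(project_name="xr-build-pipeline")
tracker.start()
try:
    heavy_computation()
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent emitted

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```

The same idea scales up: wrap each stage of a build or training pipeline in a tracker, and the measurements become comparable across releases.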
All future tech will require a multi-fold increase in computation and resource utilization, and everything has an associated cost to the environment and society. Another aspect we discussed was that tech advancements may create more isolation in society, such that a certain section may not use certain tech and may consider it alien to their society. So it is important to keep Diversity and Inclusion in mind. Read more here. We delved into how XR is shaping the future of user experience and revolutionizing the way we interact with digital content. The audience, composed of innovators and entrepreneurs, was highly engaged and participated in thought-provoking discussions, bringing fresh perspectives and ideas to the table.\n Toward the end of my session, Mr Sachidanand Swami joined: a serial entrepreneur and industry expert, IIT Delhi alumnus, and Founder of Invoxel. He shared great insights on working with public sector and government projects in India, on how the participants can take their ideas to market and get visibility in the right places, and on the importance of social capital. During the lunch and networking time, I let a few participants try my Lenovo ThinkReality A3 device; they loved it and wanted to build for it. I also visited Mr Sachidanand Swami’s office and discussed our synergies. I would like to thank Mr. Swami for suggesting my name for this session, hosting me at his office and connecting me with his team; I loved seeing the interesting work his team is doing. I would also like to thank the organizers at FITT, and I loved the FITT gift hamper.\nHappy learning, sharing and caring!\n","id":72,"tag":"xr ; ar ; vr ; mr ; thoughtworks ; event ; speaker ; green software ; talk ; sustainability ; metaverse ; iit ; university ; students ; startup ; takeaways ; best practices ; learnings ; meity","title":"Speaker at Meta-MeitY XR Startup Hub, IIT Delhi","type":"event","url":"/event/meta-meity-xr-startsup-hub-2022/"},{"content":"Recently, Raju Kandaswamy and I talked at Thoughtworks XConf about how AI will play its role in content generation. We are already seeing ChatGPT as the next revolution in the textual content generation space. Let’s understand why the XR space needs a similar content generation revolution.\nWith games like Second Life and the impending mainstreaming of the Metaverse, more and more individuals are embracing a virtual reality that, in some ways, mirrors our own real world and, in other ways, lets us live in our imagination or apply it to the real world. This rise of the Metaverse is simply the next level in a journey that began long ago. For instance, on Facebook, many of us have thousands of virtual friends. While playing games, we regularly trade in virtual currencies. We already have handles and usernames that serve as our virtual identities. As an extension, metaversities like Invact are quite near, where students join virtual classes as their virtual selves. A virtual car might soon get more expensive than a real one.\nIn my previous article, I described the metaverse as a technological evolution of many digital technologies. Some primary ones are IoT, blockchain, 5G/6G, cloud and AI, with eXtended Reality, or XR, central to them all. The more we evolve and adopt XR, the closer we will be to the future state of the metaverse. XR is one of the most exciting and most-hyped emerging technologies. It’s important to note that XR comes in a number of different forms and implementations. This can often create confusion, so it’s worth clarifying.
The key ways in which XR is implemented in real-world products are:\n Augmented reality (AR), which overlays digital content on the live environment. Virtual reality (VR), which delivers content through wearable devices; here, the whole environment is virtual and computer generated. Mixed reality (MR), which is a mix of both worlds. Mobile XR, which delivers content through smartphones or tablets. WebXR, which delivers content through AR/VR-enabled web products. (Today, you can build XR properties with something as simple as JavaScript libraries.) Heads-up display (HUD), where content is projected using specific devices — the most common consumer versions of these are for cars. There are already a wealth of examples of XR being used in different industries: smart mirrors, for example, are gaining popularity in both retail — where they are used to explore potential purchasing options — and leisure — where personal trainers can guide clients through new exercises. However and wherever XR is implemented, a key challenge is content creation. Nothing, after all, can be extended without new content. XR depends on dynamism — we need to be able to create content that supports and facilitates such dynamism.\nContent challenges in XR/Metaverse adoption The journey toward creating 3D content for XR is filled with speed bumps. At the content generation level, there are several silos. These can lead to content being created that isn’t shareable and usable. Even if we do manage to create content for XR applications, questions about management, access control and versioning still remain.\nThere are a number of reasons for this, but the main issue is that most devices today are bulky and heavy, which makes consumption difficult for end-users. Publishing protocols like HTTP, streaming or WebRTC, moreover, just aren’t sufficient.\nThe diagram below demonstrates the extent of the complexity of an XR content development pipeline: Content generation - The basic unit of a 3D model’s construction is a point, defined by three dimensions: x, y and z. Hundreds or thousands of points brought together make a “point cloud”. These are then meshed using triangulation or a mesh generation algorithm. A typical model has millions of points and polygons that aren’t ready for consumption by consumer XR devices. This is why they need to be optimized. Optimization - With XR content editing tools such as Blender, Max or Maya, developers reduce the size of the content, slicing it based on the level of detail a given application needs. Animation - For a virtual world to feel real, it needs to engage our senses. XR has now achieved sight and sound with lighting, audio, color, clothing and material. Haptic devices for touch are also gaining popularity. Publishing - XR content is published by exporting it to the right format that game engines can consume, and reusable asset bundles are made available to XR developers. Developers typically export to FBX/OBJ/GLB/GLTF formats, use game engine tools — such as Unity, Unreal or Blender — to import and bundle assets, and then upload or share them over the cloud or asset stores. Consumption - Cloud streaming and XR rendering enable remote content consumption. Rendering XR content on a head-mounted or mobile device makes heavy use of compute, network and power; however, cloud/edge computing with next-gen networking (5G/6G) enables the remote rendering of XR content on the edge or cloud. This will enable more real-life use cases to run on XR devices.
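To make the generation, optimization and publishing steps above concrete, here is a minimal sketch of turning a raw point cloud into an XR-ready mesh. It assumes the open-source Open3D library and a hypothetical scan.ply input; the article does not prescribe specific tooling for this step, so treat it as one possible implementation.

```python
# A minimal sketch of the "content generation -> optimization -> publishing"
# steps, assuming the open-source Open3D library and a hypothetical scan.ply.
import open3d as o3d

# Content generation: load a raw point cloud produced by a 3D scan
pcd = o3d.io.read_point_cloud("scan.ply")
pcd.estimate_normals()  # surface reconstruction needs per-point normals

# Mesh the point cloud via Poisson surface reconstruction
mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)

# Optimization: decimate so consumer XR devices can render the model
mesh_low = mesh.simplify_quadric_decimation(target_number_of_triangles=50_000)

# Publishing: export to a format that game engine tools can import and bundle
o3d.io.write_triangle_mesh("asset_xr_ready.obj", mesh_low)
print(f"Triangles: {len(mesh.triangles)} -> {len(mesh_low.triangles)}")
```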
The biggest challenge with all these processes today is that they are manual and, therefore, not scalable. That’s why AI can add significant value — new content can be produced algorithmically.\nArtificial intelligence for XR content There are a number of ways AI can support us when it comes to XR content — even beyond creation. It can also help us manage and publish content.\nHere are a few of the ways AI is being used for XR content in practice.\nContent pipeline automation One of our clients had a huge volume of XR content that wasn’t ready for consumption because their content pipeline was highly manual. To address this, we built a pre-processing API for all their 3D assets, which distributes the load across a cluster of nodes running the pipeline operations.\nWe set up the preprocessing pipeline using the Blender CLI on the cloud, performed model optimization and format conversion using Python scripts, and Dockerized the Blender scripts, which were then deployed on Kubernetes clusters. After that, the XR-ready output was cached in a file store. This significantly reduced the manual burden of processing 3D models and accelerated the journey to consumption.\nAI for XR content generation To make XR a reality, we need data at scale. One way of doing this is by replicating the physical environment, creating a digital twin of the real world. Apple’s RoomPlan API, photogrammetry scans, hardware-assisted 3D scans etc. have all been successful in doing this. Elsewhere, facial reconstruction from monocular photos can help us generate avatars.\nGoogle has also been working on creating 3D models from a textual description of the visual. You can write something simple, like “I need a chair with two armrests,” and the algorithm will create a 3D model from your request. The Unity Art Engine also uses AI for texturing, pattern unwrapping and denoising.\nArchiGAN can generate models for architecture without the help of an architect/engineer. Users can feed in their descriptions as sketches or text, and it will automatically generate models on the fly. This is not just for rooms or interiors but also for large buildings, roads, flyovers etc., including configurations for fake cities/malls and adaptive floor plans. Read more about it in Raju K’s articles in XR Practices.\nTraining deep learning on 3D model data To do all of this, machine learning and deep models need to be trained on 3D data. This isn’t completely different from using images to train an image recognition algorithm; it is, however, much more complex. Whereas image recognition works by applying a pixel grid to an image, representing it through three values (R, G and B), for 3D data the datasets are much more sophisticated.\nThe structural definitions of 3D data for deep learning use cases are evolving. 3D point clouds (such as LIDAR data) are commonly available to represent 3D objects. However, to process 3D point clouds for deep learning, they need to be correctly structurally represented. PointNet and VoxelNet can both provide possible structural definitions of 3D models for Machine Learning (ML) consumption. PointNet provides a unified architecture for classification/segmentation directly on 3D point clouds, whereas VoxelNet divides the point clouds into equally spaced 3D voxels (a toy sketch of this voxelization idea follows below).\nA large-scale library of richly annotated 3D shapes is offered by ShapeNet to train ML models.
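As a toy illustration of the VoxelNet-style structuring just described, the sketch below maps a raw point cloud onto an equally spaced occupancy grid that a deep learning model could consume. The grid size and the random input cloud are illustrative assumptions; this is not the actual VoxelNet implementation.

```python
# A toy sketch of VoxelNet-style structuring: divide a point cloud into
# equally spaced 3D voxels for ML consumption. Grid size and the random
# input cloud are illustrative assumptions, not the real VoxelNet code.
import numpy as np

def voxelize(points: np.ndarray, grid_size: int = 32) -> np.ndarray:
    """Map an (N, 3) point cloud onto a dense binary occupancy grid."""
    mins = points.min(axis=0)
    spans = np.maximum(points.max(axis=0) - mins, 1e-9)
    # Normalize each point into [0, 1], then scale to integer voxel indices
    idx = np.minimum(((points - mins) / spans * grid_size).astype(int),
                     grid_size - 1)
    grid = np.zeros((grid_size, grid_size, grid_size), dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0  # mark occupied voxels
    return grid

# Hypothetical input: 10,000 random points standing in for a LIDAR scan
cloud = np.random.rand(10_000, 3)
occupancy = voxelize(cloud)
print(f"{int(occupancy.sum())} of {occupancy.size} voxels occupied")
```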
This library contains roughly 50,000 unique 3D models across 50 different categories for deep learning.\nAI-assisted 3D modeling If we bring AI into the XR content pipeline, the possible steps might include handling various inputs (sketches, photographs etc.) and using advanced processing capabilities (driven by edge/5G/cloud) to generate content. Humans or an AI system could review that content, with further optimizations applied as the process is extended. Once the design is deemed acceptable, it can be published.\nIf XR and the metaverse are to become truly mainstream — and deliver significant business value — artificial intelligence will play a vital role. As we have demonstrated, this is already happening; however, as research and applications progress, we will see new opportunities and possibilities emerge. It will be up to us all in the industry to seize them and build extraordinary experiences. The metaverse needs XR content at scale, and AI-generated media is the way to go; find “AI generated media” in the Thoughtworks Looking Glass | Making the metaverse.\nA variation of this article was originally published at Thoughtworks Insights\n","id":73,"tag":"xr ; ar ; vr ; mr ; metaverse ; evolution ; ai ; content management ; technology ; thoughtworks ; insights ; xr content","title":"AI-Assisted Content Generation Pipeline for XR","type":"post","url":"/post/ai-generated-xr-content/"},{"content":"Every end is a beginning; if there is no end, then there is no beginning, just as the year 2023 can only start after the end of the year 2022. So this is the right time to reflect and plan for a better beginning. “Forward Looking Reflection” was the topic of my guest lecture at DesignersX at their year-end event. We discussed the challenges of software development, and how the industry is trying to solve them. How can we make the right software in the right way?\nDesignersX is a company specializing in software development, located in the same city, Mohali, Punjab, India, where I started my career more than 18 years ago. They invited me to their year-end event as a guest speaker to interact with their developer community. It is great to see the development of the city of Mohali (which was then considered an outer area of the City Beautiful, Chandigarh); now it is well part of Chandigarh, with extended beauty. The event was also beautifully planned, starting the day with a Paath, a spiritual prayer that spread a lot of positive vibes, and then a series of sessions and discussions, team events, and of course delicious Punjabi food. I must say there was no better way to end the year 2022 than this event.\nAs the topic of my guest lecture was on reflection and finding the way forward, I asked the team to conduct a survey to reflect on the year 2022. The survey consisted of questions about their org health, project health, and personal front. The participants provided good inputs for future actions. They shared what is working for them, what things they want to improve, and the challenges and hurdles they will try to solve in the future as a team.\nHere are some key discussion points.\nSuccess, Growth and Satisfaction Our achievements give us a sense of success, and we tend to follow the same path that led us to those achievements. At times, we may get biased toward that way, and start ignoring other, better possibilities.
I have written an article on how “being successful” is better than “becoming successful” - “Change to become more successful”; that article captured my takeaways from the book “What Got You Here Won’t Get You There”. So keep fighting to become more and more successful, but never stop at a success. We are here due to our achievements or failures, but to grow further we need to look forward and make an effort to advance ourselves.\nSimilarly, people have their own criteria and parameters to define growth. Growth is usually represented as upward and always relative to some past state, but in my opinion it is a journey and much more than just comparing two states. When I ask people what growth means to them, their organization and their client, I get answers as follows, and my take is “it’s all the same”: growth is all linked, and no one can get a real sense of growth without the growth of the ecosystem around us. So I just remove the lines and define growth as follows. Collective growth leads to better satisfaction, and it is very important to define the organization’s vision and align everyone toward it. For example, look at the ice-cream company Ben & Jerry’s mission statement, and how it inspired Thoughtworks in its early years and became part of its culture. Software is built by people, so go for sustainable growth by earning sustainability, achieving excellence, and working with social responsibility and purpose. It will lead to more satisfied and willing people, and may result in a better outcome.\nAgile IT Organization Most IT startups and organizations want to be as agile as possible; requirements and needs are never fixed, and controlling scope and keeping it within limits is of utmost importance for efficiently and effectively delivering software. During my discussion with DesignersX’s CXOs and developers, it came out evidently that they are very keen to be a truly agile organization and infuse more agility at various levels. ‘Agile’ means different things to different people, and I feel it is the most overused word in the IT industry. I emphasized improving people’s reading habits, and gaining from masterpieces like Agile IT Organization Design, a book from Thoughtworks, the organization I work in and admire for its ground-level agility and people-centric approach to solving challenges. Follow the industry’s agile pioneers like Martin Fowler, my colleague and a co-author of the Agile Manifesto. Be aware of Extreme Programming (XP) practices; they may solve a number of IT industry and people challenges. Get ready for the need of the time, Async Agile, which challenges some of the existing traditional agile practices that we have built up over years of working synchronously.\nWhat will work for a particular organization depends again on how well we understand the word “agile”. The idea here is to gain knowledge from others’ experiences and evolve from there, defining what works for you by experimenting and measuring the outcome. Try to solve a problem that is yet to be solved.\nWe also talked about Hierarchy vs Structure, Enterprise Agility and some of the takeaways at Thoughtworks that I shared here.\nBeing a Consultant / Consulting Org Most IT people want to solve the problems at hand in the most optimal ways; however, I feel many times they get trapped in the hierarchies or designations/roles they live in. If a solution lies beyond the boundary they are trapped in, they miss the opportunity to solve it, no matter how capable they are of doing so.
Consultants aim to do the right things in the right ways, rather than getting trapped in traditional boundaries.\nHere are some key areas a consultant may focus on:\nBe a trusted advisor Focus on creating influence and persuading others with the right guidance. Be a thought leader, share your opinions, and strengthen your voice with facts and figures.\nIt is important to identify the right problems in a situation, but focus on being on the solution side; the more we practice finding solutions, the better we become at it. The client comes to us not merely because they have some problem in hand, but because they think we will troubleshoot the problem well and provide an optimal solution for it. Think of the problems as constraints; think of solutions with and without the constraints.\nRemember that to be a trusted advisor, you don’t need to be a subject matter expert (SME) in the problem area; rather, you have to be the right advisor, who may suggest the experts that can solve the problem at hand.\nAutonomous Teams A team of consultants who aim to provide solutions for the client has to be autonomous and self-motivated, well aligned to the same goals and vision. They build a chain of effective delegation, and follow delegation as “trust, with verify”: they first trust their team members, and verify frequently to enable them to get things done as anticipated.\nBuilding a consulting mindset is most important to sustain and scale a team of consultants, and an environment of sharing with caring helps a lot. Focus on cultivating, mentoring and coaching. Read here on how coaching can enable self-reliance.\nStrength-based leadership and servant leadership are other key considerations for people-centric teams; read here about strength-based leadership in my reflection from a workshop. Think more about enablement rather than questioning people’s capability too much. Sometimes we ourselves don’t know our own capability; depending on the situation, we may outperform or underperform on a similar set of tasks.\nOwnership A sense of ownership plays a key role in building autonomous teams. Define ownership of outcomes, and let the team figure out the “how to achieve it” part.\nI believe in team ownership: we succeed or fail as a team, not as individuals. Everyone is important and responsible for the outcome as a team, and to achieve that, trust, working together, supporting each other, upskilling, self-motivation etc. come as defaults in a team.\nPeople focus on self sign-up, and encourage statements like\n “I sign up for this task” instead of “I assign you to this task”\n Statements like the following are highly discouraged\n “I have done my work, and person X in the team did not do the rest”\n We work for the outcome, not for individual tasks; always look at the bigger picture. People start measuring the productivity of individuals and questioning their performance as individuals; however, as humans, our productivity depends on the work environment, team, culture, and our own strengths. Every individual has their own strengths and areas of improvement. Ownership may or may not come as a default with individuals; we need to mentor and train ourselves in it.
As a team, people need to own the outcome, and productivity is to be measured as team productivity; the time to deliver an artifact is most affected by the slowest one in the team, so it is very important that the other team members focus on uplifting people and bringing them up to speed.\nMeasure what you want to improve We can improve a thing only when we start measuring it; anything we cannot measure, we cannot improve. So if we want to improve code quality, start measuring it first; if we want to do better on delivery schedules, start measuring them; if we want to improve the cost and effort we put in, then start measuring them against everything we do.\nEveryone wants to improve the efficiency and effectiveness of what they do, but most of the time we don’t give enough attention to the measurement. No matter how great the processes we implement, if we are not measuring the improvements then it is like working in the dark. For example, timesheets - the Friday/Monday chaos: most of us don’t like filling in timesheets, and a push is always required to do so. But we all want to utilize our time better and balance it between work and life. It is important that we keep reminding people about the importance of measurement.\nMeasurement can take the form of quantitative and qualitative metrics, depending on what is being measured. Over-measurement can also be problematic and may consume unnecessary effort, so measure what makes sense and what can be easily tracked. Always keep “Why?” before “How?”. Why do we want to measure? What do we want to achieve? Share the intended outcome upfront with all the stakeholders, and get buy-in. Then think about how: choose tools and evolve as you go.\nFaster Feedback, Faster Action Feedback is really important for a consultant to validate and improve what we are doing and how we are doing it. Collect and provide feedback as early as possible: the faster the feedback, the more time we get to reflect upon it. 360-degree feedback and frequent planned retrospection are quite important. Retrospect on\n “what worked well, or things I/we should continue doing”, “what did not work so well / things to improve / things to stop doing”, “Appreciations” and “Questions and concerns”. Bringing feedback earlier in the process is called the “shift left” approach: move feedback closer to the start. Testing and verification are also forms of feedback; the sooner we start testing and verifying, the more time we get to plan the action. Vote for test-driven development and automated testing, the Extreme Programming practices that achieve shift left.\nIf one wants their team members or clients to improve or change, then the first thing they should do is share and receive feedback with clear context, and with reasoning on why it is important to act upon it. Don’t assume that other people automatically understand what you expect from them; communicate the expectation. Most issues arise when we don’t set expectations and communicate them clearly. Taking and giving feedback is also an art: ask clear questions about the things you want people’s opinions on. Another rule is to appreciate people in public, but when we need to correct them, have the conversation in private.\nDefine your defaults As a consultant or consulting org, define your defaults. When a person thinks of you, what comes as the default?
When a client sees your deliverables, what should they think of as the default?\nFor example: security by design, performance by design, clean code, agile teams, zero tolerance for bugs, the best user experience, innovation, or what?\nWhat are your default rules of delivery and execution - standard buffers, default estimation techniques, metrics, checklists, definition of done, definition of ready etc.?\nAgility vs Scope Changes Many people think that “Agile” means accepting any change from a client at any time, and on that line they continue, trying to please their clients by burning themselves out. In my opinion, agility in software engineering never means creating more chaos and burning out unreasonably. Rather, it says that change is inevitable, so manage the changes better, try to shift left and get faster feedback on what you are supposed to deliver, and plan what you can achieve in the remaining time.\nConsultants need to come with a change plan, and talk like “what best can we do in X time-frame?” rather than “this needs to be done in X time-frame”. It’s all about scope negotiation, and consultants need to be good at it. Everything has an associated cost; no matter what your delivery methodology is - fixed-bid or time-and-money - someone is always bearing that cost. Some people say we don’t need to bother much on time-and-money projects as the client is paying for what they want; I strictly say NO to this attitude. In time-and-money, the risk of over-budget or over-effort is owned by the client, but that does not mean we are not responsible for it; as we discussed at the start of this article, it is a shared ownership.\n “Technical debt is real debt. It will eventually be paid by someone. And unless we take steps to hold companies and executives accountable for preventable — and foreseeable — failures, it will be we the public that keep paying.” — Zeynep Tufekci.\n So manage the scope well, and keep track of your bucket: if some things are getting added, then other things need to go out, unless we can increase the bucket size; it’s a matter of prioritization and negotiation. If time is fixed and we need to deliver more than planned, one way is to increase the capacity (and that too only up to a limit - adding more people does not always lead to proportionally increased delivery). Capacity may also be increased by improving delivery efficiency; however, if we plan to deliver increased scope without increasing capacity, that means we would deliver by burning people’s lives. In such a situation people’s productivity is even lower, and we are even more on the losing side.\nIt is also important to do a proactive assessment of what we are planning, review planned vs actual frequently, identify risks and plan for mitigation. Communicate the risk and the mitigation upfront to every stakeholder. Keep it in formal communication, in the form of a document or email, instead of just verbal communication.\nContinuous Innovation, Continuous Experimentation Tapping into change or managing change never means restricting ourselves from trying new or unknown things. Rather, as consultants we need to focus even more on continuously evolving and innovating. We need to try more without worrying too much about failure; in fact, if we need to fail, then plan to fail faster rather than later.\nCelebrate failures, just like Amazon does in making the successful failure; read here the takeaways from the book “Success Secrets of Amazon” by Steve Anderson. It talks about 14 principles that led to Amazon’s success.
Not all the steps Amazon takes are successful; in fact, Amazon doesn’t take a step but takes a journey, which means continuous effort to make the way ahead. I have personally attended some interesting sessions at FailConf, a Thoughtworks internal event, where we collectively learn from our mistakes.\nPrioritization - Important vs Urgent Our plate is full of things to do, and every day things keep getting added to it. If we don’t manage it, then it will spill over.\nWe need to see where we are putting our effort. If we are always in the chaos of doing things that are mission critical, then we need to think about our planning; we probably don’t spend enough on planning for strategic tasks. The tasks which are urgent but not important may be automated or delegated further; and for the tasks which are important but not urgent, put in time to plan them well, and pick them up as the time comes.\nRead more about prioritization here. (Image source: ideatovalue.com)\nThere are other ways to prioritize things; you may do trade-off exercises as described here.\nConflicts of Interest Software is developed by humans, and wherever people work, emotions, likes, dislikes and interests are bound to arise; people come from diverse backgrounds, cultures and races. All of this leads to bias or favor towards particular types of decisions. A group of similar-minded people also tends to influence decisions in particular ways. It is always favorable to include people with diversity. As consultants we need to make the right decisions - unbiased, uninfluenced by particular situations - and avoid conflicting interests impacting the decisions.\nTry to collect inputs from a group by listening well and observing, and then filter out inputs that are too heavily influenced by personal emotions or bias. It is quite common that people with varied interests come together as a team, but it is the responsibility of these individuals to maintain decorum as a team. Opinions are fine, but don’t let your conflicting views spill out and pull more people within your team onto this side or that side. First think of the team, and then think of yourself; anything that is not in favor of the team’s interest is not in favor of you either.\nThis brings us to the end of the article. Apart from the above points, we also discussed measuring happiness, ideation, team bonding events, open seating arrangements, and project walls as a playground for brain exercises.\nConclusion Every year we make resolutions, and we plan and execute some of them to get closer to our targets; however, unless we measure our progress we don’t know how close to or far from the target we are. That means we don’t know if we are going in the right direction, whether progressing there is taking us closer to or farther from the target, or whether we need to change our approach. That’s the conclusion of the session: we need to reflect on what we do, by measuring it.\n “We can only improve the things that we can measure.”\n Happy measuring!
Happy new year!\nThis article is also published on Medium\n","id":74,"tag":"DesignersX ; guest ; lecture ; software ; reflection ; process ; design ; organization ; takeaways ; agile ; technology ; success ; change ; motivational ; development ; consultant ; event ; speaker ; DesignersX ; technology","title":"Guest Lecture on 'Forward Looking Reflection' at DesignersX","type":"post","url":"/post/forward-looking-reflection/"},{"content":"Recently I shared an article on my life lessons from my father’s work life, and mentioned that I dedicated a new home to my father’s work life; I also promised that I would share some corporate lessons learned from making over this new home. I can well relate my takeaways to corporate experiences such as vendor selection, negotiation, solutioning, estimation, prioritization, agility, being a trusted partner and many more.\n As I keep saying, “we learn every day in all situations of life, good or bad”\n Here also, I learned a lot while working and dealing with masons, carpenters, electricians, plumbers, authorities and contractors. I also tried to apply my earlier learnings and use technology to better communicate my needs to these work executors.\nI invited quotations from different contractors and makeover agencies, and got varied offers. Everyone came up with great features and styles of delivery, and showed their past work. I learned new ways of negotiating, balancing the scope, and prioritizing things when you have X budget. It’s not that I selected the best one to go ahead, or the one that best fit my needs, but it was an experience. It was nothing less than our corporate projects, where things change as we proceed - requirements, schedule, and budget - no matter how strongly we say “this is what I need” and “give me the fixed estimates”. So how should we bid, and what should the estimated cost be? Is cost the only factor in getting selected? Why does a particular vendor get selected over the others despite higher estimates or comparatively fewer features? I looked back at my decisions and I see them as corporate lessons. Other areas I will reflect on are the capability people don’t realize they have, and how a short-sighted vision impacts the long-term path.\nHere I start; the following are the top 5 corporate lessons I learned from this home makeover exercise, which ran for around 6 months.\nPitch for the ask, not your capability I met multiple vendors and contractors before choosing one. Some were focused on showcasing their capability through what they had done, and took me to multiple locations to show their work; some were focused on showing the number of people working under them; some showed how transparent their payment plan was, focusing on discounts; some were showing very professional-looking printed quotations; and some were talking about bringing expert architects into designing the home, etc. However, only a few were talking about what I needed: do I really need a very professional and expensive setup? Do I really need an expert interior designer? How do I reuse the existing investment? How do we work optimally? How important is the cost factor for me? What are my priorities? Where would I be ready to compromise?\nFinally, I went ahead with an average contractor who could implement my own designs and work on my terms. I asked him if he understood my need, and he responded that I was not looking for very expert/expensive architects.
I do have my own designs, and I am a technology person who can talk to workers and convey what I need; I would closely monitor the work, and I don’t have time to manage logistics and transportation etc. I can say he closely understood my need. Knowing this, he was also able to provide relatively lower indicative cost estimates, while keeping some parts vague; he was fairly sure that my requirements would change as I went along, and I liked his confidence. I had worked with this contractor years ago for a relatively small job. He wasn’t great at that time in terms of discipline on schedule and managing things, but I believed that over these years he had gained experience and might fit well this time.\nCost and budget may not be as important as they look Cost and budget may be key criteria for getting selected, and I also selected the one who came with a competitive cost. His offer was not too high or too low compared with the others, but it was different: he defined a range for different items, and shared his experience that my budget would really change and things would change on the go. He was also keeping some buffer, and if things went beyond that buffer then I would need to pay accordingly. He kept a lot of things optional. I felt empowered that the cost was in my hands.\nSo a balanced pitch is better; too high or too low will keep you out of the race. Showing flexibility is key here, some vagueness may be fine, and calling out the buffers and unknowns helps. The most important part is to empower the client to control the cost.\nBy the time we finished the home makeover, my requirements had changed; my contractor was right, and I ended up paying 50% extra in 30% additional time. However, I still feel that the cost was in my control. In most of the corporate world there are hardly any projects that run within the budget we start with; requirements keep changing, and we end up with something different. So, in the long run, what we want to build is more important than the initial budget and cost. If the customer is happy with what is being built and it meets their expectations, then the customer may be ready to invest more; but if things do not go as expected, then the customer will not be happy investing even the agreed budget. In the latter case, if a customer is still investing, then we are at the losing end even if we are getting paid.\nDon’t assume that the client fully knows the ask As I explained in the last section, I had some budget in mind and some idea of what I wanted to do, but did I know exactly what I wanted? The answer is NO. My requirements kept changing as I proceeded, due to inputs from family, cost trade-offs, looking at more options, and seeing how things were progressing. Input from the contractor and the experts working with him had a good influence on the decisions I made.\nThe same goes for corporate projects: if the client has selected us, they have placed great trust in us to provide a solution, and they believe in our experience, trusting that we will provide the right inputs and options at the right time. If we are just order takers then we will end up with just the work that we signed up for, but if we become a trusted partner for them, and provide our expert opinions on what our client should be doing, then we will go a long way.\nYes, we can do it, and here are the options Let me share my experience with a carpenter who was working under the contractor I selected.
The carpenter was really good at his work; the only challenge was that whenever I asked for a change or correction in the work he had finished, his first response was “NO”, with a lot of resistive reasoning. On the other hand, when I took the same thing to the contractor, the response was “YES, this can be done, at this cost or with these options”. This matters a lot in the long run. Despite the carpenter’s great potential, I would prefer working with the contractor over the carpenter, as I know he would get my work done, with the same carpenter or maybe with some other, not-so-good carpenter. Hard skills play a role, but in the long run you need other soft skills too.\nIn our corporate life as well, if we focus more on the solutions then we will be on the path of growth and will solve the customer’s problems; otherwise, there may be 1000 reasons for not doing things. Try to share multiple options with pros and cons, along with your recommendations, and then let your clients share their inputs, negotiate, and find the middle path. Be flexible on your terms; remember, “you are the expert”, and providing a solution is in the best interest of both you and the client, even if it costs you more than originally thought. Share your reservations with the client; they will understand. Even if they don’t understand now, it will pay off in the future. Don’t let a short-sighted vision overpower the long-term opportunity.\nBe a trusted partner I had worked with the contractor I selected before; however, at that time he was just starting his business and did not have much experience. He did a relatively small assignment for my old home years ago, and his work was satisfactory. He was able to crack this new contract with me despite his offer not being the best one available to me.\nHe showed the same empathy this time as well, showed great flexibility, and helped at various stages, even in areas beyond his scope - for example, transporting some old furniture from the old location to the new one, and getting it repaired while I was busy with office work.\nWe need to be trusted partners for our clients, and build the trust that we are here to solve their problems: just ask us, we will find a way, don’t worry. At times, if a client says they are left with no money to pay our cost, help them in different ways, such as discounts, a delayed payment plan, co-investing or something else. Of course, we can support the client only within our limits and capacity; if we can’t help, then maybe we can help them find the right people. Becoming a trusted partner is key: solve their issues by putting yourself in their shoes. It will take us really far.\nThese are the top 5 learnings from my home makeover, and after writing them together in this article, I realized that these are almost the same things I explain to participants of the “Consulting Mind Workshops” at Thoughtworks when they ask “what it means to be a good consultant”. So, in a nutshell, I can say what I learned is “being a better consultant”.\n","id":75,"tag":"motivational ; technology ; ar ; work from home ; general ; impact ; motivational ; life ; reflection ; takeaway ; learnings ; experience ; change ; xr ; thoughtworks","title":"Corporate lessons from a Home Makeover","type":"post","url":"/post/corporate-lessons-from-a-home-makeover/"},{"content":"National Institute of Technology Calicut is organizing Tathva, Asia’s largest student-run Tech Startup Expo.
Tathva, the techno-management fest of NIT Calicut, is one of south India’s biggest scholastic endeavours. Since its inception in 2000, this annual symposium has known only growth and has flourished into one of the few promising enterprises the nation has witnessed. Pushing the barriers year after year, Tathva aims at providing a plethora of platforms to the intrigued technical minds that arrive at this fiesta, with an array of events and workshops to sharpen their skills and widen their knowledge quotient.\nThis year, it is a three-day event from 21st to 23rd Oct, 2022, full of learning with workshops, guest lectures, and student competitions. It will focus on discovering the greatest ideas of students from diverse areas.\nI will be speaking on “SpatialWeb and Web3.0” at the event on 22nd Oct 2022, 9 AM. We will touch upon the following topics.\n Overview of Web3, SpatialWeb, Metaverse The hype, facts and figures Opportunities Sustainability and Concerns What can we do? Come join me at 9 AM IST, 22nd Oct 2022. Register here - https://www.tathva.org/lectures/spatial-web-and-web-3.0\nStay tuned for further details.\nEvent Wrap It was a great experience talking to the young minds and sharing my experience there.\n Find the slides here.\n ","id":76,"tag":"event ; nit ; students ; speaker ; lecture ; xr ; ar ; vr ; metaverse ; nft ; blockchain ; web3 ; spatialweb","title":"Guest Lecture on 'SpatialWeb and Web3.0' at NIT Calicut","type":"event","url":"/event/guest_lecture_nit_calicut/"},{"content":" I started thinkuldeep.com keeping in mind that we learn every day in all situations of life, good or bad. Our learning matures by sharing; we explore more of ourselves by giving it away to others.\n Recently my father turned 60 and retired from government service. 60 years full of life is quite a big experience, of which around 38 years he spent serving the same Govt of Rajasthan. We can learn a lot from his life, especially the younger generations who are behind money and super-fast growth, and change jobs every few years or months. There are a lot of differences between a fixed-routine 9-to-5 job and the corporate race we are running; both have their own sides. I am not sure if I will ever celebrate a retirement of mine like the one we celebrated on my father’s retirement day. I shared some reflections on his life in the retirement day speech for him, and I am sharing the brief here in this article.\nReflections from my father’s work life My father, Shri Banwari Lal Dhundhara, has retired from service as a clerk at the Public Health Engineering Department (PHED - Waterworks) in Rajasthan. As his colleagues explained on the retirement day, he worked hard and served people with honesty, great consistency and commitment, no matter what task came his way. He has rich experience of different jobs to earn his bread, from bus conductor to pump driver to clerk, and a farmer. There are so many things I have learned from my father.
It is very motivating to see how he managed all the things - mine and my sisters’ studies, family, home, work, farms - making us doctors and engineers while keeping his own profile quite low.\nWatch the video below from the retirement day speech, where I shared my 2 cents (it’s in Hindi/local language).\n Out of the so many things I learned, I explain here 3 things from his life that I follow very closely, and that have shaped who I am today.\nEarn the wealth that brings prosperity We have been taught from childhood that wealth earned with hard work and honesty will always bring prosperity, and such wealth always takes us on the path of growth. That’s what my father did for his whole work life: he served people with honesty and hard work.\nThis lesson started way back, when I was in 8th standard. I visited my father’s office and found that there were large book ledgers to record people’s water consumption data. However, the binding work for those books used to be given to an external vendor. I knew book binding very well, and asked my father if he could connect me with the right people there. I proposed a quite economical offer to the officer I met, and luckily I got the order to bind some books. I, with the help of my sisters, completed that and further orders for this binding work, and the day came when my father received Rs 1200/- for our work. Incidentally, on the same day, my father booked the land for our first house in town. He had this 1200/- in his pocket, and gave it as token money. I always remember that incident as life-changing, and as proof of how wealth earned with honesty brings prosperity. We have never looked back after that, and are still growing. My father always inspires me to earn that prosperous wealth, and not just money.\nStay on the path of truth Choosing the right path is a challenging thing in life, unless one has the right environment and the right people around. Staying on that path is even more challenging when the people and environment around you do not encourage you to follow it, because there are not enough successful instances, or people have not tried that path themselves but keep sharing discouragement. My father has mostly spent his life in a village or small town.\nHe stayed on the right path that he chose, despite the discouragement from people around him. The first thing my father stayed strong on was letting me continue my studies. People shared multiple examples in front of him, such as how kids get spoiled when they study higher, especially when they stay under the influence of maternal grandparents, whose love spoils them more. I was that kind of child, but my father ignored all that unwanted guidance, allowed me to continue to follow my passion, and allowed me to go to Kota city to prepare for the engineering entrance exams. I left with my grandfather to start a new journey around 800 km away from my place. I am sure it was difficult for my father; he might have had a fear of me getting spoiled in Kota, but he never showed that fear to me. That fear was a motivation for me to stay focused on my path, and to succeed.\nThe next thing was for my father to let my sisters study; they were also good in their studies, and they proved it too. See their story in my earlier article here. People’s discouragement didn’t stop there; they told my father that it would be very difficult to find the right life partners for these engineers and doctors he raised.
He stayed strong, and people’s opinion was again proved wrong; we all got life partners better than us in many ways.\nThese instances make me stronger in staying on the right path. There are always some obstacles, distractions and discouragements, but I try to stay strong like my father and stay aligned to the path of truth that I chose.\nDown to earth My father is a down-to-earth person; he teaches through his life that no work is small work, it’s just our thoughts that make us small. It does not matter much what others think about you, but what you think about yourself matters. He has achieved so much that one could easily let one’s lifestyle change, and such success sometimes goes to people’s heads so that they forget their past and their roots. He has made us doctors and engineers, and now he can easily afford a little luxury in life. Yet I see my father exactly the same as he was from the time I began to understand life. He still prefers a bicycle, walking or some public transport for regular needs, and does farming, while we kids are running around in cars, flying the world, and living in metro cities. I have never seen him talking or telling people about what he achieved in life; rather, he shows what he achieved through his work and his simple life.\nI think we complicate our lives. When I see my father’s lifestyle, it’s much simpler than mine (though I try to follow him, and keep it as simple as possible). He adapts to what is available and has no particular asks of others; he eats what is cooked - hot or cold, sleeps wherever there is a place, wears what is available, travels on whatever is available, and lives wherever life takes him. He prefers to go to bed early and get up early, and prefers not to disturb others for his day-to-day needs. Keep fewer things in life, and you need fewer things to worry about.\n I feel like he follows the concept of “Why worry?”. Try the Why Worry Simulator - click here to reduce your anxiety.\n In my childhood, I heard a statement in a TV commercial - “Simplicity has Strength” - that fits my father well. I do follow it closely. Learning is a never-ending process; life keeps teaching us whether we like it or not. Be flexible: acceptance and flexibility can make life simple and happy. Enjoy!\nRetirement day and more I wanted to make my father’s retirement day, 30 Jun 2022, a very special event, and I had been planning it for a couple of years. During the covid time, I got a chance to stay at my native place and work from home there, and we also got our home renovated. See my story of the renovation, what I learned, and how I used technology to convince my parents to get it done. The retirement event was planned in that renovated native home. It was a great fun-filled get-together.\nThe learnings continued from there. I applied them in building a new home at my workplace, Gurugram, and I am writing this article from that new home. I dedicate this new home to my father’s work life, and wish him a happy and healthy life ahead.\nI will share some corporate lessons that I learned while building this new home and working with different people of different capabilities and skills. Stay tuned!\n","id":77,"tag":"general ; impact ; motivational ; life ; reflection ; takeaway ; work from home ; covid19 ; learnings ; experience","title":"Lessons of Life - Father’s Retirement","type":"post","url":"/post/lessons-of-life-father-retirement/"},{"content":"Brief summary Extended Reality technology — XR — has had a lot of hype in recent years thanks to the concept of the metaverse.
But bold visions of the future can obscure the reality of what engineers and organizations are doing today.\nIn this episode of the Technology Podcast, hosts Scott Shaw and Birgitta Boeckeler are joined by Cam Jackson and Kuldeep Singh to discuss the XR work Thoughtworks has been doing with clients. Together they explore what it means for engineers and how the technology might evolve in the future.\nWe discussed the following:\n What do the X and R stand for in XR? Do I need to have a headset to experience XR? Is the phone the only means for augmented reality, or are Cardboard kinds of things still being pursued? What is the industrial side of XR? Are there ways to get started in XR that are cheaper or easier? Can I get started just on the web? Can people just access the web through these devices? If I were just a React developer and wanted to start working in 3D, is there a migration path for me? I know React, I know JavaScript, but it’s probably not one-to-one transferable. What are the different challenges? JavaScript and the browser don’t immediately strike me as suitable for real time, right? You must have had to address continuous delivery issues, like how do you deploy to a device like that? A gaze click would be: I look at something and then something happens in the glasses, yes? What are the testing practices? Suppose I was a developer and I wanted to get started doing this, what skills should I learn? Do I have to learn Unity, or what are the basics? Do the frameworks shield you from some of that? I would have thought that would have been worked out already? It seems like it’s almost an architecture problem, as opposed to design, like visual design. You’re designing spaces, aren’t you? What is it that’s stopping it from more general adoption? Do you think the hype is becoming reality? The metaverse is like cyberspace; we’re living in cyberspace, but we don’t call it cyberspace. Listen here Thoughtworks Podcast : XR in practice: the engineering challenges of extending reality\n","id":78,"tag":"xr ; ar ; vr ; mr ; fundamentals ; thoughtworks ; technology ; podcast ; event","title":"Podcast : XR in practice: the engineering challenges of extending reality","type":"event","url":"/event/xr-in-practice-engineering-challenges-extending-reality/"},{"content":"After a wonderful experience of participating as a judge and mentor at Smart India Hackathon 2019 and 2020, I am delighted to share that I got a chance to meet and judge India’s young winners and innovators at the Grand Finale of Smart India Hackathon Juniors 2022. Smart India Hackathon 2022 is a nationwide initiative to provide students a platform to solve some of the pressing problems we face in our daily lives, and thus inculcate a culture of product innovation and a mindset of problem solving. It is the world’s biggest open innovation model, organized by AICTE and the Ministry of Education, Govt of India, along with other industry partners. This edition is very special: it is for school students.\nI attended the grand finale on 12 Aug 2022, and am sharing here my experience and learnings. I was assigned to a nodal center — Noida Institute of Engineering and Technology — and here is how the rest of it went.\nInauguration Session The grand finale inauguration session was started at 9 AM by Dr.
Mohit Gambhir, Director, MoE’s Innovation Cell and Secretary, Organizing Committee, Smart India Hackathon 2022.\n Grand Finale Judging Sessions Judging sessions started at 10:30 am, and there were multiple sessions in parallel. I joined one of the tracks and evaluated 9 teams in the first round, from which we needed to select the top team. Problem statements for my teams were in the category of “Transportation_and_Logistics”.\nEvery team was given 20 minutes to discuss their ideas, and was evaluated on solution approach, novelty, ambitiousness, complexity, impact, technology, innovation, user experience and execution feasibility. It was a very transparent process; the evaluation criteria were shared well in advance with the jury as well as the students. Every evaluator was supposed to evaluate independently on an online platform.\nWe were amazed by the ideas kids shared to detect and solve landslides, tunnel safety, rider safety, road safety, bus/train monitoring, traffic issues, parking problems, smart cycle stands, scaling electric vehicles, and so on. They all presented their ideas so confidently that at times we forgot they are Juniors, still in school. It was very difficult to choose from these ideas.\nThen we had another round, the Power Judging Round, where the top team from each panel presented, and here we concluded with the top 5 of SIH Juniors 2022 from the NIET nodal center.\nThanks to the NIET faculty and student coordinators for organizing it so well.\nSmart India Hackathon 2022 Seniors This is coming next; I participated as an evaluator some time ago, and the grand finale will be planned soon.\nGood bye, keep in touch, keep rocking, stay safe!\n#AjadiKaAmritMahotsava #AatmnirbharBharat\n","id":79,"tag":"sih ; experience ; takeaways ; event ; hackathon ; judge ; mentor","title":"Judging the Smart India Hackathon Juniors 2022","type":"event","url":"/event/sih-2022/"},{"content":"eXtended Reality (XR/AR/VR) technology is changing the reality of almost every industry today by enhancing user experiences. XR adoption in the museum industry is also quite natural; let’s understand how it is shaping up the Museums of the Future.\nA museum is a place that displays artifacts from history, culture, science, art and more. As per the International Council of Museums (ICOM), there are around 55,000 museums worldwide, designed around different themes and serving different types of artifacts. Museums are all about providing a user experience where visitors immerse themselves in the environment provided. Over the years, the physical design and layout of the museum, including colors, interior/exterior, arrangement of artifacts, audio/visual guides and tips, have played key roles in improving the user experience.\nMuseum visitors typically want to cover more in a short time, and keeping all types of visitors engaged is tedious, especially the younger ones who are well equipped with today’s gadgets and for whom traditional brochures and signage may not work well. So museums are finding ways of evolving the user experience with technological advancements: paper based brochures, navigation maps and tips are turning into smart touch based displays, websites, or even sensor based touch-less interactions. Mobile apps and smart devices are replacing the traditional audio tapes. Navigating through the museum is now getting driven by technology, and the pandemic has induced the need to go virtual.
The museum industry is exploring AR/VR based technology to provide new ways of interaction and to enhance existing experiences to be ready for the future, where the ask from the metaversal museum will be quite different: it would demand not only digital (virtual) replicas of the museum itself but also that the whole experience evolve.\nLet’s understand the different use cases for the museum industry that can be extended with XR technology, as follows.\nExtended Navigation — The Museum Explorer Museums generally spread across multiple floors and buildings and have large spaces, so an easily accessible navigation path, indoor and outdoor, has been a key experience driver. A museum’s experience has to be a journey, starting from a visitor entering the premises, parking the vehicle, booking, and entering the museum, then visiting different artifacts, until the visitor leaves the premises.\nTraditionally, museums provide paper based info brochures, maps, navigational signage, points of attraction, and audio guides in different languages with markers. These are not easy to follow for the specially abled or for people whose language is not supported, and they are hard to maintain too.\nThere has been some effort already in building indoor maps from Google, similar to the standard Google Map, and now a number of AR applications are being built for indoor navigation; an example is shown below in Fig#1 from the OfficeExplorer App from Thoughtworks. In such a case, a visitor can use an AR application on a smartphone/tablet or a smart glass like the ThinkReality A3, and follow the path overlaid on the environment, or follow a digital character that appears in the app.\nFig#1 Indoor navigation using AR\nThe digital character can talk to the visitor and guide them along the path to the area of interest, or take them to the point of attraction. A visitor may also choose the language of their choice. So instead of maintaining fixed audio tapes, devices and paper manuals, museums can now maintain navigational paths digitally. Not just that, the paths can be designed and controlled dynamically, as per need and crowd. Museums can monitor who is following which path, how much time people spend where, and which areas are less explored, and can then improve the points of attraction.\nFig#2 Administrator defines the routes for specially-abled visitors\nVisitors may be shown different navigation paths with time/waiting time, points of attraction etc., and they can choose what suits them best. There can be a dedicated navigation path configured for the specially abled, or for people who need special attention. Visitors can pre-download the app and may pre-book the navigation, so they get a personalized experience and enjoy the visit.\nExtended Artifacts — Tell Me More The purpose of a museum is to display a set of artifacts from history, science, art, culture and so on. Information about these artifacts is typically available in the form of brochures and printed boards near the artifacts. Some museums have also adopted audio devices/visual displays near the artifacts that can narrate information about them.\nFig#3 — Scanning a QR code tagged artifact\nIn recent times, artifacts are getting digitally tagged with QR codes, barcodes, RFID etc. Visitors use tag reader phone apps/devices to scan the tags and get the relevant information about the artifacts; this is a far more flexible way to manage artifact information than the traditional crude paper based ways. Fig#4 shows the digital information overlaid near the artifact scanned in Fig#3.\nFig#4 — Tagged information shown on the artifact
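The plumbing behind such a tag based experience can stay deliberately simple. Below is a minimal TypeScript sketch of the lookup step: the tag decodes to an artifact id, and the app fetches the display content for it. The endpoint, the `ArtifactInfo` shape and the sample URL are illustrative assumptions, not the API of any particular museum app.

```typescript
// Sketch: resolve a scanned tag to artifact content.
// Assumption: the QR code encodes an artifact id, and a hypothetical
// REST endpoint `/api/artifacts/{id}` serves its display content.

interface ArtifactInfo {
  id: string;
  title: string;
  description: string;
  audioUrl?: string; // optional narration in the visitor's language
  modelUrl?: string; // optional 3D model to overlay in AR
}

async function onTagScanned(rawTagValue: string, language: string): Promise<ArtifactInfo> {
  // QR payloads are often URLs; take the last path segment as the id.
  const artifactId = rawTagValue.split("/").pop() ?? rawTagValue;
  const response = await fetch(`/api/artifacts/${artifactId}?lang=${language}`);
  if (!response.ok) {
    throw new Error(`No content found for artifact ${artifactId}`);
  }
  return (await response.json()) as ArtifactInfo;
}

// Usage: wire this to the scan callback of whatever QR library the app uses.
onTagScanned("https://museum.example/artifacts/blue-vase-1742", "en")
  .then((info) => console.log(info.title, info.description))
  .catch(console.error);
```

Because the content lives behind an id rather than on a printed board, updating a description or adding a language becomes a content change rather than a physical one - exactly the flexibility argued for above.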
With advancements in computer vision and cloud technology, artifacts can now be tagged and persisted as Anchors (e.g. Google Cloud Anchors) with respect to the environment. XR applications can have image and object detection capabilities that allow showing augmented information directly on the physical objects. For example, Fig#5 shows information about a book (such as the book rating, views, number of pages, reviews etc.) just by looking at the book. This is a snapshot from the TellMeMore app built by Thoughtworks.\nFig#5 — Show augmented information on a book just by looking at it from the AR App.\nThe AR app shown in Fig#5 is pre-trained for this book image; similarly, all the physical artifacts in a museum can be scanned and an AR app pre-trained for them. We can augment any information - audio, video, image, text or document, website etc. - at any position relative to the artifact.\nThis type of application improves the visitor experience by bringing contextual information at ease, as per their preference. In addition, it allows faster editing/configuration of the museum and onboarding of new artifacts. We can measure telemetry data around usage of the content, see which artifacts are accessed most and what kind of information visitors are interested in, and update the content based on that. It also saves the resources consumed in managing these artifacts, so we can divert effort to where it needs attention most.\nExtended Visitor Engagement - Look Around Keeping the visitor engaged is one of the important tasks while designing the museum experience. Traditional ways are getting outdated, and engaging the modern crowd needs modern solutions. The current generation is well versed with smart gadgets and actively connected on social media.\nXR can help improve visitor engagement by gamifying the experience, for example by introducing games like treasure hunts and Pokemon Go, where visitors can sign up and compete in groups. At Thoughtworks, we have built an app named LookARound that lets users scan QR codes configured across a large space, while visitors attempt to collect goodies appearing randomly around the areas of attraction (a toy sketch of this mechanic follows below).\nFig#6 — LookARound app concept for visitor engagement\nThis type of app helps in designing large spaces like museums for better customer engagement: in the search for goodies, people will explore the places we want them to visit. Since visitors often come in groups, this could be a great app for experience enhancement.\nAnother way of improving engagement is to let users provide reviews, feedback and ratings on the artifacts directly by looking at them from the XR apps; other visitors can see these reviews and ratings, and they can also be published on social media channels.\nThere could be experience booths for visitor engagement offering advanced experiences like the time travel described in the next section, and other value added services such as pointers to food joints, offers, advertisements etc.\nFig#7 people experiencing XR at a booth
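To make the gamification concrete, here is a small, self-contained sketch of the goodie-collection loop mentioned above: collectibles spawn near curator-chosen spots, and points are awarded when a visitor comes close enough. All names, coordinates and numbers are made up for illustration; the actual LookARound implementation is not public, so treat this as a sketch of the idea, not the app.

```typescript
// Toy sketch of a LookARound-style collection mechanic (illustrative only).

interface Spot { x: number; y: number }                    // a point on the museum floor plan
interface Goodie { id: string; at: Spot; points: number; collected: boolean }

const COLLECT_RADIUS_METERS = 2;

// Curators pick the areas they want visitors to explore; goodies spawn there.
function spawnGoodies(areasToPromote: Spot[]): Goodie[] {
  return areasToPromote.map((spot, i) => ({
    id: `goodie-${i}`,
    at: { x: spot.x + (Math.random() - 0.5), y: spot.y + (Math.random() - 0.5) },
    points: 10,
    collected: false,
  }));
}

function distance(a: Spot, b: Spot): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Called whenever indoor positioning reports a new visitor location.
function onVisitorMoved(visitorAt: Spot, goodies: Goodie[], score: { total: number }): void {
  for (const g of goodies) {
    if (!g.collected && distance(visitorAt, g.at) <= COLLECT_RADIUS_METERS) {
      g.collected = true;
      score.total += g.points;
      console.log(`Collected ${g.id}! Score: ${score.total}`);
    }
  }
}

// Usage: seed goodies in an under-visited wing (coordinates are made up).
const goodies = spawnGoodies([{ x: 40, y: 12 }, { x: 44, y: 18 }]);
const score = { total: 0 };
onVisitorMoved({ x: 40.5, y: 12.3 }, goodies, score);
```

The interesting design lever is where the goodies spawn: seeding them in less-explored galleries quietly turns the engagement game into a crowd-shaping tool.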
Extended Immersive Experience — Time Travel We have talked about user experience enhancements in the previous sections; here we will talk about taking that experience to the next level with XR. A visitor looks at an artifact wearing a smart glass, and the whole environment changes to match that artifact. It would be as if the visitor has traveled to that time, seeing people wearing the dresses of that era and the buildings and designs of that time. An example of this is shown below.\nFig#8 — Time travel to history in XR\nThis can be further enhanced by multi-user collaboration, where a group can join in and be part of the same session. A visitor may choose and try clothing from history, take selfies in those clothes and ornaments, and share them with others.\nAnother example: looking at dinosaurs at a museum could take us to the dinosaur era, where the whole environment changes into a jungle and we see dinosaurs roaming around with sounds and animations.\nSimilarly, if we are in a car museum looking at a car, it can let visitors experience the car as follows. Fig#9 XR visualisation of a Car\nZero Museum - Open Museum The Zero Museum is an emerging concept of a museum without any physical artifacts: we still have a physical museum, but with empty space. The artifacts are all shown virtually, either projected in 3D or shown via smart headsets or AR/VR enabled phones/tablets.\nThis is a very interesting concept: the same space can be used for different types of museums, and multiple visitors can even see the same place differently at the same time.\nAnother emerging concept is to think of the museum as a city scale experience, like turning all the parks or open spaces of a city into museums or art galleries, where people can search for such places and go there to experience them. In fact, people may also contribute by submitting content.\nVirtual Museum Last but not least, the Virtual Museum, where the whole museum experience is replicated virtually. There are multiple ways of doing that. Visitors can experience it directly in a web browser and access the museum as a 360 degree tour (example here; a minimal code sketch follows below).\nAlternatively, users can experience a museum in a virtual reality headset. They can walk inside the museum using the device’s controller, or walk naturally, as today’s devices can track the user’s movement well.\nThis is not just about experiencing the museum artifacts; it can also replicate the people who are visiting the virtual museum at the same time - their avatars can come and interact with you. This is called the metaversal museum experience, and nowadays even NFT museums are getting hyped, which aim to bring artists and the creator community together and drive the creator economy.\nMuseums can also build VR training and education materials to run with schools and audiences in a studio setup on the premises, as additional services.
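Coming back to the browser based tour mentioned above, the web route is remarkably approachable. Here is a minimal 360 degree viewer sketch using three.js: an equirectangular photo of a gallery is mapped onto the inside of a sphere, with the camera at its center. The image path is a placeholder assumption; any equirectangular panorama would work.

```typescript
// Minimal 360° gallery viewer with three.js (sketch; assumes `npm install three`
// and a placeholder equirectangular image at /assets/gallery-360.jpg).
import * as THREE from "three";

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 100);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// A sphere flipped inside out, so the panorama is visible from within.
const geometry = new THREE.SphereGeometry(10, 64, 32);
geometry.scale(-1, 1, 1);
const texture = new THREE.TextureLoader().load("/assets/gallery-360.jpg");
const panorama = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ map: texture }));
scene.add(panorama);

// Slowly pan the view; a real tour would wire up drag or gyroscope controls.
function animate(): void {
  requestAnimationFrame(animate);
  camera.rotation.y += 0.0005;
  renderer.render(scene, camera);
}
animate();
```

The same scene can later be promoted into a headset session via the renderer's WebXR support, so the web tour can grow into a VR tour without a rewrite.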
Conclusion I have documented ways to extend the museum experience using XR technology, where users get an enhanced experience using an XR enabled phone/tablet or XR smart glasses/headsets. All the different types of AR/VR/MR use cases are possible for museums. XR can cater both to audiences who want to physically visit the museum and to those who want to experience it remotely. It can provide a touch-less/contact-less, safe experience for physical visitors, and an immersive virtual museum experience for those who don’t want to come for a physical visit or can’t come due to certain medical conditions. The metaverse will also bring new ways of experiencing the museum.\nXR brings the flexibility to change the museum as per the visitors’ needs, and can provide a very personalized experience. Visitors could see a different theme/style of the museum on every visit, or, in the future, we will be able to configure museums to our individual needs.\nHappy visiting! Happy learning!\nThis article is originally published on XR Practices Publication\n","id":80,"tag":"xr ; ar ; vr ; mr ; usecases ; thoughtworks ; Enterprise ; technology ; museum ; lenovo","title":"eXtending the Reality of Museum with XR","type":"post","url":"/post/extending-reality-of-museum-wtih-xr/"},{"content":"The XConf XConf is Thoughtworks’ annual technology event created by technologists. It is designed for technologists who care deeply about software and its impact on the world.\nLast year XConf was virtual, but this year we are returning to a physical one, so come meet us face to face in Bangalore at ITC Gardenia on Friday, July 29.\n I am excited to share the stage with renowned technologists like Supriyo Ghosh, Chief Architect, Open Network for Digital Commerce (ONDC) India, and our Heads of Technology Bharani Subramaniam and Vanya Seth, along with other techies across three streams: Emerging Tech, Digital Transformation and Data.\nI will be speaking with Raju Kandaswamy on “AI Assisted Content Pipeline for XR” Highlights of the talk Understanding the ask from the Metaverse and XR The XR content management - current stage and challenges Content automation pipeline using AI Mass generation of XR content Register here This is a register-only event; please register here. Registration is closing soon.\nEvent Wrap and Key Takeaway It was amazing to join such an event face to face after so long; there was a lot of excitement among participants as well as organizers.\nHello from #XConfIndia event!\nWe'll be live tweeting some snippets from the conference. Watch this space. #TechConference #ThoughtworksIndia pic.twitter.com/WWEm9kWdUE\n— Thoughtworks India (@thoughtworksIN) July 29, 2022 The stage was set well, and it was a full house. #XConfIndia was trending on social media. The ONDC, the next disruption in digital commerce I loved the keynote by Supriyo Ghosh, who talked about the Open Network for Digital Commerce (ONDC), its architecture, and how decentralized architecture will come into play. In talks on metaverse adoption challenges, I have always said that there will be a tussle between centralized and de-centralized architecture: community based eco-systems would like it de-centralized, but central authorities and governments have always wanted central control. Looking at the ONDC architecture, and how the government is adopting it safely, changed my thinking.\nHe further deep dives into democratizing #digitalcommerce and the tech initiatives ONDC has implemented to get there. #XConfIndia pic.twitter.com/sNM7uz3fXm\n— Thoughtworks India (@thoughtworksIN) July 29, 2022 Top 10 tech trends Very well covered by Bharani Subramaniam and Vanya Seth. Key trends they spoke about: scripting the kernel (eBPF), taking ownership of rising cloud costs, preserving data privacy while linking databases (record linkage), authorisation as a service still being a need (e.g. SpiceDB, inspired by Google’s Zanzibar), pretotyping over prototyping, cleaning up software supply chain systems, building protocols that can build platforms, sustainability, and more.\nThoughtworkers @bharani_sub and @vanyaseth get started with the ten technology trends you need to know as #technologists at #XConfIndia.
pic.twitter.com/JkLnkuEs0P\n— Thoughtworks India (@thoughtworksIN) July 29, 2022 Tech Tracks The overall focus was on data and AI. There were 3 tracks: one focused on data and DataMesh based practices; one focused on digital transformation, where we talked about development practices such as security at scale, clean code, service meshing etc.; and a dedicated track for emerging tech. At the emerging tech track we talked about how AI can help in XR/Metaverse content generation and drug discovery, and how to measure the carbon intensity of the software we are producing.\nExtended Reality#XConfIndia @thoughtworksIN pic.twitter.com/JePZUGDxEl\n— Karanchdey7 (@Karanchdey71) July 29, 2022 Extended Reality…#XConfindia@thoughtworksIN pic.twitter.com/e8KtoIOPFH\n— Karanchdey7 (@Karanchdey71) July 29, 2022 Hands-On Workshop Two workshops were organized on very interesting topics: “Patterns Of Distributed Systems” and “Divide and conquer with Micro-frontends”. There was an overwhelming response from the developer community.\nXR Experience Booth We had an XR experience booth where we showcased a number of industry usecases on XR devices: Lenovo ThinkReality A3, HoloLens, Oculus, Cardboard, and mobile based XR.\nExperience the feel of Meta and going completely virtually being in-person....\nIt's really enriching how uniquely volunteers are explaining @thoughtworksIN #XConfIndia pic.twitter.com/KlTxfLx0rD\n— Parvez Alam I (@Parvez__AI) July 29, 2022 The event was full of life and fun.\nVideo Link Here Happy learning!\n","id":81,"tag":"xr ; ar ; vr ; mr ; thoughtworks ; event ; speaker ; xconf ; talk","title":"Speaker - XConf-2022 - Thoughtworks India","type":"event","url":"/event/thoughtworks-xconf-2022/"},{"content":"The VAMRR ITeS Leaders Round Table The thought leadership roundtable is an exclusive event hosted by VAMRR Technologies Pvt Ltd, which invites an elite gathering of 20 ITeS leaders focused on the areas of Design Visualization, Digital Twin and Hybrid Workspaces.\nThis one is the 98th Conference/Round Table by VAMRR Technologies, and its focus is on developments across Advanced Visualization, Remote Design Collaboration, Hybrid Workspaces, 3D Workflows, Data Integrated Digital Twins, Web3 and the Metaverse, which are all converging. As they converge, they offer exponential transformation impact and opportunities to the IT/ITeS/Technology sector.\nI would like to thank VAMRR Technologies for inviting me to share the stage with esteemed industry leaders from NVIDIA, Accenture, Bosch, Verizon, L&T, Wipro, Tata and more. Keynotes New Era of Design Collaboration w/o Compromising Performance, Security & Manageability. - Nikhil Parab | Head, Pro Viz Business India, NVIDIA\nFireside chat Collaborate & Build Digital Twins with Minimal Effort with NVIDIA Omniverse.
- Eloise Tan | Senior Business Manager, Software APAC, NVIDIA, and Anand Gurnani | Founder, VAMRR TECHNOLOGIES\nThe Roundtable Discussion topics at the Round Table include:\n The Convergence of Design, Visualization & Technology Managing Design & Remote Design Collaboration in Hybrid Workspaces Design to 3D to Data & Digital Twin The Business Impact of Advanced Visualization, Immersive Technologies & Digital Twin Participants Ashwin D’Silva | Managing Director, Enterprise Metaverse - Global IT | Accenture Suhas Bendre | Director | Cognizant Interactive Vamsidhar Sunkari | Senior Expert - XR Solutions | Bosch Global Software Technologies Mayank Jain | Chief Metaverse Officer & Head of Digital Transformation | US Tech Solutions Kuldeep Singh | Head of XR Practice, India | ThoughtWorks Ravi Varma | Vice President, Design Services | Evalueserve Vinay Ashok Jirgale | Head of Emerging Technologies (AI/ML, AR/VR) | Neilsoft Nilabdhi Samantray | Vice President, Head - Emerging Technologies | CSM Technologies Ramesh Sethuraman | Sr Director | HCL Technologies Suuresh Halarnakkar | Director, Product Design, Blue Jeans | Verizon Pranjal Jha | Sr Manager, Global Technical Sales & Digital Transformation | Cyient Ltd. Ajay Rawat | Senior Art Director | Genpact Dr Amarnath CB | Head BIM Strategy | L&T Jhhen Choudhary | Engineering & Manufacturing CoE - Lead | Tata Technologies Ajinkya Bhosale | Manager | Wipro 3D Kasiviswanadham Ponnapalli | Manager Content Engineering | GlobalLogic Rajeev Khanna | Senior Manager, Design | Genpact Kallol Roy | Director, UX Design | Grassdoor Logistics Technologies Vivek Dubey | Consultant, Digital Twin | General Mills Mahindrabalan GS | AGM AR | VR | 3D | Sify Technologies Takeaway We discussed and shared our experience of building for XR and the metaverse, and how it is impacting and challenging us. XR and the metaverse are like a technology at the stage when the Radio Frequency wave was first invented and everyone was wondering where it could be used. Imagination was at its peak: people thought that if we could transfer data over RF from one city to a nearby city it would change the world, and some were imagining connections between states, countries and space. XR and the Metaverse are likewise at a stage where usecases are being explored in all different areas; they will be adopted along with the evolutions.\nIn my previous article, I defined the current stage of the metaverse as a result of technology evolutions. NVIDIA Omniverse will fuel XR adoption, where photo-realistic rendering would be possible at the edge cloud and the smart glass/XR device would remain a display device that can show graphics at good frame-rates, with frames delivered from CloudXR. Content delivery and consumption is the key for XR solutions, and a platform like Omniverse would definitely help here.\nWe also discussed that technology should solve people’s problems; for example, this new era of remote-first digital transformation connects the world more on one side, and on the other side reduces the emotional connect. We need faster evolution: why are vehicles still driven by humans? Why is people’s participation required in wars to drive tanks and fly aircraft? Why can’t a driver work from home and drive a vehicle sitting in comfort? I
hope technology will some day take us there…\nHappy learning!\n","id":82,"tag":"xr ; ar ; vr ; mr ; thoughtworks ; event ; speaker ; metaverse ; leader ; talk ; design ; vamrr ; roundtable","title":"The VAMRR ITeS Leaders Round Table","type":"event","url":"/event/leadership-roundtable-vamrr/"},{"content":"The Metaverse The Metaverse is the most talked about, understood and misunderstood topic these days, and every other industry today is trying to define it in their own way to suit their narrative. This will end up creating many sub-metaverses with different sets of features, and would lead to confusion and conflicts of interest. An acceptable definition of the Metaverse will evolve with time. At this stage, I treat the metaverse as a result of technological evolutions in key areas such as ARVR, AI, Cloud, Blockchain, 5G/6G and hardware. Metaverse adoption will also depend on the further evolution of these technological areas. In fact, the metaverse is touted to be the next natural step in the evolution of the internet with use cases to boot - as The Next Internet or Web 3.0.\nEvery evolution comes with its own pros and cons; the same is true of the metaversal evolution, and it matures with time. It is important for leaders in enterprises, governments, institutions, educators and people to join hands and give the metaverse the right shape and direction. We need to create awareness about the metaverse and define best practices and standards, to adopt it ethically.\nI tried to define the Metaverse, its evolution, usage and concerns in my earlier articles - Getting into the Metaverse - Part1, Getting into the Metaverse - Part2 and Getting into the Metaverse - Part3. I have written the foreword for the book below, which can help you understand metaverse terminology.\nThe Book - Metaverse Glossary We first need to understand the terminology to speak the language of the context. I appreciate author Mr. Ravindra Dastikop for taking a step forward, collecting 300+ terms and coming up with the book “Metaverse Glossary”. This book will help create awareness about the metaverse and let people talk about it. The Metaverse is a very vast topic, and the “Metaverse Glossary” book covers a variety of terms across history, businesses, user experience design, development, tools, and techniques. People can easily traverse through it and speak the language of the metaverse.\nThe book covers the details well, with images and source references. It has a QR code for each reference that can be accessed directly on a mobile phone.\nThe Author Mr. Ravindra Dastikop is an educator with 30+ years of experience who promotes the power of technology among everyone - students and the larger community. Currently, he is specializing in the Metaverse and creating awareness about its transformational power. He also runs a stream series on Crater.Club on the Metaverse, covering all its aspects from all angles. He maintains Metaverse Matters - a popular blog, and conducts metaverse related workshops and courses for professionals, engineering and management students, and teachers.\nLinks to Buy Amazon : https://www.amazon.in/dp/9354468004/ Flipkart : shared soon. ","id":83,"tag":"event ; book ; xr ; ar ; vr ; metaverse ; book-review ; nft ; blockchain ; iot ; glossary","title":"Book Launch | Metaverse Glossary - Your gateway to the future","type":"event","url":"/event/metaverse-glossary-book/"},{"content":"RedBrick Hacks is the flagship summer hackathon of Ashoka University, open to students from Delhi NCR.
You will have the opportunity to learn, collaborate, and build meaningful projects alongside mentors from various disciplines. For 24 hours on the weekend of June 18, you will have access to free workshops, plenary lectures, prizes worth over ₹8L, games, and more!\nThe theme of the event was “Tech for Social Good”, with three main tracks: Health & Wellness, Sustainable Growth, and Security & Privacy. The participants had to identify relevant problems in these domains and build their own solutions within the 24-hour time frame.\nIt was a wonderful experience to join such an event face to face after so long. I participated as a Judge, joining the panel with Saud Khan from Twitter, Muthuraj Thangavel from Google, and Deepa Nagraj and Satish Rajarathnam from MPhasis. I was amazed to see such wonderful ideas for a topic close to my heart: “Tech for Social Good, Responsible tech”. Event Wrap I joined the closing ceremony with my colleague Jitendra Nath, who was a speaker at the event and shared his industry experience with the students. Students also enjoyed inspiring talks from the other speakers: Latha Raj, Utkarsh Uppal, and Joshua David.\nIt was great to have participants from different colleges all over Delhi-NCR: 140+ students from 25+ colleges. I got the chance to assess the top 10 teams in the jury round, and I was amazed at the innovative ideas and demonstrations the young blood of the nation brought to the table. They explained with ease some complex concepts of machine learning, computer vision, spatial computing, NLP and IoT. Not just that, they were well aware of the challenges we as a country are facing in this COVID time in healthcare, education and sustainability.\nIt was amazing to see these youngsters come up with interesting usecases and demo working concepts. This is not a complete list, just a glimpse:\n Automatic waste segregation using AR, and awareness creation apps Face mask detection, point cloud detection and a personal AI trainer for exercise and health routines Using technology to report and reduce crime Accelerating education through easy remote tools, accessible content management, attentiveness checking and alerting - really important when most education is being planned as online classes Patient monitoring and record keeping, remote consultation - it is the need of the hour to digitally transfer patient records and have remote consultation facilities Using technology to detect diseases and reduce the chances of mis-diagnosis Tools for specially-abled people Guides for sex education We would like to congratulate our winners; it was not easy to judge the event. I also told the students in the closing ceremony that this is just a start: all who participated are achievers, we get better at what we practice, and this platform has given the students a chance to become better ideators. I also advised the students to be cautious while using tech as a solution. Tech is just an enabler/tool to solve something; it is not a solution in itself. E.g., just using buzz tech like blockchain, ARVR, etc. may not make your solution innovative unless it solves some real good.\nThanks to the participants, mentors, organizers and sponsors of the event.
Keep it up!\n","id":84,"tag":"xr ; ai ; judge ; hackathon ; event ; ashoka ; ar ; innovation ; ml ; machine learning ; cv ; sustainability ; university","title":"Judge - Ashoka University Hackathon 2022","type":"event","url":"/event/ashoka-university-hackathon-2022/"},{"content":"The School of Computer Science and Engineering (SCOPE) of VIT-AP University is organizing a 1-day National Workshop on “Exploring the Blockchain in Metaverse” on June 11, 2022, from 11 am to 5 pm.\n The program is intended for students, faculty of technical institutes, professionals from industry and R&D organizations, and research scholars.\nI will be speaking at the workshop on “Gearing up for Metaverse”, and will touch upon the metaverse step by step:\n Metaverse - the definition part Metaverse - the hype part Metaverse - the opportunities part Metaverse - the concerning part Metaverse - the getting ready part Register here Come join us on June 11, 2022, from 11 am to 5 pm. Register here\nMy session will be from 2 to 4 PM.\nEvent Wrap Thanks to the organisers for the smooth organisation of the session. It was well appreciated by the audience, and some of them termed it one of the best sessions.\nHere is a certificate of participation. I also received an appreciation letter from VIT-AP University. Those who missed the live session can find the slides of the session here: Download PDF.\n I will share the video recording soon. Stay Tuned!\nHappy learning!\n","id":85,"tag":"event ; students ; speaker ; lecture ; xr ; ar ; vr ; metaverse ; webinar ; nft ; blockchain","title":"Metaverse Workshop - VIT-AP University","type":"event","url":"/event/metaverse-workshop-vit-ap-university/"},{"content":"Green Software Foundation The Green Software Foundation (GSF) is building a trusted ecosystem of people, standards, tooling and best practices for Green Software. GSF works with the vision of changing the culture of building software across the tech industry, so that sustainability becomes a core priority for software teams, just as important as performance, security, cost and accessibility. Read more here.\nGSF Global Summit 2022 The Green Software Foundation Global Summit 2022 is a series of free events organized by Green Software Foundation members all around the globe who are passionate about sustainable software development and maintenance.\nI will be talking at the summit on “XR and Sustainability” at the Thoughtworks Gurugram office.\n Highlights of the talk Understanding XR and sustainable development Sustainable usecases of XR Sustainability concerns of XR What’s next, and what we can do Register here This is a register-only event; please register here and join us on 10th Jun, 2022, 9:00 am - 04:00 pm IST.\nEvent Wrap Stay Tuned!\nThose who missed the live session may download the slides here: Download PDF.\n Happy learning!\n","id":86,"tag":"xr ; ar ; vr ; mr ; thoughtworks ; event ; speaker ; green software ; talk ; sustainability ; metaverse","title":"Speaker - XR and Sustainability - GSF Global Summit","type":"event","url":"/event/green-software-foundation-global-summit-2022/"},{"content":"Indian Achievers’ Forum The Indian Achievers’ Awards is India’s Most Coveted Award For Outstanding Achievements, organized by the Indian Achievers’ Forum.
It draws attention to the theme of “how successful Achievers can help social development in and around the country.” It provides examples of social entrepreneurs and successful models of society and business community partnership. It promotes achievements that inspire businesses and communities, and recognizes outstanding professional achievers and their contribution to nation building.\nIndian Achievers’ Award 2022 I would like to thank the Indian Achievers’ Forum for recognizing my professional achievements and social contribution to nation building, and presenting me the Indian Achievers’ Award in the Business Leader category.\n I love giving back to society by sharing my learnings, as I believe my learning matures only by sharing; I explore more of myself by giving it away to others.\n I dedicate this award to my organization thoughtworks and to the thoughtworkers who make a wonderful cultivation culture that inspires. In my earlier article, I talked about how thoughtworks has changed me and helped me bring purpose to what I do.\nIt would not have been possible without the continuous support and belief of my beloved life partner Ritu Beniwal, my sons Lakshya and Lovyansh, my parents, family and friends.\nMy Social Contributions Apart from the professional achievements at thoughtworks and nagarro over these years, I have also contributed to society in the following ways. Thanks to the student communities, universities, institutions, and my social network for inviting me to participate in these social causes.\n Webinars on emerging technologies and the ethical usage of technologies Guest lecturer or speaker at different universities and social communities in India Judge and mentor at different hackathons organised by Smart India Hackathon, Nasscom, and institutions across the nation Blogged a number of articles on technology, leadership and motivation; some articles were covered by Thoughtworks Insights, Economic Times, Tech.de, Analytics India Magazine Maintaining a publication of 100+ articles by communities, and managing this space thinkuldeep.com Guest writer at Dream Mentor, tealfeed.com and more Member of The VRAR Association Mentoring the trans community - a life-changing experience for me Startup advisor for young entrepreneurs Volunteer at non-profit organizations My Success Story The Indian Achievers’ Forum has covered my story at https://www.iafindia.com/mr-kuldeep-singh/\n “We explore more of us by giving it away to others” - Kuldeep Singh\n Introducing Kuldeep Singh: a technologist, speaker, mentor and knowledge guru with 17+ years of experience of bringing technology and arts to the core of businesses. He believes that we learn every day in all situations of life, good or bad, that learning matures only by sharing, and that we explore more of us by giving it away to others.\nHe has incubated IoT and AR/VR Centres of Excellence with a strong focus on building development practices and standards. He has developed innovative solutions that impact effectiveness and efficiency across domains, right from manufacturing to aviation, commodity trading and more. Kuldeep also invests time in evangelising concepts like the connected worker, remote expert and digital twin using smart glasses, IoT, blockchain and ARVR technologies within CXO circles.\nHe is currently associated with Thoughtworks India as a Principal Consultant and Head of XR Practice, India.
He previously worked with Nagarro, contributing there for 13 years in various roles from developer to Director, Technology. Apart from his professional achievements over these years, he has contributed to society by sharing his knowledge and experience with 25k+ people through 30+ events and 80+ articles on technology, leadership and motivation. Some of his articles have been covered by Thoughtworks Insights, Economic Times, Tech.de and Analytics India. He has contributed as a guest lecturer, speaker, judge and mentor at various universities and at initiatives by Smart India Hackathon, NASSCOM, AIEEE. He has had a life-changing mentoring experience with PeriFerry, a trans community. He is also an advisor for young entrepreneurs, and a volunteer at various non-profit initiatives.\nKuldeep holds a B.Tech (Hons) in Computer Science and Engineering from the National Institute of Technology, Kurukshetra. He completed his schooling in a small town in Rajasthan. He has led various student committee initiatives for the National Service Scheme, Fine Arts Club and Science fair. His good foundation in arts and science from childhood is making him a successful evangelist in the space he works in today.\nCurious by nature, Kuldeep is a tinkerer. For instance, he fixes most of his home electronics himself. Amazed by the way he juggles his gadgets, his kids Lakshya and Lovyansh think of him as their very own Tony Stark. His wife Ritu Beniwal, who is also an engineer, acts as his backbone and encourages him to follow his passion. He always follows the guidance from his teacher and grandfather that “We can do much more than what we think”.\n","id":87,"tag":"award ; xr ; mentor ; event ; university ; india ; nasscom ; judge ; hackathon ; students ; speaker ; webinar ; lecture","title":"Indian Achievers' Award 2022","type":"event","url":"/event/indian-achievers-awards-2022/"},{"content":"The Metaverse has become the newest macro-goal for tech industries to invest in, and to see the world from a new perspective with the help of AR and VR technologies.🌟\nA student club at VIT Vellore named the Mozilla Firefox Club invited me for a guest lecture in The Parallax View Episode 1: Bring the world to wherever you are\n 🔮✨Join us on 22nd April, 6PM to clarify all your doubts regarding AR and VR and the opportunities they offer you!✨🔮\nWe touched upon the following:\n What is a reality How XR is changing the reality Why now? How to start on it There were some interesting questions from the audience; I fully enjoyed the session.\nYou may download the slides here: Download PDF.\n ","id":88,"tag":"event ; vit ; students ; speaker ; lecture ; xr ; ar ; vr ; metaverse ; webinar ; reality","title":"Guest Lecture at VIT Vellore","type":"event","url":"/event/the_parallax_view_1_vit/"},{"content":"The Metaverse is the most talked about, understood and misunderstood topic these days. In most of my sessions people ask: what is the technology stack for the Metaverse? What should I learn to stay relevant? I will try to cover this in this article. In my earlier article I tried to define the Metaverse, and my friend Vinodh Arumugam also attempted the same here. It is defined so loosely that every other industry today is trying to define it in their own way to suit their narrative. This would end up creating many sub-metaverses with different sets of features, and would lead to many confusions and conflicts of interest.
Anyhow, this will evolve with time.\nThe current stage of the metaverse is like “a solution waiting for the problems to happen”: there are a lot of promises and claims without real, accepted usage. This situation will change soon with the evolution of technology and businesses, and more acceptance of the metaverse is in the making. An acceptable definition of the Metaverse will also evolve with time. We are still far from the true metaverse; today’s metaverse hype is just the evolution of technology - a set of technology pieces coming together - and it may continue like that. Here are the building blocks of the Metaverse. Evolution within these building blocks is driving metaverse adoption.\neXtended Reality (XR) The evolution of user experience and interactions is the key driver for XR adoption. Access to a camera and the internet is now a very basic thing for users; norms are changing, and people are accepting being live on camera and sharing more with the world. This is bringing many metaversal use-cases to life. There has been a lot of advancement in ARVR devices in recent years that is making the new user experience possible.\n Head Mounted Displays to Normal looking Smart Glasses — Earlier bulky ARVR head mounted devices are turning into lightweight and normal looking smart glasses with better capabilities than ever, e.g. Lenovo A3, Nreal and 250+ devices in line. Not just that, the form factor is now getting as small as contact lenses. Mobile phones are evolving as the wearable computing units powering the smart glasses/other wearables. Most head mounted devices today are tethered to a PC, a mobile phone, or some android based computing unit. This enables traditional development practices to be reused and improvised for XR development. Mobile XR — Mobile phones themselves are capable of running ARVR applications without the need of a smart glass. Google ARCore and Apple ARKit have made it possible for mobile phone developers to build XR applications for mobile phones. The Unity 3D Engine and Unity AR Foundation have scaled this to the next level, along with other 3D engines like Unreal, Vuforia and more. Smartphones are working to include advanced capabilities like depth sensing and semantic scene understanding out of the box. WebXR — Web based XR technology brings the XR experience directly to the web browser. There are many players making it possible with traditional WebGL, or using favourite javascript libraries like ThreeJS, BabylonJS or the likes of 8th Wall (a minimal sketch follows after this list). OpenXR — an open initiative for bringing multiple devices, makers, and communities closer to the characteristics of the Metaverse. OpenXR is being adopted by the Qualcomm Snapdragon XR platform, and Qualcomm is fuelling the metaverse hugely. More and more XR devices (Lenovo A3, NReal, and more) will be using the XR2 chipset from Qualcomm. SDKs from Meta and Microsoft Hololens also support OpenXR. Volumetric Displays or projectors are also growing, bringing XR experiences without the need to wear any display device. Smart Mirrors with cameras are coming up, which track a person’s movement and show contextual information right in front of the user. Heads up displays are also traditionally available as Augmented Reality usecases. Other peripheral devices like haptic gloves, trackpads and keyboards are all growing and evolving.
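To give a flavour of the web entry point from the list above, here is a minimal WebXR bootstrap in TypeScript. It uses only the standard navigator.xr API; the element id and the fallback behaviour are assumptions, and everything rendered inside the session would come from a library such as ThreeJS or BabylonJS.

```typescript
// Minimal WebXR bootstrap (sketch): check support, then start an immersive session.
// `navigator.xr` is typed loosely here to avoid pulling in extra type packages.
const xr = (navigator as any).xr;

async function enterAR(): Promise<void> {
  if (!xr || !(await xr.isSessionSupported("immersive-ar"))) {
    console.log("Immersive AR not available; fall back to an inline 3D view.");
    return;
  }
  // Per the spec, this must be called from a user gesture (e.g. a button click).
  const session = await xr.requestSession("immersive-ar", {
    optionalFeatures: ["hit-test", "dom-overlay"],
  });
  session.addEventListener("end", () => console.log("XR session ended"));
  // From here a renderer such as three.js takes over:
  // renderer.xr.enabled = true; renderer.xr.setSession(session);
}

document.getElementById("enter-ar-btn")?.addEventListener("click", enterAR);
```

Swapping "immersive-ar" for "immersive-vr" targets headsets through the same entry point, which is exactly what makes the browser such a low-friction way to try XR.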
Artificial Intelligence (AI) As we talked about in the last section, user interaction is evolving fast; XR devices and development kits are coming with out of the box artificially intelligent capabilities to understand environments semantically, as follows:\n Identity detection — XR capable devices are now capable of identifying humans by detecting the face, iris, skin texture, fingerprint etc. Authentication and authorisation in the metaverse will be a key thing to safeguard identity, where people will have multiple metaversal identities. These capabilities are important for metaversal development.\n Image and Object Recognition — Scanning the scene and understanding it semantically is now an important part of sharing the environment in the metaverse or creating a digital twin of it. Devices and SDKs are coming with built-in image and object recognition, and plane detection capability. Tracking the detected objects naturally is the important part, where they get occluded by other real or virtual objects out there. E.g. see the videos below.\n Snapdragon Spaces: Object Recognition and Tracking | Qualcomm Gesture detection — XR interactions have evolved; users can use their hands naturally with different hand gestures, without needing physical controllers (a toy sketch of the geometry follows after this list). Users can also control XR apps with eye tracking or facial expressions. Not just that, XR devices are now evolving to detect full body posture and track it well, so applications can react to full body movement. You might have seen a metaversal conference call where avatars have only the upper half of the body; that will change soon, as with AI, head worn devices can now also predict the movement of the bottom half of the body. Ref here.\n Voice recognition — Avatars or digital twins of us are replacing us in the metaverse, and they need to replicate/animate our voice, lip movement, and more. Voice recognition is at the core of it; not just that, XR devices now support voice based interactions to control the XR applications.\n Spatial tracking — Realtime tracking of the device with respect to the environment is a basic need for an XR application, and it is called Simultaneous Localisation And Mapping (SLAM). Most devices are now capable of tracking in 6DOF (Degrees of Freedom).\n Spatial Sound — Sound is very important for a spatial application, and it needs to reflect naturally based on the environment. Read more details here, and hear it out below.\n Image/Video Processing — Live video processing, deep learning and sensing the environment are becoming useful tech for metaverse usecases. See here RealityScan, and NVIDIA Instant NeRF. NVidia Instant NeRF Brain Computer Interface (BCI) — the next big bet in evolving interactions: controlling the metaverse just by thoughts. See some developments - Snap acquired NextMind, Galea from OpenBCI, or emotiv.
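As a toy illustration of the gesture detection item in the list above: hand tracking stacks typically expose 3D joint positions, and a gesture like a pinch reduces to simple geometry over those joints. The joint values below are hard-coded samples; real ones would come from a device SDK such as WebXR hand input.

```typescript
// Toy pinch-gesture detector over hand-joint positions (illustrative only).
type Vec3 = { x: number; y: number; z: number };

const PINCH_THRESHOLD_METERS = 0.02; // fingertips within ~2 cm count as a pinch

function dist(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

function isPinching(thumbTip: Vec3, indexTip: Vec3): boolean {
  return dist(thumbTip, indexTip) < PINCH_THRESHOLD_METERS;
}

// Sample frame: thumb and index tips almost touching -> pinch detected.
const thumb: Vec3 = { x: 0.10, y: 1.20, z: -0.30 };
const index: Vec3 = { x: 0.11, y: 1.21, z: -0.30 };
console.log(isPinching(thumb, index)); // true
```

Production systems add filtering, hysteresis and per-user calibration on top, but the heart of many "natural" interactions really is geometry of this kind.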
Cloud Technologies Cloud technologies have also evolved with time, acting as a catalyst for scaling emerging tech. There has always been a battle between private/public/hybrid cloud and a balancing game between these, and infra management and maintenance is always a key cost consideration in digital modernisation.\nWhile XR devices are getting more powerful and capable, that comes at costs like lower battery life, bigger form factor, overheating, more wiring etc., which limit them for longer usage. Cloud based XR is looked at as the scale solution for XR, and the following evolutions are helping adoption:\n Cloud Rendering — Rendering XR content puts a huge load on XR devices, and currently not all XR content can be rendered on these devices. A lot of effort is being put into optimising XR content for different devices. This restricts the use of photorealistic content in XR, which means a sub-optimal experience. XR is all about believing in the virtual content just as if it were real, and this breaks the experience. Device SDKs can provide an automatic way to switch between local vs cloud rendering (near or far cloud). Read here about Azure Remote Rendering and NVIDIA CloudXR. Cloud Anchors — Another key cloud requirement is mapping the world: finding the feature points in the environment and storing those feature points on the cloud. This is called Cloud Anchors. Read here about Google Cloud Anchors and Spatial Anchors from Microsoft Azure. Other general cloud usage continues here too: realtime data transfer using WebRTC/WebSocket, storing data on the cloud, integrating with other SaaS, and managing the XR devices with MDM solutions.\nInternet Of Things (IoT) XR devices are already acting as hubs of the IoT eco-system: they have a number of connected sensors, have internet, can do some level of local computing and can talk to connected cloud services. So XR devices are more evolved IoT devices in one sense. They can connect to other IoT devices over different IoT protocols (BLE, Zigbee, IR etc.) to understand the environment even better.\nIn the industrial context, a connected eco-system can bring more fruitful results - for example, bringing contextual information about a machine when the user is in its proximity, where the machine can be accurately identified by merging multiple identification methods like image/object recognition and IR/proximity data.\nNot just that, the IoT eco-system can handshake with XR devices and process or read data from them and behave accordingly. In a nutshell, XR devices can become the ears and eyes of traditional devices, and the Enterprise Metaverse is aiming for exactly that: to bring humans and machines/robots closer and let them work together.\nBlockchain The Metaverse is evolving in multiple directions, and self-sustainability is one of its characteristics. Content creation, and making content available for public consumption, is an important part of the metaverse’s survival and adaptation to general use. Currently there is no standard way to distribute or monetise digital content. You might ask why I am discussing this here in the blockchain section - how is blockchain linked to the metaverse?\nOne reason is NFT, an evolution of blockchain technology that claims to boost the content creation economy, where creators can monetise their assets. I am not very sure how long this will fly, but currently it is really hyped. A number of NFT based marketplaces are booming in this area, for example Cryptopunks, Hashmasks, SuperRare, Rarible, Sandbox, NFTically, WazirX.\nAnother reason is the characteristic of the metaverse that says it should not be controlled/owned centrally by one, but rather driven by communities/people. This leads to a de-centralised architecture for metaverse solutions, and blockchain’s distributed app architecture comes closest to it. So it is more the “de-centralisation” that is related to the metaverse than the blockchain itself. De-centralisation in the form of near remote computing is also a factor here.
Decentraland is building a marketplace on blockchain tech, and giants like Samsung are also adopting it.\nTraditional payment methods are evolving to include NFT/crypto based payment modes, and a whole new era of advertisement business for virtual real estate is coming into play. The current stage of blockchain based architecture may not suffice for the speed the metaverse needs, but the more people use it, the more it will evolve naturally. There is always a tussle between the forces of centralisation (government, enterprises, authorities, auditors) and decentralisation (community, people, open eco-systems etc.), and it will continue like that.\nnG Network We are talking about huge data transfers; network speed has to be really high to meet the metaverse’s need for real time environment sharing and live collaboration. Evolution to 5G/6G will bring more metaverse adoption. It is all about experience: if the immersion breaks, it would not look real, and it has to be close to the real world. There would be no storage on the device, as devices become very lightweight wearables; everything can be processed remotely and transferred via the super fast network.\nTelecom providers are also evolving - see MEC computing with 5G by Verizon.\nXR devices also need to go wireless, and solutions like LiFi are evolving to catch up with the game. Even the charging of these devices is evolving, with wireless chargers. The 5G rollout is in trials; I hope it will come faster for enterprise use-cases first and then get into the hands of consumers. So the enterprise metaverse will take advantage of 5G/6G faster than the consumer side, as enterprises can have controlled high speed networks.\nSatellite internet is also evolving and claims to provide a high speed, low latency network. Other developments are happening on quantum networks for a safer internet.\nConclusion Evolution is a natural process, and the same is true for technology and businesses. Technology emerges, gets tested and is conceptualised for real use. There are always some loopholes, constraints and limitations with emerging tech or businesses; they go through multiple evolutions before becoming ready for prime time. The same is true for the Metaverse, which is being driven by a set of technological evolutions. The more it gets explored, the more it will evolve, and we hope to reach the true stage of the metaverse in future.\nCharacteristics of the Metaverse\n Metaverse is Open and Independent — the Metaverse has to be open to use, open to build on top of, and must not be controlled by any one party. Metaverse is One and Interoperable — the Metaverse has to be one and only one, interoperable and integratable across software and hardware. Metaverse is for everyone and self-sustainable — the Metaverse should be accessible to everyone, and has to be self-sustainable, where people can buy and sell things and earn to survive there.
In a nutshell, the Metaverse is the next Internet — it is touted to be the next natural step in the evolution of the internet, with use cases to boot.\nThis article is originally published at XR Practices\n","id":89,"tag":"xr ; ar ; vr ; metaverse ; AI ; evolution ; general ; technology ; IOT ; blockchain ; NFT ; 5G","title":"Metaverse — a technological evolution","type":"post","url":"/post/metaverse-a-technological-evolution/"},{"content":"Awards Indian Achievers’ Award 2022 ☞ More Details\nRecognitions NITK Alumni Recognition ☞ Patent#11,988,832 B2 ☞ Patent#12,081,729 B2\nJudge at Smart India Hackathon ☞ More Details ☞ More events like this\nGuest Lecture at NIT Calicut ☞ More Details ☞ More events like this\nGuest Lecture at DesignersX ☞ More Details ☞ More events like this\nCoverage ","id":90,"tag":"me ; profile ; sih ; speaker ; event ; takeaways ; about ; awards ; recognitions ; judge ; lecture","title":"Recognitions","type":"about","url":"/about/recognitions/"},{"content":"BML Munjal University, a Hero Group initiative, is organizing the 5th edition of its hackathon, HackBMU 5.0. It is a hackathon focused on promoting innovation, diversity and networking among all future hackers. We will be hosting upcoming engineers and developers from all over the world to create mobile, web, and hardware hacks in an intense session.\nHackBMU seeks to provide a welcoming and supportive environment for all participants to develop their skills, regardless of their background. Last year and the year before, HackBMU garnered huge attention, with 30+ teams from around the country. This year, I will join the event as a Judge.\n17th Feb 2022 - 19th Feb 2022 Join the Discord server : https://discord.com/invite/qUUZBEMQMZ Register here - https://dare2compete.com/hackathon/hackbmu-50-bml-munjal-university-bmu-gurgaon-250263\nStay tuned!\n","id":91,"tag":"iot ; ai ; xr ; ar ; mentor ; judge ; event ; university ; hackathon","title":"Judge at HackBMU 5.0","type":"event","url":"/event/bmu-hack5/"},{"content":"IEEE WIE (Women In Engineering) at Bennett University - The Times Group hosted a panel discussion with students preparing for and participating in hackathons.\n I joined the panel with 🌟 Swati Anuj Arya (Chief Information Security Officer, Amazon Pay) and 🌟 Srishti Gupta (Senior Data Analyst at Ernst & Young) to share our experience with students.\nHighlights of the event! It was a great discussion. We started with questions on the prerequisites of a hackathon and the skills required to participate, where we talked about being a thinker, having a solution oriented mindset, the authenticity and practicality of ideas, and staying aligned to the problem statements. We also discussed tips for preparing for hackathons and building a capable team that can think, sell, and acquire skills on the go.\nParticipating in hackathons and coding competitions with a focus on exploration and learning outcomes may shape your career as an entrepreneur or a successful professional.\nWe talked about challenges from Covid, and how we may see them as opportunities to solve the challenges. “It’s Ok” to accept the new normal.
Modern problems need modern solutions.\nFull video here.\n I thank Karishma Kapur and the IEEE WIE BU team for inviting me to the session.\n","id":92,"tag":"event ; webinar ; students ; hackathon ; university ; mentor ; discussion ; ieee","title":"Your Hackathon Guide at IEEE WIE - BU - The Times Groups","type":"event","url":"/event/ieee-bu-wie-discussion/"},{"content":"National Institute of Technology Hamirpur is organizing Tech-Nexus, an online technical carnival, as an initiative of the Institute Innovation Council, MHRD. The essence of Tech-Nexus is to unite technological talents across the nation and give them a platform to express their skills and ideas. This flagship event will help people showcase their unique projects and trends in the technological field, as well as the challenges encountered and solutions adopted while developing the projects.\nTech-Nexus has multiple events that focus on discovering the greatest ideas of students from diverse areas.\nI will be participating in one of the hackathons, named Vividhta, as a Judge.\nAlso find my guest lecture on getting into the metaverse here.\nHighlights of the event! It was a pleasure to meet the young innovators and discuss ideas to solve the problems society is facing. Amazed to see usecases of IoT, data science, CV, blockchain, ARVR, and more.\nSome of the interesting ideas are listed here:\n Smart farming to help farmers produce more and grow the right crop in the right environment, and to protect crops from animals, locusts and more Detecting telephone towers and predicting safe places for households and authorities, and systems that prevent gas leakage AI in healthcare NFT aggregators, and blockchain based identity and verification systems A crowd software testing concept based on blockchain ","id":93,"tag":"event ; nit ; students ; judge ; xr ; ar ; vr ; metaverse ; hackathon ; university ; blockchain","title":"Judge at Tech-Nexus Hackathon","type":"event","url":"/event/tech-nexus-2021/"},{"content":"The Metaverse is the next Internet - that is how we summarized it in the previous article, where we discussed its characteristics and building blocks. In this article we will discuss some prominent use cases of the metaverse. We understand the metaverse to be a trillion dollar opportunity over the next few years. It is growing at a really fast speed, but without effective rules and regulations and without effective standards and practices, and we will also touch upon this concerning part of the metaverse.\npicture credit : www.medicaldaily.com, www.nicepng.com\nUse-cases of the Metaverse The Internet today has already created some metaverse: when we connect to the internet, we are knowingly and unknowingly becoming live to the world. The apps on our phones are active on the internet, ready to talk to the world. Tomorrow’s internet will be more immersive - we will actually go into the internet.\nHere are a few use cases of the metaverse:\nLocation Agnostic Work Environment Work location is becoming irrelevant for many businesses, and the need for tools and technology that can make the work environment location-free is arising. Covid19 has pushed this heavily, and businesses that could not achieve this in some form have lost to the pandemic. Today, many enterprises are talking about different modes of work: work from home, hybrid, or work from anywhere. The Metaverse will smoothen this transition for enterprises and consumers. Working from the metaverse will be a common thing.
Just like Microsoft Mesh or Meta's Horizon Worlds do.\nHere are a few key objectives\n Remote Expert Collaboration — Experts don't need to travel to the exact location. Be in your comfort — In the metaverse, my avatar can replicate me while working, and I don't always need to be on camera. Being always on camera for long hours is stressful. Everything Digital — The metaverse will have the potential to hold digital copies of almost everything, which means I don't need to carry a lot of hardware, devices or instruments all the time to use them. I would be able to work on digital twins with almost the same efficiency. Infinite and Interoperable Space — The metaverse will provide the opportunity to infinitely scale the work environment, take it anywhere, and keep it interoperable across devices (mobile/PC/smart glasses and more), and, more importantly, to customize it as per need at minimal cost. 3D for 3D — Traditionally, all the tools used for designing 3D products are 2D. We were taught in Engineering Design how to represent 3D on 2D paper or screens. This is all going to change in the metaverse. 3D objects will be designed in 3D; that's more natural, and without any material wastage. It will make things faster to market. Eco-friendly — Yes, the metaversal work environment will be much more friendly to the environment as well as to the pocket. It will run at minimal operational cost, and with minimal waste. Skilling and Reskilling Industry has to tackle the skilled-workforce shortage and onboard new people onto new skills. As per the Global Talent Crunch report, 10,000 baby boomers will reach retirement age every day for the next 19 years, and covid19 is making them retire even faster. By 2023, 75% of the global workforce will be millennials, and traditional methods of onboarding and training are outdated for these millennials. On top of that, enterprise complexity is increasing, and the need to co-work with machines is arising. Workforce safety compliance requirements are making things complex.\nOnboarding and training, with remote collaboration in the metaverse, can solve some of the above challenges and improve efficiency. The future of work and the workforce will be more metaverse-ready.\n Managing enterprise complexity with XR Product design and development are extremely complex, so bringing products to market takes a lot of time. Developing…\n Who needs training, and for how many hours; auto-suggestions; contextual on-the-job guidance; co-working with machines; remotely controlling a machine: all will be possible in an AI-empowered metaverse. Humans will be further empowered by technology to make better and faster decisions. They will be able to record their work, rewind it, and make corrections to what they have done.\nAssets and Marketplaces One characteristic of the metaverse is that it is self-sustainable: it provides its own way to fund creators and co-creators. A new economy of creators and co-creators is growing. Co-creation is not new; the whole social media market is built on it. YouTube, Snap AR Lenses, Stories, and Insta Reels are all examples of co-creation.\nThe metaverse claims to fund content creators transparently, with NFTs (Non-Fungible Tokens). E.g. the original GIF of the Nyan Cat below was sold for $600K.\n NFT technology allows metaverse users to own digital assets and trade them on marketplaces; the original creator gets a royalty on every transaction. 
There are a number of marketplaces booming in this area, for example Cryptopunks, Hashmasks, SuperRare, Rarible, Sandbox, NFTically, and WazirX.\nThe use case for NFTs would be that confidential/copyrighted assets can be made open on the marketplace; as a user I don't need to fake them, and can directly use original assets with the originator\u0026rsquo;s permission. For example, if I want to bring a Ducati bike 3D model into my game, I can get it directly from Ducati on the marketplace, instead of searching for it. So the metaverse will not only benefit creators but also make digital content more accessible to users.\nReal Estate and Event Management The metaverse will be a self-sustainable marketplace for digital content. Everything in the physical world can have its digital version. Companies like Niantic are building a whole 3D map to augment the physical world. Just like physical things, virtual things will also be traded. So people will buy and sell virtual land in the metaverse, build houses and buildings, sell or rent them, or earn from advertisements on them. See it happening here.\nJust like physical properties, people will own properties in the metaverse; similarly, events and concerts will be organised in the metaverse. There are a number of VR events happening around the world, from convocations and award shows to virtual conferences and concerts.\nPicture credit : https://www.nicepng.com/\nAdvertisement in the metaverse will be a very big opportunity, and can make everything free of cost and open for people to join; people who join or refer others could even be paid for attending the events.\nMeet and Greet The metaverse is all about human connections, and meet and greet is a basic use case of the metaverse, within the capabilities of the real world and beyond them. The metaverse can bring out the real power of being human.\n Here is how we will Meet and Greet in the Metaverse —\n Meeting the ones who are no longer with us. Metaversal abilities for the specially abled. Multiple social identities — people will have multiple social identities in the metaverse. It DOSEs when meeting — Dopamine, Oxytocin, Serotonin, and Endorphins are the chemicals in the brain, and how we feel when we meet and remember is based on the ratio of these chemicals. It is observed that when people meet in VR, similar chemical reactions happen in the brain. This is a very big thing. Metaversal therapy — enhanced from VRT, and more will happen; maybe even the metaverse as a drug. We have discussed key use cases of the metaverse; let's now get into the concerning part of the metaverse.\nConcerning part of Metaverse Just like any technological evolution, there is always some concerning part. The metaverse does have some concerns, as it will be exposing us and our information more to the world. The metaverse is treated as a dystopia by many.\nProtecting Identity In the metaverse, people may want to keep multiple social identities, but safeguarding metaversal identity would be the difficult part without strong laws and regulations. At this stage it is not very difficult to impersonate others. Our future identity is at risk. Think of the following\n What if someone represents you in your office or society, without your knowledge? What if someone accesses your property or assets without your knowledge? Social media trials (metaversal trials in future) can cause irreversible impact on social reputation, health and wealth by the time the truth prevails. 
It would be difficult to differentiate the real from the fake. We may need a Universal Virtual Identity, and this needs attention from governments, institutions, legal authorities, enterprises and people, coming together to safeguard our future identity.\nProtecting Privacy Maintaining privacy would be another big concern in the metaverse. We will be exposing more information, and we will be more vulnerable to exploits. We will always be under surveillance when everyone is roaming around with smart glasses or smart contact lenses with cameras on. Most vehicles will have cameras.\nNew types of restrictions will be imposed; for example, you may not be allowed to enter a certain area with glasses, or there will be detectors that detect camera devices. Physical places may go away, a lot of businesses will be impacted, and people will fear coming out of their homes.\nWhat will the legal options be if privacy is compromised in the metaverse? The difficult part would be how to define compromised privacy; will it be acceptable in the physical justice system?\nMind Blowing Impacts The metaverse is an immersive experience, and we will be going into the internet, into people's imagination and minds. We talked about DOSE (Dopamine, Oxytocin, Serotonin, and Endorphins), and how their chemical equation can impact our feelings. How about the metaverse changing that chemical equation? Think of drug addiction; that too is a result of some chemical balance in the brain. Similar addiction has been observed with games. Can the metaverse be a drug? Look at this Forbes article, Legal Heroin.\nSo the metaverse has the capability to influence decisions; if it goes into the wrong hands, the impact will be huge. If we get compromised, what will the legal options be? That is another big concern, explained in the next section.\nLaw and Order The metaverse is growing without meaningful rules and regulations. What is right and what is wrong in the metaverse is not known. A crime in the real world may not be a crime in the metaverse. For example, killing someone, cheating, or possessing weapons is not a crime in the metaverse. Can we deal with crime in the metaverse through the physical legal system?\n Will we have a metaverse constitution? Will we have a metaverse court? How do we identify crime; do we have such tools and tech? Who will maintain law and order? Will there be virtual police? What would the punishments be? Block people, or put them in virtual jail? I don't have answers to all of these at this stage, but what we can do is get ready.\nGetting ready for metaverse We have defined the metaverse, and discussed its use cases and concerns. Some of it might look like just hype, or scary, at this stage; only time will tell what will be real and what will not. I believe that when there is a problem, there is a solution; innovation happens when there is a need to innovate.\nIn the long run, not using the metaverse or staying away from it will not be an option. No matter how scary it looks, it is coming closer than we think.\nWhat we can do\n Create awareness — start using it Get trained — build and co-create on it Adopt ethically — find the right use cases that can solve real-world problems Contribute — build practices and standards, and share them with communities. Join hands with enterprises and governments, institutions, systems and people. 
Most importantly, what we can do is play our part: promote the ethical use of the metaverse, and educate about its unethical use.\nKeep sharing, keep contributing.\nThis article is originally published at XR Practices\n","id":94,"tag":"xr ; ar ; vr ; metaverse ; google ; android ; general ; technology ; facebook ; meta ; blockchain ; NFT ; IOT","title":"Getting into the Metaverse — Part 2","type":"post","url":"/post/getting-into-the-metaverse-part-2/"},{"content":"Metaverse is the most enquired word since Facebook renamed itself to “Meta” to align with its metaversal vision. However, the term “metaverse” was coined three decades ago by Neal Stephenson, in his 1992 science-fiction novel “Snow Crash,” which envisions a virtual reality-based successor to the internet. In the novel, people use digital avatars of themselves to explore the online world, often as a way of escaping a dystopian reality.\nToday, the metaverse is considered an $800 billion opportunity with double-digit growth over the next 5 years, where industry is trying to get the most out of it by co-creating with communities, and focusing on its cost-effective universal acceptance. The metaverse is supposed to be driven by technology and business related to content, AI, ARVR, IoT, and Blockchain. However, the definition of the metaverse is not very clear; it is very loosely defined by individuals and enterprises as per their interests, purposes, use cases and understanding.\nIn this article, we will try to understand the metaverse in more detail and attempt to define it and its characteristics. We cover the metaverse use cases and concerns in my next article.\nDefining Metaverse The roots of the metaverse have always been in science fiction, gaming and entertainment, so the definition would also converge from there. Let's look at the Oasis of Ready Player One; it could be one of its kind.\n People come to Oasis for all the things they can do, because of all the things they can be\n The dictionary meaning of the word “Meta” is “beyond”; a universe beyond this real life could be the metaverse. Just like what Second Life defines\n If you are traveling beyond this life, where weight of tradition cannot follow…\n There is an endless list of such games, e.g. Fortnite, Decentraland, The Sandbox, and Moonieverse, where community-based co-creation and play-to-earn kinds of platforms are being built; these are some sort of mini-metaverses.\nThe metaverse can't just be gaming and entertainment; it's going beyond that. Meta and Microsoft define it as new ways of collaboration and human connection, and of working in the metaverse.\n Niantic, best known for bringing augmented reality to scale with games like Pokémon Go, defines the metaverse as a better reality of the same universe and calls it the real-world metaverse. Niantic aims to build 3D maps and augment the whole world with their Lightship platform for developer and creator communities. Google CEO Sundar Pichai looks at the metaverse as immersive computing in AR. Google is a pioneer in ARVR (XR); most XR devices today run on Google's Android OS, and now they are building an operating system for AR, which may be a key enabler for metaverse devices.\nAfter looking at multiple metaverse definitions from different enterprises, we can say there are a few common aspects, such as co-creation, openness, currency, self-sustainability, communities, sharing, collaboration and network. Before getting into a more formal definition of the metaverse, let's first understand two words: “Metadata” and “Internet”. 
Metadata is the “other data”, the “extra data”, that can refer to something: for example, email ID, phone number, address, interests, what we like and dislike, things we own, places we go; all of these are metadata about us. The way metadata is connected with us, let's call it the Intranet of Metadata. Similarly, everyone has metadata, and the way we are all connected is the Internet of Metadata. Metaverse and its characteristics Now that we understand metadata and the internet, the metaverse can be defined as follows\n “Metaverse is a universe built up on the network of metadata around us and about us. In a universe where the real and fake blur together, photo-realistic avatars are making use of virtual identities, virtual intellectual properties / assets, virtual societies and their own virtual currencies”\n Here are the characteristics of the Metaverse Metaverse is Open and Independent — The metaverse has to be open to use, open to build on top of, and must not be controlled by anyone. Metaverse is One and Interoperable — The metaverse has to be one and only one, interoperable and integrable across software and hardware. Metaverse is for everyone and self-sustainable — The metaverse should be accessible to everyone, and has to be self-sustainable, where people can buy and sell things and earn enough to survive there. In a nutshell, the Metaverse is the next Internet - it is touted to be the next natural step in the evolution of the internet, with use cases to boot.\nAI, XR, IoT, Blockchain, Cloud, and 5G/6G will be the building blocks of the Metaverse.\nGetting closer to Metaverse We are getting closer to the metaverse; it is the evolution of the internet. Today's “connecting to the internet” is becoming “joining the metaverse”. Evolution is a natural process, and we will be getting into the internet, no matter how good or bad we feel about it.\nThe metaverse is not only about the few big enterprises we discussed in the earlier sections of this article; it is much bigger than all of them combined. Hundreds of XR devices are already here, and many are in the making to catch the metaversal wave.\nA number of developments and partnerships are happening to boost adoption faster; for example, the Qualcomm Snapdragon XR platform will boost the development of normal-looking smart glasses, joining hands with enterprises like Lenovo, Motorola, Niantic, Holoone and more.\nJob markets and hiring situations are changing. There has been a huge surge in demand for AR/VR engineers in recent years; however, there is still a gap between the skills required for the metaverse and the skills that already exist. Metaversal development needs the skills and creativity of an artist, a scientist and a mathematician, plus physics, sound and lights, and the developer has to be a movie director too.\nNew job titles are coming, such as Chief Metaverse Officer (to define the metaverse strategy for products, replacing the existing social media strategist's job), Metaverse Security/Safety Officer and Metaverse Planner. The storyteller role will be more common in designing experiences for the metaverse. The content creator/freelancer economy will play a big role and will make people self-sustainable in the metaverse market. Play2Earn is getting popularized these days, and in future people might earn just by being in the metaverse.\nWe are going into the future faster than we think: in the next 5 years, we will see what we have not seen in the last 10. At this speed, we are getting into the metaverse, with new use cases to solve and new concerns to deal with. 
In the next article, I discuss the key use cases and concerns of the metaverse, and what we can do about them.\nThis article is originally published at XR Practices\n","id":95,"tag":"xr ; ar ; vr ; metaverse ; google ; android ; general ; technology ; facebook ; meta ; blockchain ; NFT ; IOT","title":"Getting into the Metaverse — Part 1","type":"post","url":"/post/getting-into-the-metaverse-part-1/"},{"content":"National Institute of Technology Hamirpur is organizing Tech-Nexus, an online technical carnival, as an initiative of the Institute Innovation Council, MHRD. The essence of Tech-Nexus is to unite all the technological talents across the nation and give them a platform to express their skills and ideas. This flagship event will help people showcase their unique projects and trends in the technological field, as well as the challenges encountered and solutions adopted while developing the projects.\nTech-Nexus is a two-day event that will focus on discovering the greatest ideas of students from diverse areas. I will be speaking on \u0026ldquo;Getting into the Metaverse\u0026rdquo; at the event.\n We will touch upon the metaverse step by step.\n Metaverse - the definition part Metaverse - the hype part Metaverse - the opportunities part Metaverse - the concerning part Metaverse - the getting ready part Come join me at 5PM IST, Sunday, 19 Dec 2021. It will be live on the Tech-Nexus YouTube Channel\nVideo link here Highlights\n 00:01:00 - Introduction 00:04:04 - \u0026ldquo;Metadata\u0026rdquo; and \u0026ldquo;Internet\u0026rdquo; 00:09:05 - Oasis of Ready Player One - is that the Metaverse? 00:14:10 - Is the Metaverse about collaboration with avatars? 00:20:00 - Is the real-world metaverse the real metaverse? 00:23:10 - Metaverse definition and characteristics 00:27:06 - Metaverse building blocks 00:38:00 - Metaverse facts and figures 00:43:00 - Impact on hiring and jobs 00:50:30 - Metaverse use cases - location-agnostic work environments 00:57:00 Tackling workforce shortage 01:01:00 Digital economy 01:06:00 Real estate and event management 01:07:00 Meet and Greet - beyond the physical reality 01:15:00 - Concerns - Protecting identity and privacy 01:17:00 - Mind-blowing impacts on humans 01:20:00 - Law and order situation in the metaverse 01:23:10 - Getting ready - Play your part ","id":96,"tag":"event ; nit ; students ; speaker ; lecture ; xr ; ar ; vr ; metaverse ; webinar ; nft ; blockchain","title":"Guest Lecture on 'Metaverse' at NIT Hamirpur","type":"event","url":"/event/getting_into_metaverse_guest_lecture_nit_hamirpur/"},{"content":"Today's websites are designed first for mobile and then for other platforms, and it is becoming important to bring mobile XR functionality into websites. Google's Scene Viewer is bringing WebXR to mass adoption. Theoretically, any 3D model can be tried in your space, and you can easily build an AR experience for your website.\neCommerce has already transformed into mCommerce, and is now taking steps into xCommerce (XR Commerce). A 3D model viewer is a very common requirement across different domains, and it will be a basic requirement for developing for the metaverse, as well as for any digital business in future. The most explored and talked-about feature is “Try before buy”, or building an AR experience while visualizing products.\nI have explained WebXR — the new Web earlier, and how we can build XR experiences directly on our website using additional tools such as babylon.js, AFrame.js, threejs, or directly in Unity. 
In this article, I will explain an easier way to build WebXR with almost no code. In the graphic below, you see my son with a tiger in my room. This experience is available out of the box from Google on most Android devices today.\nThis experience is easily extendable to any product. At the end of this article, I will also explain how we can customize this experience to your needs.\nGoogle Scene Viewer Google Scene Viewer is an immersive viewer that enables 3D and AR experiences from your website or Android app. It lets users of Android mobile devices easily preview, place, view, and interact with web-hosted 3D models in their environment. Scene Viewer can be invoked from a web link in a web browser or from an Android app, or it can be embedded in HTML with the special tag \u0026lt;model-viewer\u0026gt;.\nGoogle Model Editor Google has a free tool called Model Editor where you can create, test and validate models. Here is the link . It allows customising properties, and produces embeddable code.\n\u0026lt;model-viewer bounds=\u0026#34;tight\u0026#34; src=\u0026#34;Astronaut.glb\u0026#34; ar ar-modes=\u0026#34;webxr scene-viewer quick-look\u0026#34; camera-controls environment-image=\u0026#34;neutral\u0026#34; poster=\u0026#34;poster.webp\u0026#34; shadow-intensity=\u0026#34;1\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;progress-bar hide\u0026#34; slot=\u0026#34;progress-bar\u0026#34;\u0026gt; \u0026lt;div class=\u0026#34;update-bar\u0026#34;\u0026gt;\u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;button slot=\u0026#34;ar-button\u0026#34; id=\u0026#34;ar-button\u0026#34;\u0026gt; View in your space \u0026lt;/button\u0026gt; \u0026lt;div id=\u0026#34;ar-prompt\u0026#34;\u0026gt; \u0026lt;img src=\u0026#34;ar_hand_prompt.png\u0026#34;\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/model-viewer\u0026gt; I took the downloaded scene and made it public here; you can browse it at the URL : https://thinkuldeep.com/modelviewer/ in your Android phone's browser. You can rotate the model, and if you click on “View in your space”, it will ask for camera permission and bring the model into your space. You can interact with it. The whole experience would not take more than 5 minutes.\nYou may ask where Scene Viewer comes into the picture. The above example uses embedded mode, and the Android browser interprets it in the native Scene Viewer, based on the ar-modes provided on the HTML tag \u0026lt;model-viewer\u0026gt;. Let's explore other ways of using Scene Viewer in the next section.\nLaunching Scene Viewer with a URL We can open any Android app, or an intent within the app, using a mapped URL (this is called deep linking), or a direct URL with the intent scheme (intent://).\nIn the above example, we have published a 3D model Astronaut.glb, and here is the URL for launching Scene Viewer.\nintent://arvr.google.com/scene-viewer/1.0?file=https://thinkuldeep.com/modelviewer/Astronaut.glb\u0026amp;mode=ar_only#Intent;scheme=https;package=com.google.ar.core;action=android.intent.action.VIEW;S.browser_fallback_url=https://developers.google.com/ar;end; Let's add a link to this URL on the same index.html page in the above example.\n\u0026lt;a href = “URL”\u0026gt; Launch Directly in Scene Viewer \u0026lt;/a\u0026gt; Browse the URL https://thinkuldeep.com/modelviewer/ again, and you will see a link “Launch Directly in Scene Viewer” at the top; clicking on it opens Scene Viewer as described below. It asks you to scan the space, and once it detects a plane, it brings the 3D model into the space. 
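Before moving on: the same hand-off can also be triggered from a native app, not just from a web page. Below is a minimal sketch (my illustration, not part of the original flow) for a Unity Android app. The class name SceneViewerLauncher is hypothetical, and it assumes an ARCore-capable device on which the https form of the Scene Viewer deep link resolves to Scene Viewer, just as the intent:// link above does from the browser.

using UnityEngine;

// Illustrative helper (hypothetical name): opens a web-hosted GLB in Google
// Scene Viewer from a Unity Android app via the Scene Viewer deep link.
public static class SceneViewerLauncher
{
    public static void ViewInYourSpace(string modelUrl)
    {
        // Same parameters as the browser link above; mode=ar_only skips the 3D preview.
        string url = "https://arvr.google.com/scene-viewer/1.0?file=" + modelUrl + "&mode=ar_only";
        Application.OpenURL(url); // Android resolves this deep link to Scene Viewer
    }
}

For the Astronaut model above, calling SceneViewerLauncher.ViewInYourSpace("https://thinkuldeep.com/modelviewer/Astronaut.glb") should start the same "scan your space" flow.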
So, to make it work, we just need a publicly accessible URL for the 3D model; that means we can offer an XR experience for all our products if their 3D models are available.\nBuilding Custom Model Viewer Google Scene Viewer is proprietary to Google, and we cannot customize it beyond the configurability Google supports. In this section, we will discuss some easy options for building a custom model viewer.\nAs described in the last section, we can always open any app or intent from the Android browser with an intent URL. So, for each model on our website, we can pass the file path as a parameter in the intent URL, and the linked app can intercept the file path and load (or download, if needed) the model as per its business requirement.\nThis would work for the websites in our control, but what about websites that follow Google's Scene Viewer approach? We cannot ask everyone to change their URLs to suit our needs.\nIntercept the URL The only way left is an Android WebView: we need to intercept the URL intent://arvr.google.com/scene-viewer, and if we find it, extract the file parameter from the URL.\n... myWebView = new WebView(this); myWebView.setWebViewClient(new WebViewClient() { public boolean shouldOverrideUrlLoading(WebView view, String url) { Uri uri = Uri.parse(url); if (url.startsWith(\u0026#34;intent://arvr.google.com/scene-viewer\u0026#34;)) { // a Scene Viewer intent: extract the model URL and hand it to the 3D app String filePath = uri.getQueryParameter(\u0026#34;file\u0026#34;); responseCallback.OnSuccess(filePath); return true; } view.loadUrl(url); // any other URL loads normally return true; } }); ... A detailed approach for intercepting the URL is defined in this article. A responseCallback can take the file path to the originating 3D app, such as Unity.\nDownload the model Once we get the path in Unity, we can download the model and store it locally.\nprivate IEnumerator GetFileRequest(string url, Action\u0026lt;UnityWebRequest\u0026gt; callback) { // save the model locally, named from the URL filePath = Path.Combine(streamingAssetsPath, GetFileNameFromUrl(url)); using UnityWebRequest request = UnityWebRequest.Get(url); request.downloadHandler = new DownloadHandlerFile(filePath); var operation = request.SendWebRequest(); while (!operation.isDone) { yield return null; } callback(request); } Show the downloaded model Once the download is complete, initialize the model. We can use GLTFUtility as the GLTF importer.\nprivate void DownloadModel(string url) { StartCoroutine(GetFileRequest(url, request =\u0026gt; { if (request.isNetworkError || request.isHttpError) { Debug.Log($\u0026#34;{request.error} :{request.downloadHandler.text}\u0026#34;); } else { // import the downloaded GLTF/GLB and attach it to the scene model = Importer.LoadFromFile(filePath); model.transform.SetParent(parent); //set the parent } })); } This way, we can view any 3D model published using Google's Scene Viewer approach in our custom model viewer.\nConclusion I have presented some easy ways to enable your website with XR features such as “Try in your space”; I recommend the first, Google-suggested approach. The second approach, with a custom model viewer, needs to be validated on security aspects, and there might be better options in future with OpenXR. For completeness, a sketch of how the intercept, download and load steps might be wired together follows below. 
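This is a minimal glue sketch of mine, not code from the article: it assumes Unity 2020.2+ (for UnityWebRequest.Result; on older versions, check isNetworkError/isHttpError as in the snippet above) and the Siccity GLTFUtility package, and the component and method names are hypothetical.

using System;
using System.Collections;
using System.IO;
using Siccity.GLTFUtility;
using UnityEngine;
using UnityEngine.Networking;

// Hypothetical glue component: receives the file URL intercepted by the Android
// WebView (via responseCallback) and shows the model under a parent transform.
public class ModelViewerController : MonoBehaviour
{
    [SerializeField] private Transform modelParent;

    // Wire this to responseCallback.OnSuccess(...) from the WebView intercept.
    public void OnModelUrlReceived(string url)
    {
        StartCoroutine(DownloadAndShow(url));
    }

    private IEnumerator DownloadAndShow(string url)
    {
        // Save the .glb in a writable folder, named from the URL.
        string filePath = Path.Combine(Application.persistentDataPath,
            Path.GetFileName(new Uri(url).LocalPath));
        using (UnityWebRequest request = UnityWebRequest.Get(url))
        {
            request.downloadHandler = new DownloadHandlerFile(filePath);
            yield return request.SendWebRequest();
            if (request.result != UnityWebRequest.Result.Success)
            {
                Debug.LogError(request.error);
                yield break;
            }
        }
        // Import the downloaded GLTF/GLB and attach it to the scene.
        GameObject model = Importer.LoadFromFile(filePath);
        model.transform.SetParent(modelParent, false);
    }
}

The same component works whether the URL comes from the WebView intercept or from a plain deep link into the app.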
I hope Google keeps evolving these approaches, and that we will see more generic and open ways to meet the needs of the metaverse.\nThis article is originally published at Medium\n","id":97,"tag":"xr ; ar ; vr ; unity ; webxr ; immersive-web ; metaverse ; google ; android ; spatial web ; general ; tutorial ; technology","title":"Try everything in your space - WebXR","type":"post","url":"/post/try-everything-in-your-space-webxr/"},{"content":"The MasterClass This October, at Thoughtworks India, we're celebrating our 20th anniversary! That's 20 years of excellence in technology!\n All through the month we're hosting 10 tech talks, delivered by some of our best-and-brightest technologists in India. Each of them will be taking you through different topics, ranging from Distributed Delivery and XR, to Security and Navigating Ambiguity.\nI am sharing the stage with Raju Kandaswamy, a robotics enthusiast, who is taking us through \u0026ldquo;Using XR in building autonomous vehicles\u0026rdquo;.\nHighlights of the Masterclass XR is a technology that provides an awesome user experience, and the tools used to build XR are also capable of providing the simulation environments needed to train autonomous vehicles without building the actual hardware and a mock environment. Setting up environments and parameterizing their content can be automated. In this master class we explain what XR is and look at 2 such use cases\n Training a car to do autonomous parking using simulation Training a robot arm to pick \u0026amp; place objects using simulation Register here This is a registration-only event; please register here and join us on Wednesday| October 20 | 3pm - 5pm IST\nVideo Recording \u0026nbsp; ","id":98,"tag":"xr ; ar ; vr ; mr ; simulation ; thoughtworks ; event ; speaker ; talk ; digital-twins","title":"Speaker - Thoughtworks MasterClass-2021","type":"event","url":"/event/thougthworks-masterclasses-2021/"},{"content":"Modern enterprises must manage multiple layers of complexity across thousands of assets, workflows, and processes — and there's near-zero tolerance for issues that interrupt operations, damage profitability, or put workers' safety at risk.\nThese layers of enterprise complexity create significant challenges in four key areas:\n Design and development are often slow and expensive An inability to demo and customize complex products can make them difficult to sell Users need continuously updated training to operate products effectively and safely A shortage of on-the-spot expertise endangers employee safety and increases downtime risk and cost However, many enterprises are beginning to use extended reality (XR) technologies to overcome these challenges.\nWhat is XR?\n Extended reality (XR) is an umbrella term used to describe technologies that immerse a user in a virtual world or augment the physical world with virtual content — like virtual reality (VR), augmented reality (AR), and mixed reality (MR).\n XR technology adoption is growing fast, especially for productivity use cases around user training and industrial maintenance. In fact, the global market for AR and VR is expected to reach $571 billion by 2025.\nAccelerating time to market Product design and development are extremely complex, so bringing products to market takes a lot of time. Developing autonomous vehicle technology, for example, requires large amounts of computing time and resources. 
And testing the technology in physical environments can be prohibitively expensive or dangerous.\nUsing XR, enterprises can not only accelerate product design and development but also reduce costs. Testing autonomous vehicle technology in repeatable virtual environments means fewer crashes of real cars during testing, less LIDAR equipment needed, and less material wastage. Once optimal results are achieved, the autonomous technology can be further tested on real vehicles.\nXR can also help enterprises create and iterate prototypes faster, implementing user feedback in real time to deliver tailored products that meet customer needs.\nMaking sales easier When you're selling complex products, it's not always possible to let customers “try before you buy”. For example, remote medical operating theater equipment is complex, expensive and typically not available for demos. And even if you can perform a demo, it requires scheduling, travel, and setup that increase costs on the vendor and customer sides.\nCreating a VR experience, however, reduces sales complexity and improves conversion rates by allowing customers to visualize all possible options in real time. This saves costs on the customer side and helps customers make better-informed decisions by validating their assumptions and concerns. And on the vendor side, it creates more opportunities to cross-sell accessories, supplies, and supporting products.\nImproving training to reduce costs and revenue loss Traditional training with books, rote memorization, crude mockups, and PowerPoints can't keep up with enterprise complexity. Even if training is available digitally, the materials are often out of date because products are constantly changing. People with expertise have limited time to train others, and there are constant interruptions to work as people get directions or cross-check diagrams and schematics, increasing costs and impeding revenue generation.\nWith XR technologies, on the other hand, instructions can be overlaid on the job, helping trainees learn faster and make better decisions. For example, trainees using AR at a Boeing assembly facility completed tasks in 35% less time than trainees using traditional 2-D drawings and documentation. Plus, with AR tools, training can be monitored, tracked, and repeated faster and more cost-effectively.\nDelivering on-demand expertise to improve safety and reduce downtime A shortage of skilled workers has a huge impact on productivity and safety, with errors leading to expensive downtime and dangerous safety incidents that can bring operations to a halt. And as employees with expertise retire, it's increasingly difficult and expensive to replace the lost knowledge.\nBut with XR technologies, enterprises are finding new ways to help employees navigate operational complexity and reduce cost and risk. GE, for example, saved $1.6 billion by using digital twins for remote turbine monitoring, and many other utilities are already using AR to improve employee safety and reduce errors and downtime.\nThree steps to managing enterprise complexity Enterprise complexity can't be eliminated, but it can be managed with XR technologies. Some organizations have already come a long way on their XR journey; others are just starting out.\nThe goal for all enterprises should be to move away from being an “unextended digital business” purely focused on developing and improving core products and toward creating an XR immersive ecosystem. 
To get there, enterprises must pass through three stages of XR maturity (see the XR maturation cycle diagram in the original article).\nThe first step is to create SDKs that support the creation of new services enabled by XR technologies, and then test these services with users. From there, the next step is to integrate XR with existing services and deliver immersive experiences.\nAnd finally, at the most advanced maturity level, enterprises should invest in building an XR immersive ecosystem that allows them to monetize their services, build turnkey solutions, and create upsell and cross-sell opportunities. This “extended digital business” is supported by a business collective of partners and suppliers and a technology collective that supports the architecture of the ecosystem.\nAt Thoughtworks, we help enterprises manage complexity with XR, starting by assessing their current maturity and then defining a clear path through the maturity levels to achieve their desired business outcomes.\nThat could involve anything from helping generate ideas and build innovative XR-enabled services through to developing an ecosystem that makes the best use of existing and new services and products as well as partners' capabilities and extended networks.\nRefer to the original article below, and find the use cases and examples of how our clients are advancing their XR maturity — and generating significant business value.\nThis article is co-authored with George Earle, Raju Kandaswamy and Hari R, and originally published at Thoughtworks Insights\n","id":99,"tag":"xr ; ar ; vr ; mr ; enterprise ; covid19 ; thoughtworks ; practice ; complexity ; benefits","title":"Managing enterprise complexity with XR","type":"post","url":"/post/managing-enterprise-complexity-xr/"},{"content":"The iFest This October, at Thoughtworks India, we're celebrating our 20th anniversary! That's 20 years of excellence in technology!\nFrom Thoughtworks' Innovation Labs comes the second edition of iFest - a month-long hackathon. I got the opportunity to participate in iFest as a judge.\nI was amazed to see the variety of ideas. All the ideas were great, and it was so hard to compare them.\nThe ideas reflected the Thoughtworks culture: using technology to solve the challenges society is facing. Menstruation awareness, opportunity aggregators for rural establishments, an IoT navigator for the specially-abled, plant pathology, feeding the ones who need it, enabling sustainable living and carbon saving, etc.\n","id":100,"tag":"xr ; ar ; vr ; mr ; thoughtworks ; event ; iot ; ai ; judge ; hackathon","title":"Judge - Thoughtworks iFest 2.0","type":"event","url":"/event/thougthworks-ifest2/"},{"content":"The XConf XConf is Thoughtworks' annual technology event created by technologists. It is designed for technologists who care deeply about software and its impact on the world.\nThis year, XConf will be a virtual experience available at two different time slots. October 7th and 8th will feature pre-recorded sessions, and October 9th will feature live sessions, with nine exciting talks spread across the three days.\n I am excited to share the stage with renowned technologists like Neal Ford, Dr. Pramod Verma and Chris Ford.\n I will be speaking with Raju Kandaswamy on \u0026ldquo;The Shift from Enterprise XR to Ethical XR\u0026rdquo;\nHighlights of the talk Understanding XR and its current stage Enterprise XR use-cases, and how it is moving from nicety to necessity A shift from Enterprise XR to Ethical XR, and why enterprises need to stay invested. 
Register here This is a registration-only event; please register here and join us on 8th Oct, 2021 : 9:30 am - 10:00 am IST | Replay: 5:30 pm - 6:00 pm IST\nEvent Wrap It was a great event with 1000s of participants around the world. Sessions were pre-recorded, so we were able to focus on the audience during the session. On 9th Oct, the finale was a live, well-moderated session by Prasanna Pendse.\nA great story emerged connecting all the diverse sessions. Dr. Pramod Verma talked about how Aadhaar and India Stack were built, and how building protocols fuels the building of platforms. There is so much relevance in Thoughtworks' effort of building Bahmni to uplift digital health infrastructure in low-resource environments, and in the talks on the acceptance of AI in the judicial system. We covered how XR can help provide better experiences for health while staying focused on the ethical use of tech; all of this needs to be built on a strong foundation that can be trusted and tested from security and reliability perspectives. That's where practices like security by design, shift left, and chaos engineering help.\nA mesmerizing session by Chris Ford on understanding music theory (science, maths, physics and art) using programming.\n For those who missed the live session, here is a video recording.\n Happy learning!\n","id":101,"tag":"xr ; ar ; vr ; mr ; thoughtworks ; event ; speaker ; xconf ; talk","title":"Speaker - XConf-2021 - Thoughtworks India","type":"event","url":"/event/thoughtworks-xconf-2021/"},{"content":"Sep 2020, a year ago, I was excited about being nominated for a year-long Global Leadership Development Program at Thoughtworks. At the same time, I was a little confused about what we would do in this program for that long. I must say it was very well executed, despite the challenges of Covid#19, with all of us (participants, speakers, coaches and other enablers) globally distributed and working from home. Expectations were set, and virtual meetings were scheduled with options to attend at each individual's comfort. The program was designed in multiple phases, with continuous feedback and some measurements. Now this program has come to an end for our batch, but it has left a long-lasting impact on us; I feel we are now stronger and more confident in what we do.\nI read a Forbes article that tells how important it is to train your leaders to drive a transformation, and I was amazed to see that Thoughtworks values the same. I briefly talked about this program in my article my impactful change stories. In this article, I am sharing my key takeaways from the last one year of this leadership journey. They are actually many and never-ending, as leadership is not a fixed destination; it is a continuous improvement and effort. I am covering the ones that have changed my way of working and thinking.\nStrength based leadership It is always easier to reflect on our strengths than to work on our weaknesses. The word weakness itself is negative, and bringing a positive change while working on a weakness is difficult. On the other side, working on a strength is already a winning path, and it is always an easy conversation when we talk to people about their strengths and work to add more there.\nI came across the StrengthsFinder book and its assessment. It helped a lot in finding my strengths; before this, I never thought these values in me were actually my strengths. Another important part I learned is blind spots: every strength comes with some blind spots, and we need to be conscious of them when applying our strengths. Here is an example. 
One of my strengths, “Adaptability”, allows me to stay productive when the demands of work pull me in many different directions; however, I need to be careful when I apply it while working with people who thrive on structure and stability.\nStrength-first thinking not only helps us grow, but also helps while working with and leading a team: we start thinking about people's strengths and encourage them to strengthen those further. The overall productivity of the team increases automatically, and it becomes a win-win for everyone. Remember, no one is perfect in this world; no one is without strengths and no one is without weaknesses. It's on us where we focus.\nCoaching is not what we think Just like we need doctors in life, I feel everyone needs a coach in life whom we can trust and share our things with. But be mindful: coaching is not what we think! I wrote about this topic in detail here. I have been coaching throughout the years, but I understood the real meaning of it in the last one year. Now I am more confident in coaching people and getting coached. I had never thought that just listening to people and re-affirming their plans and solutions could be so powerful. I learned the power of asking the right questions to get insights, and how coaching models like GROW can make the conversation much easier.\nNavigating through organisational politics I had been hearing and supporting statements like “There should be no politics in an organization”, “I left X organisation because of politics”, “So much politics in that organization, it will not let people grow”, “lot of favoritism politics there” … till last year. We always hear of politics in a bad light, and most think that it blocks people's growth and brings injustice into the system.\nHere is a different perspective I learned: politics is part of life. It is a natural aspect of human nature, and knowingly or unknowingly we all do politics. E.g. my younger son, just 4 years old, knows who can fulfill his demands the best: he goes to his grandfather to go outside to play any time he wants, he comes to me to play in the pool, and he goes to his mom when he wants to sleep. He knows who has what influence and capability, how he may get his things done, and whom he can easily convince.\nAn organization is made of people, and when we talk about humans, politics is bound to be there. So there is no organization without politics. We all have emotions, likes and dislikes; some of them match with others and some don't. We all have some influence/trust/friend circles, bigger or smaller, and unknowingly there are always some entries and exits. Now it is on individuals how they see these circles. If they find ways to navigate through these circles, then these circles (socially connected networks) are very helpful in getting things done. If they don't find a way, then they start blaming organizational politics. If we removed politics from an organisation, it would become an organization of machines, driven by a fixed set of rules. A people organisation can't work like that. We can only force a change in an organization, but we can't bring change in an org without politics.\nAs people of an organization, we need to focus on building our social network and politics on trust, transparency, and growth. 
Exercises like Stakeholder Mapping help in understanding the formal and informal structures of organisations.\nPrioritization and Delegation I learned a number of aspects and methods of prioritisation, and I now make sure I am not only doing tasks when they become urgent and important, but also doing some important tasks early to avoid burnout. Delegation is an important part of leadership, and I learned how to delegate well.\nBuild your own brand There were some great exercises to define Me Now and Me Next, with an action plan. What I liked most is the term personal brand. What is your personal brand, and how well have you communicated your brand to the world? I am still not a goal person; our life is not fixed, so a goal also cannot be fixed. However, last year I focused more on defining my brand, and now it's easy for me to explain it to others.\nRefer to the sections “Build your own brand” and “What is your WHY” of my previous article. I will continue working on my brand.\nWe had some great sessions in enlightening talks; I presented a brief about “What is a successful change”, derived from my article.\nLearning never stops, and so it is with learning for leadership. Leadership is not a goal but a continuous process; we need to keep refining it. Keep it going…\nHappy leadership!\nThis article was originally published on My Medium Profile\n","id":102,"tag":"thoughtworks ; motivational ; workshop ; takeaways ; coaching ; mentoring ; leadership ; development","title":"Leadership - A result of conscious and continuous effort!","type":"post","url":"/post/leadership-development-takeaways/"},{"content":"Recently, I was invited to share my 3-year Thoughtworks journey at the “Impact Series” by the Social Change Group, TW India.\n \u0026ldquo;Striving for positive social change is at the heart of our purpose, culture and work\u0026rdquo; — /thoughtworks\n I believe an impactful change is the change that changes you, and any social change starts from within, from our own journey. I shared my change journey and received a number of comments and feedback. Fellow thoughtworkers shared their views on how my impact stories were impactful for them, and I thought of sharing the same with a wider audience. This is a continuation of last year's journey.\nMy Theory of Change Change is the only constant; whether we like it or not, it takes place. The first step of a change journey is to accept the change. For acceptance, we need to analyse the impact of the change, and know the anchors (things that block your path) and engines (things that let you move) of the change. Some of the baggage you carry may block your path; read about success bias in my earlier article change to become more successful.\nThe next step after acceptance is Observe: it helps you navigate through the change and learn on the go. Don't react to situations too soon; give them some time, and observe them more. Then start Absorbing the things that are aligned; you need to believe in the change, and then become the change. This theory has helped me navigate through my Thoughtworks journey. Continue reading…\nCheck the ground - Believe in the change In my initial days at Thoughtworks (TW), I was asked to meet a lot of people in the organization, and I was just observing what people were saying and why they were saying it. You get a lot of advice and opinions: do this, do that. I was checking the ground of TW, and TW was checking mine. It was a huge change for me to absorb and believe in. You will hear a lot of new terms, and a focus on social capital. 
To believe in TW more, I asked operations for a project where I could start as a developer and see things on the ground. To my surprise, I was allowed, and appreciated for my decision. Luckily, I was able to contribute well there (I think so :), based on feedback; that's a very important part here).\nWhile observing things on the ground, I absorbed TW practices and culture, and became a TWer. I learned that we own our own journey; we need to define it and travel it the way we want. You can only lead with your influence here, not with your title or designation. Democratic decisions at different levels, autonomy on the ground, leaders working more as enablers, and much more; you may find more on this in my earlier article.\nOpinions Matter — Build your own brand Opinions flow like anything here: we hear and talk about them a lot, everyone has opinions, and you are asked to share your opinion too. On the floor, you will find people who are speaking at conferences, sharing about books, writing books, and sharing their thoughts and more. To match up, I started reading books, and got addicted to writing too. I realized that I had mentored and trained so many people in the past but had never shared things to improve my own knowledge and my social brand value. I started writing on Medium, and then set up my personal blog site\n Kuldeep's Space — the world of learning, sharing, and caring — thinkuldeep.com\n It is now home to 100+ articles on technology, motivation, takeaways and events. I started writing for myself, without worrying too much about the audience and how they would feel, and started getting a very good response from people. Just like social media addiction, reading, writing and sharing is also an addiction, and a good one to go for. My articles started getting good traction and have so far been published by ThoughtWorks Insights, DZone, LinkedIn Pulse, the XR Practices Publication, the German magazine TechTag.de, AnalyticsVidhya Magazine, and DataQuest India.\nSharing thoughts and opinions with others has not only helped me create my brand value at Thoughtworks and outside, but also helped me navigate through another important change, coming next.\nWhat is your WHY? — Bring the purpose 'Why' is such a commonly used word here, be it discussing business with a client, starting a new project, or scaling the organization. People ask: why? We even ask, why does Thoughtworks exist? So it becomes a culture to focus on the purpose of everything we do. This helped me to think: what is my why? Why do I need to share? Whom should I be sharing with? etc.\nI started collaborating more with students and enthusiasts needing guidance, by judging and mentoring them in hackathons, and conducting guest lectures and webinars for universities and communities. I built a community of 500 thoughtworkers and socially connected with 1000s of enthusiasts outside Thoughtworks by contributing at Nasscom, The VRAR Association, TechGig, the Great International Developer Summit (GIDS), Google DevFests and more. People ask me how I did it; the answer is, it is the result of the change I described in the last section.\nOne more thing changed my view of life: getting the chance to mentor Julie at PeriFerry. Thoughtworks does talk about diversity, inclusion and social justice, but experiencing it on the ground is life-changing. Social contributions at IndeedFoundation and dreamentor.in also give a purpose to my work. 
Bringing purpose and balance into life helps us pass through the difficult years that demand change.\nAgility in leadership — Use your strength Recently I read a Forbes article that tells how important it is to train your leaders to drive an agile transformation. Thoughtworks understands this well. I am participating in a leadership development program. It has not only connected me with global leaders in the organization but also helped me change my perspective. I learned about stakeholder mapping, prioritization, delegation, time management, difficult conversations, effective communication and more.\nThe topic of navigating through organizational politics was very interesting; it changed my perspective on politics. I have shared a few learnings about coaching — coaching is not what you think. I have learned about my strengths and how I can use them well to expand my leadership skills. Before this, I never thought that the things which are so natural to me are actually my strengths, e.g. Adaptability, Action-oriented, Connectedness, etc. It also helped me learn about the side effects of my strengths, or the blind spots where I need to be a little conscious when applying my strength.\nAnother great part was the Personal Brand Exercise, which helped me define “Me Now” and “Me Next” and plan around them.\nWith this, I would like to conclude this reflection on a 3-year journey with a statement that I saw recently at a hospital while getting vaccinated.\n We can't change the time so it's time to change ourself.\n This article is originally published at Medium\n","id":103,"tag":"general ; thoughtworks ; impact ; motivational ; life ; change ; reflection ; takeaway","title":"My impactful change stories","type":"post","url":"/post/my-impactful-change-stories/"},{"content":"A good consultant has to be a good storyteller. I feel you become a storyteller when you start making your own stories, so that you can explain a point of view in a better way. I started writing stories while practicing storytelling, and found that story writing is so powerful; it helps bring our imagination into reality. Here I am sharing the story of the BlackBox, a character that inspires me the most, and will inspire you too.\nLet's start the story with my friend Sreedhar, the youngest and a pampered kid from childhood, who grew up in a small town in Uttarakhand (India). He was considered the brightest child in his school, and managed to get into a premier engineering institute of India for graduation.\nSreedhar and I used to enjoy attending English lectures standing, if not thrown out of the class. Picture Source\nThis brought us closer, and he always gave me the pride that my English was better than at least one person's. The brightest kid of school time was somehow lost in college. He generally kept quiet, and year on year was accumulating courses to be cleared. He used to bunk most of the classes, but was famous for two things. One was smoking: he had wrapped the walls of his hostel room with stacks of cigarette boxes. The second was black-screen shell programming in Unix.\nHe painted the room black, and most of the time we saw his lips sucking cigarettes and his fingers dancing on the keyboard. Everyone else from the computer branch was called Box (“Dabba” in Hindi), but he got the title BlackBox; he was always on a quest to solve the mysteries of the college computer networks, especially when our hostel network crashed. 
Just like the black box in an aircraft keeps a record of all the flight data, useful in troubleshooting emergencies.\nThe final year came, and most of us were busy preparing for campus interviews or celebrating selections, while a few still had the courage to go for higher studies. He was in none of these groups.\n He used to tell me “I do things that add value to me”.\n I was not sure of the value, but I was sure that the number of subjects he had accumulated to clear would keep him busy for the next few years. The final year passed so quickly; we landed in the corporate world, and left the BlackBox in his own created lost world, a black room full of cigarettes, cables and a PC.\nTwo years later, we got together at convocation. The BlackBox was still there, playing with the cables and the cameras placed in the convocation hall. It was nothing less than magic to see him controlling all the cameras from his room at that time.\n He smiled after looking at my degree and said I would not find him there. He was indicating that he wasn't there for this piece of paper.\n I did not give it much attention then, and left college as a proud engineer. Years passed; we all settled into our daily lives, trying to face every problem of life in the name of agility and transformation.\nPicture Source\nI was at one of the biggest conferences of the Java world in the US, to present how we can automate Java migrations (a hot topic in those days). I was excited about my first talk on such a big platform, but 5 minutes before the start I noticed some hiccups in the network between my standard laptop and an advanced audio/visual system from some BeeBee company.\n Unknowingly I said “Where is the BlackBox?”, and a fellow next to me gave me a surprised look. I was thinking of the BlackBox at that moment, who used to help us in such situations.\n A minute later, a person wearing a BeeBee T-shirt came to me asking for my consent to apply a patch to my PC from their handheld device, and I had no reason to say no. It was magic: the issue was fixed before I could even raise it. The BeeBee person handed me a card. I noticed that the BeeBee logo was all over, from the entry, ID cards, stalls and showcases, to the exits, and everything was automated. I turned the card over, and saw that BeeBee means “BlackBox”.\nDisclaimer :The story is not linked to the Picture Source\nQuickly I finished my talk, started searching on yahoo! (a popular search engine at that time) about BeeBee, and came to know that it is a company that produces and manages advanced A/V systems for large-scale events. I noticed a person calling me from behind, holding a BeeBee tracker. I turned, and guess what! I found the BlackBox again: the founder of BeeBee, Mr. Sridhar Kodian, holding 50 patents in the area of networks and wireless communication.\n And I understood what he meant when he used to say “I do things that add value”.\n This story is originally published at My Medium Profile\n","id":104,"tag":"story ; motivational ; consulting ; storywriting ; storytelling","title":"The BlackBox!","type":"post","url":"/post/story-blackbox/"},{"content":"eXtended Reality or XR is an umbrella term for scenarios where AR, VR, and MR devices interact with peripheral devices such as ambient sensors, actuators, 3D printers and internet-enabled smart devices such as home appliances, industrial connected machines, connected vehicles and more. 
More details can be found here\nAt Thoughtworks, we have been talking about the Rise of XR for enterprises, and evolving human interactions through our lens of looking glasses.\n Evolving interactions - Consumers aren’t just demanding availability and accessibility — they expect interactions to be seamless, and richer. Enterprises can deliver this experience through evolving interfaces that blend speech, touch and visuals.\n Recently our marketing specialist Shalini Jagadish interviewed me, and we had an interesting conversation on how evolving human behaviour is driving XR adoption in enterprises, and on the need for ethical XR. Key excerpts are here.\nWhat exciting technological developments are we seeing in the XR space? How are they going to change the world around us? It is really exciting to talk about this technology. With advancements in hardware and internet speed, and especially in computer vision technologies, I feel digital transformation is reaching a completely new level.\n “Smartphones are getting smarter, glasses are getting smarter and are empowered with computer vision tech that can sense the environment”\n Things like lidar sensors are coming to mobile devices; they can 3D-scan the physical world and replicate it in the digital world in real time. These capable devices can now track your hands, face, lip movements, eyes and whole body, and all this brings new ways of interacting. For example, when your hands are busy working, you can interact hands-free with smart devices through head movement, gestures or just by talking to the device.\nXR brings the tech right into the context where it is needed the most. If I need to see a help manual while working in the field, it just brings the digital manual in front of my eyes with almost no interruption to my work. If I need to take help from an expert sitting remotely, I can share what I see in the field with the expert live, and the expert can guide me right there with markers and annotations. We can record and replay situations from the physical world with almost no time and cost, and troubleshoot them better.\nWe will soon start believing in the digital virtual world as we do in the real physical world.\nHow are enterprises adopting XR technology? I call the traditional digital business the unextended digital business; it is losing around 20–40% of productivity due to old technologies. Enterprises are now focusing on building XR platforms that can smooth the adoption of Enterprise XR (e.g. the ThinkReality platform). Enterprises are working on enabling their existing products and services with XR (e.g. Flipkart’s Virtual Product Trial). The article below explains the top five things to consider when business leaders embark on their Enterprise eXtended Reality journeys.\n Making pandemic-proof workplaces a reality with XR - Enterprise XR is gaining traction by making pandemic-proof workplaces a reality. Here we describe the key considerations of the enterprise XR journey.\n XR is already a proven tech in the entertainment segment, and now it is going beyond entertainment and getting adopted faster. In industrial, manufacturing and automation settings, for example, it is being explored for use cases like training, remote collaboration, task-based workflows, compliance, service and maintenance.\nHealthcare is also using it for training staff and experimenting with VR therapy, where a patient is introduced to a virtual environment that can help treat anxiety or psychological problems.
Try-before-you-buy and fitment checks are popular use cases for XR in the Retail and Real Estate domains. Indoor positioning is another big area where the industry is exploring opportunities using XR tech. The online/digital education segment is also growing fast at this time, and by bringing more realistic interactions with real-looking 3D objects, as compared to traditional paper pictures, it is helping students understand and memorize concepts faster.\nShould businesses consider investing in this technology in the post-Covid world, where people are still sceptical about going out? We are entering a new normal, where human behaviour toward many things will change. Read more here about the top 5 behaviour changes —\n The future of shopping - eXtended Reality (XR), which includes AR and VR technologies, has been silently disrupting the gaming industry for more…\n We will start believing in the digital world more than before; if technology can ease life, we tend to accept it with its pros and cons. For example, today carpooling with strangers in a private taxi like Uber/OLA is quite common; it wasn’t the case earlier. Online schools have been running for more than a year now, and we have accepted them as a normal thing. Working from home, or from anywhere, is becoming common in enterprises, and it wasn’t the case two years ago.\nToday, to buy furniture for my home, I don’t feel the need to go to a store, take measurements and check the fitment; rather, I would prefer to try the furniture directly at home with an XR app, and believe it as if I were placing real furniture. This is the much safer option today. The same goes for repairing a home appliance: if I can fix it with the help of a remote expert, I would probably go with that and pay the expert online. If I can teleport a person in 3D in front of me and collaborate in comfort, then I don’t really feel a need to travel. If I can work from home in comfort, then I don’t really feel the need for a permanent physical office. So things are changing, XR tech is evolving at a faster speed, enterprises need to stay relevant, and it is even more important to include such tech in our digital transformation journeys.\nWhat is Thoughtworks doing in XR? We understand that XR is moving from nicety to necessity; it is no longer just hype, and it has moved beyond Gartner’s hype cycle. We have observed serious use cases where our clients are benefiting from this tech.\nWe see XR adoption as a journey rather than just a few projects. We are helping our clients define an XR journey, identify the right use cases to make their products and services XR-enabled, integrate with their existing enterprise services, and get onto a roadmap for an immersive XR ecosystem.\nLooking at the data we unconsciously provide to these technologies, should we be worried about a robot takeover or mind control with XR? Yes, I understand your concern. It is a very valid concern. XR tech becomes more powerful when it gets integrated with other tech such as IoT, AI, 3D printing, robotics, wearable solutions, 5G and brain-computer interfaces. Advancements in these other technologies are making XR more advanced. Today’s smart glasses will be tomorrow’s smart contact lenses, and today’s smartphone will be tomorrow’s wearable compute unit charged by body energy. We will be living the tech.\nAs discussed earlier, XR is all about changing belief; XR solutions are designed in such a way that we believe a virtual replica of the physical world is a real object, and get immersed in it.
Just like any other tech, this one also comes with some side effects. It can produce virtual fake environments and fake situations very easily, and it is mind-changing too. This can easily be misused to brainwash people into following unwanted motives. As XR gets more powerful, we need technology to identify what is fake and what is real. The industry needs to invest in standards and practices for Ethical XR that XR devices can follow, to stop this technology from being misused.\nIt is originally published at XR Practices Medium Publication\n","id":105,"tag":"xr ; ar ; vr ; mr ; enterprise ; covid19 ; thoughtworks ; practice","title":"Enterprise XR to Ethical XR","type":"post","url":"/post/enterprise-to-ethical-xr/"},{"content":"The world is finding ways to get out of the Covid impact; some are on the recovery path and some are still battling it. At ThoughtWorks, we have recently published our insights on navigating the new normal with augmented reality. Members of the Communities of Practice of The VRARA came together to discuss the topic and share opinions on the way forward for XR.\nI presented the Thoughtworks insights to other renowned industry leaders and VRARA members like Pradeep Sharma, Swarup Choudhury and Rajiv Madane, and technology experts like Pradeep Naik and Dhruv. We also discussed other updates from the VRAR Association.\nNavigating the new normal with augmented reality A research study on how the pandemic is impacting people working from home, and how XR technology can address some of the challenges. Is the 3D/XR experience really enjoyable and creative? What are the challenges in building the right XR experiences? Why is onboarding so important in XR solutions? Why keep customer awareness part of the go-to-market strategy? etc…\n Navigating the new normal with AR Meetings, workshops, conferences, and webinars are happening in the same virtual - and often monotonous - ways.\n We also discussed the best practices for building spatial solutions described here. The session was then ably driven by Swarup, and we discussed the future of XR collaboration apps. Pradeep Khanna shared insights from the VRARA on the same, and also shared what’s coming; he is presenting on some interesting topics at upcoming VRARA events and the VRAR Global Summit. Rajiv shared the importance of practical XR use cases for its adoption.\nWe also discussed how marketplaces/XR platforms are growing in enterprise use cases - as described in this article.\n Making pandemic-proof workplaces a reality with XR The IMF called the COVID-19 crisis ‘unlike any other,’ where the global growth contraction for 2020 was estimated at…\n Watch our detailed conversations here -\n This article originally published at XR Practices\n","id":106,"tag":"xr ; practices ; speaker ; discussion ; event ; thoughtworks ; vrara ; ar ; vr","title":"Discussion - Navigating the new normal","type":"event","url":"/event/thevrara-navigating-new-normal/"},{"content":"This article is originally published at Thoughtworks Insights, co-authored with Aileen Pistorius, Julie Woods-Moss and Anna Bäckström.\nTime was when virtual and augmented reality were regarded as simply gaming technology — the preserve of fun, consumer applications with little serious consideration for the enterprise. But that’s in the past; in the post-COVID world, remote working will be ubiquitous — forcing enterprise leaders to find innovative ways to improve collaboration.
This, it transpires, is the sweet spot for what’s termed enterprise XR (extended reality) — a group of related technologies that enable us to blend the physical and digital worlds.\nThis urgent need to enrich remote working practices has been accompanied by the rapid evolution of XR-enabled devices. The once ugly, heavy and impractical smart glasses and heads-up displays are being superseded by a new generation of sleek devices, which have dissolved the barrier to adoption.\nRemote fatigue COVID-19 has forced us into new behaviours, one of which is working from home, leading to long hours on Zoom or other video conferencing platforms and little to no personal interaction with our colleagues.\nMeetings, workshops, conferences, and webinars are happening in the same virtual — and often monotonous — ways. We’re stuck with a screen in almost the same pose and posture, most of the time sitting down, which may well have longer-term health consequences. “According to the American Psychological Association (APA), remote, and in many parts of the world isolated, online working also has psychological effects: feelings of apathy, anxiety and even depression have been reported since the onset of COVID-19.\nMore specifically, why do we experience virtual meetings as more energy-consuming than meeting our colleagues and clients face-to-face? Psychologists emphasise that this is because of the lack of social and nonverbal cues, which can form as much as 80% of human communication. Combined with social isolation, it is all the more important to focus on mental health alongside physical health”, says Anna Bäckström, Principal of Consumer Science at Fourkind, part of ThoughtWorks, and Adjunct Professor of Social Psychology at the University of Helsinki.\nEven now, with the re-opening of offices around the world, it will be difficult to go back to business as usual while maintaining the social distancing norms. With constraints here to stay for the foreseeable future, it’s time to shape and rethink new ways of working and interactions.\nCan XR help here? XR provides a strong opportunity to extend our world from 2D to 3D, and, by adding wearables to the experience, to simulate a ‘real life’ experience. But could this be used in a valuable and productive way in a work environment?\nAt ThoughtWorks, we built an XR incubation center that lets us conduct XR experiments while collaborating on agile development practices like TDD (test driven development) and CI/CD (continuous integration/continuous delivery) for XR; we have also open-sourced a test automation framework named Arium to fast-track XR application development. All this work with XR tech has helped us understand the space more deeply. Recently our CMO sponsored the incubator to explore how XR could improve team working in a virtual environment. This resulted in us building an app called ThoughtArena, an augmented reality-based mobile app that allows users to create a multimedia space with virtual boards in which to collaborate. With this experiment, we wanted to test our hypothesis that XR can improve engagement, enjoyment, creativity and productivity while working remotely.\nWe engaged with just under 60 users to test their experience and reactions to the ThoughtArena tool. The testing was split into two parts: a quantitative and a qualitative test.
The quantitative one focused on usability and the experience itself, and the qualitative one focused on the behaviour and expressions of the users while they were using the app.\nUsers found collaborating on a whiteboard in an AR (augmented reality) environment more interactive, enjoying the freedom of being able to move around their space and do this wherever they were: in their living room, office, kitchen or even outside. It’s common knowledge that changing your environment and moving around leads to improved creativity and productivity, which is made possible by AR’s flexible usage.\nThis improved interaction is driven by the fact that XR tech’s spatial solutions allow users to fully immerse themselves in the virtual environment and interact in a more natural way. In comparison to a video call, or more traditional methods of online training or meetings, which mostly require people to be in the physical environment, the AR equivalent is limited only by the individual’s imagination.\nThe user testing showed that the augmented board gave people the sense of a real physical board. Furthermore, users reported an increase in creativity and productivity due to the opportunity to share ideas via images, videos or text notes and not just sticky notes on a physical whiteboard. Some users videoed their experience while testing the app, as you can see here.\nChallenges of an evolving industry While XR spatial apps may provide a remedy for Zoom fatigue and lead to a healthier and more interactive work style, there are some challenges to be aware of too.\nWe need to be careful while designing such interactions, for several reasons. One is that too much movement while holding or wearing the devices that bring the user into the extended reality may cause physical fatigue, and this could negatively affect the user’s creativity and concentration levels.\nWe must also be aware of the physical space available, in order to minimise the risk that users might fall if they are in an immersive environment and not conscious of the real environment while interacting with virtual objects. With advanced phones with Lidar sensors, you can also detect objects in the environment and warn users of obstacles, or guide the user forwards safely.\nAnd last, but not least, phone-based XR applications, where the user needs to hold a phone in an awkward position for a longer duration, could cause a strain on hands and arms. This can be addressed by focusing on HMDs (head mounted displays), rather than purely phone-based applications. However, these HMDs are still a long way from becoming mass-market ready. We see a very good first attempt with Oculus Quest; however, this is just the first of its new generation, and older versions still lack precision. We are looking forward to seeing more progress in this space.\nXR tech is evolving at such a rate that the untapped potential boggles the mind. The standards and practices are growing from the learnings and experiments the industry is undertaking, but there are still many standards and practices that need to be defined for real immersion.\nWhat’s next and where to start? Hybrid and home working is likely to be one of the structural changes that persists well beyond 2020 (Forrester, Predictions 2021: Remote Work, Automation, And HR Tech Will Flourish). Technology will have a role in enabling this mode of work beyond the video conferencing experiences of today.
From our study, whilst some challenges were uncovered, AR technologies will no doubt have a value-creating role going forward. If you or your company decides to experiment with XR as a potential solution to tackle the changing customer experience and expectations that you face now, start small.\nStart by creating awareness. Experience the devices; set up studio environments in the organization to give people the opportunity to learn via incubated practices or hackathons. By doing this, you will drive innovation and generate ideas and MVPs (minimum viable products) for products and solutions that are customer-first; after all, XR is all about the user experience. From there, invest in capability building and up-skilling your people in the sales/pre-sales area. In an evolving digital landscape, XR technology will play an ever-increasing role, so the time to begin your exploration is now.\nThis article is originally published at Thoughtworks Insights, co-authored with Aileen Pistorius, Julie Woods-Moss and Anna Bäckström.\n","id":107,"tag":"xr ; ar ; vr ; mr ; thoughtworks ; technology ; covid19 ; learnings ; futureofwork","title":"Navigating the new normal with augmented reality","type":"post","url":"/post/navigating-new-normal-xr/"},{"content":"The IMF called the COVID-19 crisis ‘unlike any other,’ with the global growth contraction for 2020 estimated at -3.5 percent. Image source: Using Lenovo ThinkReality A6 Glasses at the Workplace\nHere is a quick overview of the biggest challenges that CXOs (and their businesses) have faced over the recent past, to better illustrate how the pandemic is affecting businesses:\n Businesses are grappling with slowing global supply chains Sales and production growth is further impacted, with a subsequent dive in opportunities for partnerships and growth The flattened or negative-growth market results in less manpower and fewer resources to work with, further affecting productivity The five biggest challenges that CXOs are facing today\nOn the other hand, consumer behaviour has irrevocably changed – the way people shop, consume education, travel etc. In trying to meet these challenges, leaders are crafting comprehensive business continuity plans that include what’s known as the ‘pandemic-proof infrastructure.’\nPandemic- or crisis-proof infrastructure is characterized by contactless, hygienic, location-agnostic, remote and/or virtual working environments.\nSuch a setup will help business leaders:\n Adapt to and upgrade work environments to better suit the new normal Onboard new and old employees into a new work environment Better understand and work with quickly evolving business and customer priorities Given the scenario described above, we clearly see the potential that emerging tech like eXtended Reality (XR) has to help enterprises crisis-proof their workspaces.\nEnterprise XR (EXR) goes mainstream XR tech has progressed from Gartner’s hype cycle and the confines of the consumer (gaming) industry to become a mature technology ready for business consumption.\n According to Digi-Capital - “Enterprise VR active users are beginning to reach critical mass in the training vertical…Enterprise VR training company, Strivr partnered with Walmart to roll out 17,000 Oculus Go headsets loaded with its software to 4,700 stores and 1 million employees, as well as Verizon using it to train 22,000 employees across the US.”\n And, when we look at reports like IDC’s prediction of global AR/VR spends reaching $136.9 billion by 2024, or news of Microsoft winning a U.S.
Army contract for augmented reality headsets, worth up to $21.9 billion – it’s evident that (public and/or private) enterprise-wide adoption of XR is on the rise.\nSo, can XR really crisis-proof workplaces? XR tech can simulate physical environments in immersive, virtual ones, thus creating a life-like experience. This helps entire workforces collaborate across remote locations, building on the ‘work from anywhere’ norm. The same principles also make XR an efficient medium for remote upskilling of a large workforce.\n A recent HBR Analytic Services study stated, “[XR is] a key part of enterprise digital transformation initiatives. It is emerging as an important tool to improve employee productivity, training, and customer service.”\n Five considerations for EXR journeys ThoughtWorks has put together the top five things to consider when business leaders embark on their Enterprise eXtended Reality journeys:\n Enterprise integration: EXR solutions have to integrate with various enterprise systems - from Identity and Access Management (IAM) to Enterprise Risk Management (ERM) to Customer Relationship Management (CRM) and other proprietary services.\nThey must comply with organizational standards, processes and policies by integrating with Mobile Device Management (MDM) solutions. Additionally, EXR solutions should allow for a consistent user experience (and design) across an organization’s systems (e.g. a single sign-on across websites, mobile or desktop apps).\n XR content management: Conversations around XR content (like what’s used in gaming) are largely focused on the consumption stage, i.e. the visualization and immersive quality of the content. However, the journey begins earlier, at the generation and publication stages, where EXR is yet to make significant progress.\nIt’s ideal for Enterprise XR to support entire content management pipelines – generating multi-format content leveraging CAD, CAM, 3D models, 360° videos etc. This ensures proactive and seamless integration across media platforms/channels. It also encourages efficient accessibility, publishing and management of the content.\n Customizations: EXR adoption will grow when solutions are software- and hardware-agnostic and customizable to the needs of enterprises. For instance, EXR must be compatible with software apps and web services running on cloud, on-premise or hybrid setups, and with evolving infrastructure.\nEnterprise XR adoption benefits from a vibrant partner ecosystem, as that guarantees integration with a variety of devices, development tools and frameworks like Unity, Unreal, Vuforia, and others, alongside industry-leading components like Google ARCore, Apple ARKit, MRTK etc. EXR also expects interoperability to enable existing mobile and web apps with XR, which helps leverage existing investments.\n Monetizing solutions: EXR solutions can be a critical use case with measurable ROI. Organizations can implement turnkey XR solutions for remote assistance, guided workflow and training; remote data visualization, design collaboration and compliance. These solutions can be monetized by offering them for wider industrial usage.\n Standardization: The Enterprise XR space should implement software development best practices like Agile principles, open source, etc. Standardization matters because there is a multitude of tools and tech stacks used for EXR solutions.\n While XR has the potential to reorient the enterprise environment, enterprises face multiple challenges with the XR development lifecycle.
One gap is the automation of testing XR applications. ThoughtWorks developed Arium – an open-source, lightweight, extensible, platform-agnostic framework that helps write automation tests built specifically for XR applications.\nUse case: smart XR technology for the enterprise Earlier this year, Lenovo introduced the ThinkReality A3 - “The Most Versatile Smart Glasses Ever Designed for the Enterprise.” One of the market’s most advanced and versatile enterprise smart glasses, the ThinkReality A3 is part of a comprehensive digital solution offering to deliver intelligent transformation in business and bring smarter technology to more people.\nImage Source - Lenovo ThinkReality A3\nVikram Sharma, Director of Software Development at Lenovo, explains how their ThinkReality Platform is designed to build custom XR solutions for a diverse range of hardware and software stacks that can easily integrate with existing enterprise infrastructure. It comes with a powerful Software Development Kit (SDK) coupled with a management portal that bakes in cloud and device services.\nThinkReality headsets are loaded with enterprise-ready services where users can log in, work in tandem with the assigned task workflow and log out upon task completion. The device is compatible with dynamic enterprise device usage policies that can be enforced via integrated MDM or from Lenovo’s cloud services.\nVikram elaborates on how the ThinkReality platform works well with holo|one for workflow management, and with Uptale for onboarding and re-skilling the workforce. There are multiple usage opportunities with such a tool. For example, enterprises can use it to train personnel on new products, or support troubleshooting or repairs from anywhere in the world.\n Lenovo’s ThinkReality Platform is a great example of EXR gaining traction, investment and subsequent advancement and maturity. Lenovo’s use case is a clear signifier that business recovery, distributed workforces and hybrid work models are pushing small and large businesses to adopt new technologies like XR for smart collaboration, increased efficiency, and lower downtimes.\nI would like to conclude this article with a quotation from ThoughtWorker Margaret Plumley, Product Manager and Researcher -\n “Most people when they think of AR, VR, they think of games, they think of consumers, but they’ve actually been used in enterprises with really positive returns. You’re seeing actual cases where time on task is reduced by anywhere from eight to 30% in industry. So, there are some very real enterprise level applications out there with very real results.”\n This article is co-authored by Raju Kandaswamy and originally published on ThoughtWorks Insights\n","id":108,"tag":"xr ; ar ; vr ; mr ; usecases ; thoughtworks ; technology ; lenovo ; enterprise ; thinkreality","title":"Making pandemic-proof workplaces a reality with XR","type":"post","url":"/post/making-pandemic-proof-workplaces-a-reality-xr/"},{"content":"BML Munjal University, a Hero Group initiative, is organizing the 4th edition of its hackathon, named HackBMU 4.0. It is a hackathon focused on promoting innovation, diversity and networking among all future hackers. We will be hosting upcoming engineers and developers from all over the world to create mobile, web, and hardware hacks in an intense session.\nHackBMU seeks to provide a welcoming and supportive environment for all participants to develop their skills, regardless of their background.
Last year and the year before, HackBMU garnered huge attention, with 30+ teams from around the country. This year, I will join the event as a Judge.\n23rd Apr 2021 - 25th Apr 2021 Join the Discord server: https://discord.gg/k65esbepEs Register here - https://hackbmu-3.devfolio.co/ Closing ceremony\n Experience and Conclusion It was a well-organized event; kudos to the organising team. We saw participation from schools and colleges on very interesting topics. The young minds were solving today’s problems and building futuristic solutions.\n A virtual trainer for gym and yoga, an app that tracks your posture and corrects you A tracker for differently-abled people, integrated with alerts and geo-fencing, GPS, SOS Social networks for building a hackathon culture, and a platform to manage college events A service aggregator for the needy in Covid times A Covid portal with authentic information and a bot Disease prediction straight from medical reports and x-rays Waste segregation and social profiling Cryptocurrency payment gateways An economical artificial arm for the physically disabled Selecting the top 3 teams from 800 participants and hundreds of ideas was not easy.\n","id":109,"tag":"iot ; ai ; xr ; ar ; mentor ; judge ; event ; university ; hackathon","title":"Judge at HackBMU 4.0","type":"event","url":"/event/bmu-hack4/"},{"content":"When the word “Coaching” comes to my mind, one name always pops up: the cricket legend Sachin Tendulkar and his coach Mr. Ramakant Achrekar, who gave him cricket. This example also helps us break the myth that coaching is only needed when there are performance issues, or that those who are under-performing need it the most.\nSachin Tendulkar with his coach — Source\nIf Sachin Tendulkar needs a coach, then we all need a coach in our lives to get going. The corporate world is also realizing the importance of coaching. Organizations like ThoughtWorks focus on coaching as one of the key parts of their cultivation culture. Being a ThoughtWorker, I am expected to coach people, and I also have a coach to keep me going. We do have mentors, trainers and consultants, but in this article I am restricting myself to coaching.\nAs a part of our cultivation programs, we also go through workshops and training to understand what effective coaching means in the corporate world.\n Is it about advising people? Is it about sharing our own experience and letting them follow it? Is it about providing solutions to the problems of a coachee? Is it the same as what a sports coach does?\n I recently attended a coaching skills workshop as part of our talent growth programs, hosted by Priya Kher from Collective Quest, where we discussed a number of such questions. Here are my key takeaways from the workshop.\nWord “Coaching” does not come from Sports Yes, it may surprise you, but coaching in the corporate world, or at ThoughtWorks, is not the same as coaching in sports. The word “coach” comes from a coach in a train, which lets you move and takes you to the destination that you have defined. So as coaches, we are not supposed to define where the coachee should go; rather, we focus on letting them go in the direction they want to go.\nCoaching is a “no advice” zone Yes, coaching is purely a no-advice zone. As humans, this can be the difficult part, as we tend to advise people based on our rich experience, especially when someone comes across a similar situation that we have already gone through. We try to provide solutions to the problems and issues that the coachee comes with.
But this is not coaching; it may fall on the mentoring, training, or consulting side of cultivation, depending on what we do.\nAs a coach, we don’t need to have all the answers, and we should let go of our habit of providing fixes based on our own assumptions.\nOne size doesn’t fit all, or our glasses may not fix the eyesight of others just because they have fixed ours.\nGet insights It is important to first get insights into the ask, before really getting into actions. It requires building a good rapport with the coachee so that they open up; the coach has to listen actively and empathetically. Sometimes the most undervalued tool, “paraphrasing”, helps a lot. It boosts the confidence of the coachee that they are doing fine and that they can do it. Body language also plays a great role here.\nWe also discussed some good and bad coaching examples.\n Remember, or believe, that the coachee already has all the resources and skills to handle the situation. We just need to ask powerful questions that provoke the coachee to find more ways to solve their problems on their own. Let them take time to understand the questions well and come up with answers.\nCoaching framework and tools There are some good coaching frameworks, such as “GROW” — Goal, Reality, Options, and Will (or Way forward). It gives more structure to the conversations and may make the coaching more effective.\nGROW Model — Source\nWe can also use a spider chart or a 0–10 scale to give weightage to different options or facts.\n Roleplay for GROW model Not all situations can be solved by Coaching Remember, we cannot solve all situations with our coaching skills; we may need to switch to the other side, mentoring, training or a managerial conversation, based on the situation. For example, you may not want to handle promotion or salary review concerns with your coaching skills. So, it is important to set the right expectations for the conversations, from both sides.\nConclusion In conclusion, I would like to say that everyone needs a good coach who can provoke us to find ways, and to choose the optimal way out. A coach also needs to practice effective coaching; there are some good tools that can help structure the coaching conversations. It is in the best interest of both the coachee and the coach to set the right expectations for the conversations.\nHappy coaching!\nThis article was originally published on My Medium Profile\n","id":110,"tag":"thoughtworks ; motivational ; workshop ; takeaways ; coaching ; mentoring","title":"Coaching is not what you think!","type":"post","url":"/post/coaching-workshop-key-takaways/"},{"content":"Building AI Elements of AI Product Management Enterprise Architecture Internet of Things (IOT) User Experience Design Teradata Certified Java Certified Programmer ","id":111,"tag":"me ; profile ; certificates ; AI ; product management ; architecture ; enterprise ; technology ; IOT ; java ; design ; user experience ; about","title":"Certificates","type":"about","url":"/about/certificates/"},{"content":"Enterprise XR is gaining good traction, and software development’s best practices like Agile principles really matter, given the multitude of tools and tech stacks used for Enterprise XR solutions.\nEnterprises do face multiple challenges with the XR development lifecycle. One of them is the automation of testing XR applications.
In response to this gap, ThoughtWorks has developed Arium – an open-source, lightweight, extensible, platform-agnostic framework that helps write automation tests built specifically for XR applications. Read more about Arium here.\nWe are organizing a workshop on Agility in Enterprise XR at the Agile Engineering Lab, in collaboration with Agility Today 2021, where we will cover the basics of XR (AR/VR), understand how they translate into building XR applications for enterprises, and apply Agile practices like TDD and automation testing.\nJoin me and Neelaghya at the workshop with my team. Register here, it’s free: https://www.townscript.com/e/AT2021-AEL-XR\nSave the date: 6th Mar 2020 at 1:00-5:00PM IST During the workshop ","id":112,"tag":"xr ; thoughtworks ; agilitytoday ; talk ; event ; speaker ; workshop ; agile ; ar ; vr ; practices","title":"Speaker - AT2020 - Agility in Enterprise XR","type":"event","url":"/event/at_2020_agilexr/"},{"content":"NASSCOM is the premier trade body and chamber of commerce of the tech industry in India and comprises over 2800 member companies, including both Indian and multinational organisations that have a presence in India. The Center of Excellence - IoT and AI is an initiative from NASSCOM and a collective effort of different state governments in India.\nNASSCOM CoE is launching a first-of-its-kind virtual SMART MANUFACTURING COMPETENCY CENTER (SMCC), an innovation platform that will help accelerate technology adoption by bringing enterprises, governments, researchers, and innovators together. SMCC is designed for senior leaders from manufacturing companies to understand the application of Industry 4.0 in solving manufacturing challenges, through demos of digital solutions from the solution providers.\nThe startup zone is a key element of this competency center, and to ensure the best solutions come forward for the demos, we are hosting a challenge for mature startups with proven digital solutions in the manufacturing domain. We have 28 booths across 7 categories of use cases, and we plan to allocate these booths to the startups that emerge as winners and finalists in the evaluation process.\n Challenge Categories Condition monitoring/predictive maintenance of high-value assets Supply chain management AR/VR-based solutions for resource optimization Smart energy management AI and analytics solutions for process improvement Workers’ health & safety Computer vision/IoT solutions for quality inspection I have been a juror at the event.\n5th Feb 2021 - 3rd March 2021 Details here Experience It was a great experience talking with startups about interesting ideas on predictive maintenance, condition monitoring and computer vision technologies. Cost-effective solutions such as drone-based inspection of large areas and automated robots for dry-cleaning solar panels, along with solutions using IoT sensor suites and platforms for predictive and prescriptive maintenance, were really good. Automated defect detection, video analytics and augmented reality guides were other interesting solutions.\n","id":113,"tag":"iot ; ai ; xr ; ar ; mentor ; judge ; event ; nasscom ; hackathon","title":"Juror - SMCC Startup Challenge by Nasscom","type":"event","url":"/event/nasscom-smcc/"},{"content":"Self-driving cars are a good example of what artificial intelligence can accomplish today.
Such tech-centered advances are built on the back of highly trained Machine Learning (ML) algorithms.\nThese algorithms require exhaustive data that helps the AI effectively model all possible scenarios. Accessing and pooling all that data needs complex and very expensive real-world hardware setups, aside from the extensive time it takes to gather the necessary training data in the cloud (to train the ML). Additionally, during the learning process, hardware is prone to accidents and wear and tear, which raises the cost and lead time of training the AI.\nTechnology leaders like GE have side-stepped this hurdle by simulating real-world scenarios using controlled-environment setups. These controlled-environment simulations coined the term ‘Digital Twins’. A digital twin can model a physical object, process, or person, with whatever abstractions are necessary to bring the real-world behaviour into digital simulation. This is useful to data scientists who model and generate the data needed to train complex AI in a cost-effective way.\nAn example is GE’s Asset Digital Twin – the software representation of a physical asset (like turbines). In fact, GE was able to demonstrate savings of $1.6 billion by using digital twins for remote monitoring. Research predicts the digital twin market will hit $26 billion by 2025, with the major consumers of this technology being the automotive and manufacturing sectors.\nTools at hand for digital twins Through experimentation, Extended Reality (XR) and the tools used to create XR have been found best suited to leverage digital twins with ease. One of the most popular tools used to create digital twins is Unity3D (https://unity.com/), also mentioned in the ThoughtWorks Technology Radar.\nUnity3D enables easier production of AI for prototyping in robotics, autonomous vehicles and other industrial applications.\nUse case: autonomous vehicle parking Self-driving cars have a wide domain of variables to consider — lane detection, lane following, signal detection, obstacle detection etc. In our experiment at ThoughtWorks, we attempted to solve one of these problems: training a typical sedan-like car to autonomously park itself in a parking lot through XR simulation.\nThe experiment only produced signals such as the extent of throttle, brake and steering needed to successfully park the car in an empty lot. Other parameters like the clutch and gear shift were not calculated, as they were not needed in the simulation. The outcome was applied in the simulation environment by a car controller script.\nThis simulation was made up of two parts: the car, or the agent, and the environment within which we were training the car. In our project, the car was a prefabricated CAD model attached to a Rigidbody physics element (which simulates mass and gravity during execution).\nThe simulated car was given the following values:\n A total body mass of about 1500 kg A car controller with front-wheel drive Brakes on all 4 wheels Motor torque of 400 Brake torque of 200 Max. steering angle of 30 degrees Ray cast sensors (rays projecting out from the car body for obstacle detection) in all four directions of the car. In the real world, the ray cast sensors would be the LIDAR-like vision sensors. The picture below displays the ray cast sensors attached to the car:\nRaycast sensors on all sides of the car to detect obstacles\nRay cast sensors in action during simulation\nThe car’s digital twin was set up in the learning environment.
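To make this set-up concrete, here is a minimal sketch, in plain Python, of the agent's control limits and raycast-style observations. It is illustrative only: the actual experiment was built in Unity3D, the class and function names below are our own, and only the numeric values are taken from the configuration above.

```python
import math
from dataclasses import dataclass

# Values from the simulation set-up above; the structure and names are
# illustrative, not the actual Unity3D implementation.
@dataclass
class CarConfig:
    mass_kg: float = 1500.0       # total body mass
    motor_torque: float = 400.0   # front-wheel drive
    brake_torque: float = 200.0   # brakes on all 4 wheels
    max_steer_deg: float = 30.0   # max steering angle

def clamp_steering(requested_deg: float, cfg: CarConfig) -> float:
    """Limit a requested steering signal to the car's physical range."""
    return max(-cfg.max_steer_deg, min(cfg.max_steer_deg, requested_deg))

def raycast_observations(position, heading_deg, obstacles, max_range=10.0):
    """Cast rays in four directions around the car (front, right, back, left)
    and return the distance to the nearest obstacle on each ray, or max_range
    if the ray is clear -- a 2D stand-in for the LIDAR-like sensors above."""
    readings = []
    for offset in (0.0, 90.0, 180.0, 270.0):
        angle = math.radians(heading_deg + offset)
        dx, dy = math.cos(angle), math.sin(angle)
        nearest = max_range
        for ox, oy in obstacles:
            # Distance along the ray to the point closest to the obstacle.
            t = (ox - position[0]) * dx + (oy - position[1]) * dy
            if 0.0 < t < nearest:
                # Perpendicular distance from the ray; a near miss counts as a hit.
                px, py = position[0] + t * dx, position[1] + t * dy
                if math.hypot(ox - px, oy - py) < 0.5:
                    nearest = t
        readings.append(nearest)
    return readings

# Example: car at the origin heading along +x, one obstacle 4 m ahead.
print(raycast_observations((0.0, 0.0), 0.0, [(4.0, 0.2)]))  # [4.0, 10.0, 10.0, 10.0]
print(clamp_steering(45.0, CarConfig()))                    # 30.0
```

In the real project, readings like these, together with the car's pose, would plausibly form the observation vector fed to the neural network, and the throttle, brake and steering signals it returns would be applied by the car controller script.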
A neural network with four hidden layers and 512 neurons per hidden layer was configured. The reinforcement training was configured to run 10 million cycles.\nThe Reinforcement Learning (RL) was set up to run in episodes that would end or re-start based on the following parameters:\n The agent has successfully parked in the parking lot The agent has met with an accident (hitting a wall, tree or another car) The episode length has exceeded the given number of steps For our use case, an episode length of 5000 was chosen.\nThe rewarding scheme for RL was selected as follows:\n The agent receives a penalty if it meets with any accident, and the episode re-starts The agent receives a reward if it parks itself in an empty slot The ‘amount of reward’ for successful parking was based on: How aligned the car is to the demarcations in the parking lot If parked facing the wall, the agent would receive a marginal reward If parked facing the road (so that leaving the lot is easier), the agent would receive a large reward A training farm was set up with 16 agents to speed up the learning process. A simulation manager randomized the training environment with parking lots of varying occupancy, and the agent was placed at random spots in the parking area between episodes - to ensure the learning covered a wide array of scenarios.\nReinforcement learning with multiple agents in parallel\nThe tensor graph after 5 million cycles looked like this: Not only had the agent increased its reward, it had also optimized the number of steps needed to reach the optimal solution (right-hand side of the graph). On average, it took 40 signals (like accelerate, steer, brake) to take the agent from the entrance to the empty parking lot.\nHere are some interesting captures taken during training: Agent has learnt to park on both sides of the parking lot\nAgent learnt to make elegant maneuvers by reversing\nAgent learnt to park from random locations in the lot\nWay forward We’ve just looked at how digital twins can effectively fast-track the production of artificial intelligence for use in autonomous vehicles at the prototyping stage. This applies to the industrial robotics space as well.\nIn robotics, specifically, the digital twin approach can help train better path-planning and optimization for moving robots. If we model the rewarding points around the Markov decision process, we can save energy, reduce load and improve cycle time for repetitive jobs - because the deep learning mechanisms maximize rewards in fewer steps where possible.\nIn conclusion, XR simulation using digital twins to train ML models can be used to fast-track the prototyping of AI for industrial applications.\nThis article is originally published at ThoughtWorks Insights and derived from the XR Practices Medium Publication. It is co-authored by Raju K and edited by the Thoughtworks content team.\n","id":114,"tag":"xr ; ar ; vr ; simulation ; autonomous ; vehicle ; thoughtworks ; enterprise","title":"Training autonomous vehicles with XR simulation","type":"post","url":"/post/training-autonomous-vehicles-in-xr/"},{"content":"The year 2020 was a year of change. It may have been painful for some, but at the same time, it taught us lessons in life. I have written about it earlier: we have to accept it, and it will be remembered and will have an impact on our whole life and beyond, no matter what we do. So it is important that we live the present fully.\n Change is the only constant; the earlier we accept it, the earlier we get ready for the future.
2020 is a year of change; accept it, change your lifestyle and get ready for the future. - 2020 - The year of change\n I am ending this year by reading the book “Power of Will” by Frank Channing Haddock. It explains the power of the Will:\n The strong will is master of the body. We can achieve above and beyond if our actions are directed by a strong and right will.\n The author talks about educating the Will by cultivating memory, imagination, perception, and self-control. There are a few exercises mentioned in the book that may help train the Will. I can relate this to a few instances from the year 2020 where my Will played a bigger role, and where, unknowingly, I was training my Will through observation, imagination, reasoning, the physical senses, and action. In this article, I reflect on them; in 2021, I will focus on consciously training my Will. I believe we all have such instances: list them down, and educate the Will. With a well-trained Will, we can achieve much more than we think.\nA Better Start Always plan for a better start; it sets good motives to boost the Will. We started with a quickly planned trip to the Ganga at Rishikesh, a holy place, with my cousins. They traveled 500 km to join me, and from there we drove to Rishikesh overnight. We had been observing the situation at hand, and a few motives, being together, thoughtful conversations and the imagination of driving near the riverbed, boosted our Will so strongly that we decided on the trip in just a few minutes, and the rest of the planning was done on the go.\nThis trip definitely gave us positive vibes that helped us face the challenge of 2020 strongly and come out stronger. We were very clear about the flexibility needed for such a barely planned trip. This gave us the capability to handle the unplanned. The positivity of the trip remained throughout the year, and Ashok Ostwal and Gourav Ostwal (in the picture) stayed employed in 2020. Books also help in building points of view and strengthening thoughts; read here my take, Success Mantra from Amazon, on changing to become more successful, from “What Got You Here Won’t Get You There” by Marshall Goldsmith.\nChange of Workstyle Who does not know Covid19! The next big thing, the nationwide lockdown, and restrictions forced all of us to think differently. Things that were occasional or non-preferable became the norm, and maybe the preferable way of working for many — Work From Home. This change happened due to a strong Will to survive and defeat Covid19. We are still educating the Will to deal with it, but we are now much stronger than before.\nSee how my day has changed! From kids’ school to whole-day video calls, to energizers and more…\n Covid19 has challenged the way we eat, the way we commute, the way we socialize, and the way we work. In short, it… - XR development from home\n HumansOfRemoteWork at ThoughtWorks also covered my story.\n Kudos to the whole team for their strong Will power! Now we as a team are well placed, and continuously cultivating our Will for the future.\nFinding the ways Work from home and Covid19 have not just changed the way we work but also the way we live at home. Our ways of celebration have also changed. I have written about this change in an earlier article.\n Such changes have given me more points of view and the conviction that there are always alternatives; life does not stop at the current situation, it must go forward.
I consider all such instances boosters to the mind that grow my will power.\nThere is always a lot to uncover in human potential; read about it in the article below, and see how family members uncovered their potential in 2020.\n Covid19 has challenged the way we work. I have penned my earlier article to cover my workday and our shift toward the new… - Uncovering the human potential\n Since the first change starts from the self, I recorded my observations in the above article. All these are now good material to boost my will power. The Will keeps educating itself naturally; however, if we consciously put effort into training the Will, the results are more fruitful.\nSharing and Caring Another important thing that helped me stay highly motivated throughout 2020 is a newly developed habit of sharing and caring. I have written more than 20 articles on thinkuldeep.com, ThoughtWorks Insights, XR Practices, TechTag.de and Dreamentor, around technology and motivation. I participated in 10 events as a speaker, guest lecturer, mentor, and juror on platforms like Google DSC, HackInIndia, Smart India Hackathon 2020, Techgig, Geeknights, The VRARA, GIDS and Nasscom.\nApart from this, I have been a mentor at PeriFerry, and this has been a life-changing experience. Actually, we learn more by sharing more. Here is my learning equation - The Learning Equation\n The learning equation — Learning is proportional to Sharing; Learning ∝ Sharing\n These instances give me good motives and strengthen my Will to keep sharing. It is a continuous process.\nWork from Native Home By now, work from home has become the norm. I started working fully from home in Gurgaon, where we have an office, but later I shifted to my native place, 500 km from the office. I got to spend this much time with my parents and natives after almost 20 years; it is an all-new experience, with the kids also enjoying non-metro city life that is closer to nature. This change enabled me to change my lifestyle and focus on healthy living.\nI became a cyclist, with a 1000 km run so far; the Will is already trained to the next level, and I hope to continue the same. Keep moving, for only then can you balance life. Watch the video with Ashok Ostwal, the mega cyclist.\n Home Renovation Last but not least, after moving to my native place, I observed that the home, now 25 years old, required repair and also some car parking space. This definitely required real willpower, not just from me but also from each family member here, as renovation means a lot of disturbance to daily life. I followed the principles of educating the Will and designed the home in 3D. It not only helped me observe each change well but also imagine the future; the next step was to train the will of the family by letting them imagine the future too.\nRead my article here to know how technology helped me achieve this. XR enabled me to renovate my home!\nOther motives that got added to strengthen the Will were employment for the needy during these Covid19 times and keeping the family busy. It was challenging to follow Covid19 safety protocols while continuing the construction work alongside office work; however, as I said, with a strong Will we can achieve much more than we think. I must thank everyone who has supported this directly and indirectly and borne with me.\nThis is definitely a life-changing moment for me that has trained my Will.
Once things settle, I may move back to working from the office, or from a home near the office, but this learning will remain with me and keep motivating me.\nThese were the few instances I consider contributors to my Will power. They have given me a better perspective, imagination, observation, reasoning and physical sense, and finally provoked action. Most of these contributed to my Will unconsciously, but in the future I will consciously exercise Will cultivation, to get better at it. I suggest reading the book “Power of Will” by Frank Channing Haddock, and sharing your thoughts.\nThank you, everyone, and I wish you strong Will Power in the year 2021.\nUpdated in 2021\nStart of 2021 with anniversary celebration This article is originally published at Medium\n","id":115,"tag":"thoughtworks ; motivational ; book-review ; work from home ; covid19 ; change","title":"Stories of Will Cultivation from 2020","type":"post","url":"/post/will-cultivation-2020/"},{"content":"SVNIT Surat has organized DotSlash 4.0, a 30-hour-long hackathon. More information here - https://www.hackdotslash.co.in/\nI participated in this event as a Judge.\nThe Event Details - 30-31st Jan 2021 It was a really proud feeling to hear interesting ideas from the young innovators, using technology to find solutions during Covid and post-Covid: ideas to track and optimize carbon generation even from digitization (emails, internet and more), to help the specially-abled communicate using technology, to build digital restaurant menus for a contactless experience, to make online exams cheat-proof, and many more.\nThank you, participants and organisers.\n","id":116,"tag":"hackathon ; event ; nit ; students ; dsc ; judge","title":"Judge - DotSlash 4.0 - Hackthon by NIT Surat","type":"event","url":"/event/dotstash_2021_hackathon/"},{"content":"Covid19 is forcing us into a ‘new normal’: a new normal that makes most of us work from home, and that leads to spending long hours on Zoom or other video conferencing platforms. Meetings, inceptions, discovery workshops, conferences, and webinars are happening in the same monotonous ways. We are stuck with a screen in almost the same pose and posture. Some might use their mobile phone or tablet, but that limits us to a small screen. We do miss our office days and face-to-face meetings: the brainstorming, retrospectives, daily stand-ups and other important workshop exercises. Using physical boards and whole office walls, and seeing our colleagues’ and clients’ body language, was so normal pre-Covid. Now, even as offices are re-opening, it will be difficult to go back to business as usual while maintaining the social distancing norms. We have to face it: there is no option other than shaping what the ‘new normal’ means for our ways of working and interactions. We need touchless and contactless experiences that lead to the same results, and great digital products as well as customer experiences.\nCan XR help here? XR is moving from nicety to necessity. At ThoughtWorks, we brainstormed on what new ways of working, ones that may be more engaging, interesting and productive, could look like. One of those experiments is ThoughtArena, where users can collaborate in 3D space. It is an augmented reality-based mobile app that allows users to create a space with virtual boards in it.\nThoughtArena allows its users to extend traditional 2D workspaces into real-world environments. Users can place virtual whiteboards in the 3D environment and use them like physical boards.
They can place those boards wherever is comfortable and work on them in real time with other users. In addition, users can add audio and video notes and may vote on the notes.\nThis article explains our learnings around building spatial applications and also presents the results of the user experience and behaviour analysis of the XR application. We have done a couple of experiments and project deliveries in XR; learnings from them can also be found here.\nBest Practices for Product Definition The XR tech evolution is happening at great speed, and a number of products are being built to meet different business needs. XR-based products are still evolving, and there are not enough standards and practices to validate product maturity. Here are our learnings from defining the product features for ThoughtArena.\n Go for Lean Inception - Inception is typically the phase where we define the product and its roadmap, as well as align on the big picture. However, after the initial meetings with stakeholders, we realized the need for a lean inception approach, which gives us a quick start and iteratively discovers the MVP. We planned a few remote meetings with stakeholders from business, marketing and technology groups, and came up with an initial feature list. We bet on Bare Minimum Viable Product (BMVP) features and a product vision in the first couple of days.\n Evolutionary product definition - We knew we had signed up for a product that would evolve with time, and that we would need a plan that enables evolution. In ThoughtArena, we have included many new features since inception, and deprioritized others, based on user expectations and needs.\n Recruit a diverse user group - One of the enablers of product evolution is a user group with diverse skills and experience. We recruited a user group diverse in ethnic as well as professional background, including members from IT, Operations, Delivery, Experience Design, Marketing and Support. The group helped us strengthen the product definition and vision. They helped us think about newer ways of interacting with the 3D boards, such as adding multimedia, which is typically not available on a physical or digital wallboard.\n State the Risky Assumptions - Defining a set of hypotheses for risks and assumptions and then validating them is valuable for a product, especially in an emerging technology space. We stated the following risky assumptions for ThoughtArena:\n 3D experiences may be more enjoyable and better than 2D experiences Spatial (3D) experiences may improve creativity and productivity XR tech is ready for prime time. It may be a remedy for Zoom fatigue. XR products may leave a long-lasting impact on users. Plan to Validate the Risky Assumptions - Once we know the risky assumptions, we should plan to validate them quantitatively and qualitatively.\n Is the product a 3D/spatial fit? Is the spatial experience enjoyable? Does the product really solve the user’s problem? Map the user’s mental model of a physical collaboration space - Users should be able to relate the experience of the product to the flexibility and intuitiveness of a physical collaboration space.\n Focus on lifecycle and diversity of content (creation, reorganization, consumption) - The tool should assist users in the different stages of the content lifecycle: Creation (editing, duplication, look and feel of content), Reorganization (grouping, sorting, moving), and Consumption (updates, viewing real-time participation, tagging, commenting etc.).
The users should also be allowed to choose from a diverse range of content like freehand drawing, shapes, flowcharts, tables etc.\nUnderstand the user journeys for different use cases and focus on mapping them to end-to-end functional flows. The product should aim to provide more transparency to view and manage collaborators and activity across the spaces and boards.\n Adapt the experience to the user’s context - Identify the relevant use cases to allow users to engage using their mobile devices. The tech should be stable enough to adapt to the user’s need to move around or stay stationary without impacting the experience.\n Market analysis and coverage - Market analysis of similar products is important to understand what is already out there. It also helps prioritize the product features. We analyzed XR collaboration products such as Spatial.io and Big Screen, as well as popular products like Jamboard and Mural. This analysis enabled us to prioritize features such as audio, voice-to-text and video notes, and to target Android and iOS/iPadOS, which in turn would help cover a larger user base.\n Best practices of User Experience Design The “Reality” is what we believe, and we believe what we see, hear and sense. So “Extended Reality (XR)” is about extending or changing that belief. Once users get immersed in the new digital environment, they also expect to interact with the digital objects presented to them in the same or similar ways as they interact with real objects in the physical world.\n A minor glitch in an AR application can break the immersive experience, and users may stop believing in the virtual objects. It is all the more important to keep the user engaged all the time.\n AR Applications need great onboarding — As AR is a new technology, most first-time users don’t know what to expect. They are only familiar with 2D interactions, and they will interact with the 3D environment in the same way while using a mobile device. This often results in interactions with the digital content in AR not being intuitive.\nEg. In the ThoughtArena app, users did not realise that they needed to physically move around or move their phone or tablet to be able to work on the virtual board. We needed to design a good onboarding experience that properly communicated and educated the users about these interactions, to help them get familiar with the new experience.\n We need to build hybrid user interactions of both digital and physical worlds, keeping the best of both. Eg. During the user testing phase of the ThoughtArena app, we saw that many users would like to use the gestures and patterns that are familiar to them. Users want to zoom in and out on the board or board items; however, there is nothing like zooming in and out in the real world. In the real world, in order to see things more clearly, we just move closer to the objects. Similarly, on a real board, editing is done by removing an item and adding it again; in a virtual board, however, users expect edit functionality on the added objects. For the best experience, we should include the best features of both 2D and 3D.\n Understanding the environment — As AR technology lets you add virtual content to the real world, it’s important to understand the user needs and their real-world environment. While designing an AR solution, consider what use cases, lighting conditions or environments your users will be using the product in. Is it a small apartment, a vast field, or a public space?
Give users a clear understanding of the amount of space they’ll need for your app and the ideal conditions for using it.\n Be careful before introducing a new interaction language. XR interactions themselves are new to a lot of users. We realized that at this stage it would be too much for the users to introduce a completely new language of interaction. We followed Jakob’s Law of UX and continued the 2D board-and-sticky paradigm in the 3D environment, trying to keep interactions such as pasting notes, moving them from one place to another, removing them, etc., similar to what we do in 2D.\n We also researched whether a 3D arrangement of content is always better than a 2D arrangement, and found that spatial cognitive memory comes into play when objects are distinct in look and feel and fewer in number; with a large number of similar objects arranged in 3D, the user interface looks more cluttered. Ref: 2D vs 3D\n Text input is a challenge in XR applications. If one takes text input in the spatial environment, then a 3D on-screen keyboard needs to be shown to the user, and it is hard to take input from there. Another way is to take text input with a 2D keyboard from the device, but that breaks the immersive experience. The user interface needs to be designed to provide pseudo immersion.\n Define product aesthetics — Define the color, theme and application logo. We conducted a survey to gather opinions on different color themes. Based on the environment, the color of a UI component may be adjusted for better visibility.\n XR spatial apps may allow users to roam around in the space. This may be a remedy for Zoom fatigue and may break the monotonous style of working. It may also make people more creative or attentive, or at least lead to a healthier work style. At the same time, we need to be careful while designing interactions, for the following reasons:\n Too much movement may cause physical fatigue. It may be fatal if the user is in an immersive environment and not conscious of the real environment while interacting with virtual objects. Phone-based XR applications where the user needs to hold the phone in an awkward position for a long duration can cause strain in the hand. Understand human anatomy while designing user experience in XR.\n Field of view (FOV) — The field-of-view is all that a user can see while looking straight ahead, both in reality and in XR content. The average human field-of-view is approximately 200 degrees with comfortable neck movement. Devices with a smaller FOV provide a less natural experience. Field-of-Regard (FOR) — This is the space a user can see from a given position, including when moving eyes, head, and neck. Content positioning needs to be carefully defined, with provision to change the depth as per the user’s preference. The human eye is comfortable focusing on objects half a meter to 20 meters in front of us. Anything too close will make us cross-eyed, and anything further away will tend to blur in our vision. Beyond 10 meters, the sense of 3D stereoscopic depth perception diminishes rapidly until it is almost unnoticeable beyond 20 meters. So the comfortable viewing distance, where we can place relevant content, is 0.5 to 10 meters. Neck/Hand movement — We need to position the primary UI elements in the direct FOV without neck/hand movement; secondary UI elements can be placed so as to need neck/hand movement. UI elements placed beyond this range require physical movement.
Arm length is another vital factor. It is essential to consider arm reach while designing UI; try to place fundamental interactions within this distance. Best Practices of XR Development XR development introduces us to a new set of technologies and tools. Standard development practices such as Test Driven Development, eXtreme Programming and Agile development help build a better product. XR is influenced a lot by gaming, and many standard practices for enterprise software development may not be straightforward to apply.\n Plan for spikes - XR is still fairly new, so it is better to plan for spikes to uncover the unknowns.\n CI/CD first - Having a CI/CD setup makes the development cycle very smooth and fast. Investing some time in the beginning to set up the CI, so that it is able to run the test suite and deploy to a particular environment with minimal effort, saves a lot of time later on. We used CircleCI for building both the Unity app and the server side app. For the server side, we integrated with GCP. We maintained different environments on GCP itself, and CircleCI handled the deployment to those environments.\n Quick feedback cycles - We followed a 2-week iteration model and had regular showcases at the end of each. The showcase audience consisted of a diverse user group (it is important to have a user group for better inputs; see the previous articles of this series), which helped us a lot in covering all aspects of the app, the features, the learnings, and the process. All the feedback helped a lot in making our app better. Having regular catch-ups and IPMs made sure we had the stories detailed and ready to be picked up for development.\n Learn the art of refactoring - A lot of times, refactoring takes a backseat while delivering POCs or during tight deadlines. We did not let that happen here. The team made sure to keep refactoring the code as we moved along. In order to refactor safely, one must have a test suite to support it, and we made sure that we had good code coverage and scenarios covered through our tests (unit and integration tests). There will always be scope for more refactoring; the trick is to keep doing it continuously and in small bits and pieces, so we do not end up with a huge amount of tech debt.\n Decision analysis and resolution - Collect facts and figures and maintain decision records.\nFor example, the following decisions had to be taken for the ThoughtArena app.\n Unity for the mobile apps — With Unity and its AR Foundation API, we were able to develop our app for both Android and iOS devices with minimal extra effort. The built-in support for handling audio and video also helped speed up the development process. Java and Micronaut for the server — We decided to work in the Java ecosystem based on the tight schedule we had and the familiarity of the team with the Java ecosystem. We chose Micronaut for its faster startup times and smaller footprint. Also, we were focusing mainly on APIs and deploying on the cloud, so we felt it to be the right framework for us. Websockets for real time communication — There are multiple ways to go about the real-time communication aspect. The first thing that comes to mind is a websocket, an open TCP connection between server and client to exchange messages and events in a fast manner. There are other options as well to achieve real time, like queues, some DBs, etc.; a minimal client-side sketch follows below.
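To make the websocket choice concrete, here is a minimal client-side sketch. This is not ThoughtArena’s actual client; it assumes a .NET 4.x-style runtime where System.Net.WebSockets is available (as in recent Unity scripting runtimes), and the endpoint URL and message shape are made up for illustration.

using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

public static class BoardSyncClientSketch
{
    // Connects, sends one (hypothetical) note-update event, and reads one reply.
    public static async Task RunAsync()
    {
        using (var ws = new ClientWebSocket())
        {
            // Hypothetical endpoint; a real client would also handle reconnects.
            await ws.ConnectAsync(new Uri("wss://example.com/board-sync"), CancellationToken.None);

            var msg = Encoding.UTF8.GetBytes("{\"type\":\"note-moved\",\"noteId\":\"n1\",\"x\":0.4,\"y\":0.2}");
            await ws.SendAsync(new ArraySegment<byte>(msg), WebSocketMessageType.Text, true, CancellationToken.None);

            // Receive a single message; real-time sync would loop here instead.
            var buffer = new byte[4096];
            var result = await ws.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
            Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, result.Count));
        }
    }
}

The sketch deliberately ignores reconnection and message framing concerns, which any production client would need to handle.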
We went ahead with websockets as this does not require any additional infrastructure to get it up and running, and it fits our use case as well: both client and server send messages to each other in real time. All our data is structured and straightforward, so it was easy for us to choose a relational database for transactional data. Tech evaluation - Make sure there is a plan for tech choice evaluations, to check if the tech is ready for prime time. We evaluated multiple tech choices, for example cloud anchors, environment scanning, video playback, real-time sync, session management, maintaining user profile state locally, etc.\n Extendable design — Separate the input system from the business logic, so that multiple input mechanisms can be integrated, such as click, touch, tap, double tap and gaze. We used Unity’s event system as a single source of event triggers.\n XR tech considerations - AR takes a bit of time to initialize and learn the environment. The speed of this initialization depends on the hardware and OS capabilities.\n Tracking of the environment goes haywire when the device shakes a lot. It is hard to control the AR system, as it is a platform-specific capability. Longer use of AR may make the hardware hot and also consumes battery, as AR algorithms are really CPU intensive. AR tech is getting more mature year over year, but good hardware is still a dependency for this technology. AR cloud anchors are at an early stage but maturing rapidly. They need proper calibration; in simple terms, the user needs to do a little more scanning to achieve good results (to locate the points). We had to de-prioritize this feature due to its instability. Real time collaboration considerations\n Define how long the session should be. Handle disconnections gracefully. Define an approach for scaling sessions; we had to introduce a distributed cache for managing sessions. Cloud considerations — We used Google App Engine (Standard environment) for our app because of its quick setup, Java 11 support and auto-scaling features. Also, GCP handles the infrastructure part in the case of GAE apps. However, the Standard App Engine does not support websockets, so we had to switch to the Flexible Environment, which does not support Java 11 out of the box; to keep things on Java 11, we had to provide a custom docker configuration in the pipeline. Best Practices of XR Testing Our learnings from developer-level testing of XR applications:\n We learned that most devices have software emulators which can be integrated with XR development tools such as Unity; this helps developers test the logic without a physical device. We realized that 70% of the functionalities of the ThoughtArena app didn’t require a physical environment/movement, like adding/moving stickies, playing videos, making API calls to get data, displaying the list of boards or members in a space, and adding/deleting boards. All these could be tested in the editor’s instant preview. Cases where physical movement was required, like the pinning and unpinning of boards, were the ones that could not be directly tested in the editor without manually rotating the user’s POV from the inspector. Things that had platform-specific plugins, like uploading an image or saving a snapshot of a board to local storage, could not be tested in the editor.
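As a flavour of what this editor-level testing can look like, here is a minimal edit-mode test sketch using Unity’s NUnit-based Test Framework. The Board component and its AddNote API are hypothetical stand-ins for illustration, not ThoughtArena’s actual code.

using NUnit.Framework;
using UnityEngine;

public class BoardTests
{
    [Test]
    public void AddingANote_ParentsItToTheBoard()
    {
        // Arrange: a bare GameObject carrying the (hypothetical) Board component.
        var board = new GameObject("Board").AddComponent<Board>();

        // Act: add a sticky note at a local position on the board
        // (hypothetical API returning the created note component).
        var note = board.AddNote("hello", new Vector2(0.1f, 0.2f));

        // Assert: the note exists and lives under the board in the hierarchy,
        // all without a physical device or AR session.
        Assert.IsNotNull(note);
        Assert.AreEqual(board.transform, note.transform.parent);
    }
}

Tests like this run in the editor on every change, keeping the device-independent 70% of the app on a fast feedback loop.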
Unit testing - The Unity engine comes with a great unit testing framework, and by plugging in a mock testing framework, we can test units very well.\n Features dependent on API calls — We had an internal mock server set up early in development to reduce dependency on the APIs. Once the API contracts were settled on, we could write tests for them and continue development even while the APIs were under development. Board/notes interactions were tested by mocking user inputs such as click/touch. Interactions requiring movement, such as pinning/unpinning of boards and moving stickies, were simulated via mocking. 2D UI interactions and user flows were covered too. Automation testing - We extended the Unity Editor’s unit testing framework and built a functional test suite that can do integration tests and end-to-end tests for some scenarios on the plugged-in device. We have open-sourced an automation testing framework, Arium; have a look here.\n Integration Testing - We have an integration testing suite for the backend and for testing the APIs. Integration testing also helped us test the websockets and whether they were working in the intended way. These tests run on the CI after every push we make to the backend repo. Since our code was also using GCP storage, we needed to make sure that we were able to replicate that behaviour. We found that the GCP storage APIs provide a test storage which mimics the storage locally, and we used that in our integration tests.\n Functional Testing - Since XR depends on the environment, testers need to test the functionality in different environments: different lighting conditions, noise levels, indoors, outdoors, and while moving in the environment, to test the stability of functionalities.\n Acceptance criteria for XR stories are quite different from what we see in general software development stories; a lot of factors need to be considered.\n Keep the user immersion intact while working. Ensure acceptable performance under different environmental conditions. A functionality working very well may not work as well when the environmental conditions change; the user may then lose immersion, and the XR app will not meet expectations. Testers must consider spikes to define the correct acceptance criteria. The impacting factors may not be known beforehand; for example, what would be the impact of rain on a business story that requires outdoor testing? Acceptance criteria also need to evolve as the product evolves. Make sure the test team has supported devices with the required resolutions; plan for it from the start.\n User Experience Testing Plan for user experience testing to get user insights and validate the hypotheses with users. It helped us better understand the impact, benefits, limitations and flexibility of our spatial solution. The user experience testing also focused on the aspect of collaboration and on assessing the effectiveness of the developed concept in terms of usefulness and usability. It can be divided into two parts: user research and usability testing.\n User research covers an as-is analysis of the user, user interactions and the tools the user uses relevant to the problem statement of the solution, and also notes down user expectations from the changing environment. Usability testing covers how well the solution engages the user, and focuses on the usability and usefulness of the solution. The next section describes the results of user experience testing. User experience testing should cover quantitative and qualitative analysis.
Here are our key learnings:\n Recruit users for dedicated testing sessions with the UX analysis team. Recruit a minimum of 9 users. Plan user interviews before and after the testing sessions, and observe user behaviour during the testing. Recruit independent test users, and ask them to share their observations in a survey for quantitative analysis. Recruit users with diverse backgrounds, experiences and familiarity with XR. Define a test execution plan with a set of scenarios for dedicated user testing sessions, and try out mock sessions to see if the plan is working and we are getting the expected inputs. Define detailed questionnaires for independent testers. Collect observations only after a few trials of the app; otherwise we might not get correct feedback on the app features if the app onboarding/user training is not that great. Conclusion XR tech is evolving at such a rate that the untapped potential boggles the mind. XR business use cases are growing with the convergence of AI and ML. XR devices are now able to scan and detect environments, and can adjust to them. The standards and practices are growing from the learnings and experiments the industry is doing. We have shared our experience from building XR applications.\nAuthors and contributors: Kuldeep Singh, Aileen Pistorius, Sumedha Verma and Team XR Practices at ThoughtWorks. XRPractices\nThis article is originally published at Medium\n","id":117,"tag":"xr ; thoughtworks ; technology ; ar ; vr ; mr ; best practices ; experience ; learnings ; covid19","title":"Best Practices for Building Spatial Solutions","type":"post","url":"/post/xr-best-practices/"},{"content":"NASSCOM is the premier trade body and chamber of commerce of the tech industry in India and comprises over 2800 member companies, including both Indian and multinational organisations that have a presence in India. The Center of Excellence - IoT and AI is an initiative from NASSCOM and a collective effort with different state governments in India.\nNASSCOM organized the Manufacturing Innovation Challenge 2020 from 12th Oct to 11th Dec, on the theme of redefining enterprise and entrepreneur engagement.\nI was a mentor and juror at the event.\n Program details here https://gujarat.coe-iot.com/broadcast/manufacturing-innovation-challenge-2/\nIt was great to participate in the mentoring and evaluation rounds, listening to ideas from start-ups on reviving industry from Covid19 by improving efficiency and effectiveness using technology.\nWe discussed interesting solutions such as augmented reality-based quality inspection clubbed with artificial intelligence and IoT, improving energy efficiency and providing actionable insights\u0026hellip;\nListen more and register for the exclusive e-conference, and see the winners here\u0026hellip;\n And here are the winners - ","id":118,"tag":"iot ; ai ; xr ; ar ; mentor ; event ; hackathon ; judge ; nasscom","title":"Mentor and Juror - MIC 2020 by Nasscom","type":"event","url":"/event/nasscom-iotcoe/"},{"content":"An interesting conversation on eXtended Reality (XR) with Eddie Avil, starting from my background, how I landed up in the XR space, and where we are heading…\nXROM PODCast XROM PODCast is a YouTube channel maintained by serial entrepreneur, dreamer \u0026amp; believer Mr. Eddie Avil. The channel has a number of interesting conversations with industry leaders in the emerging technology space.
Eddie Avil also runs a marketplace for AR VR hardware and services; visit https://www.xrbazaar.in/ for more details.\nWatch and listen to the podcast here.\n Here are the highlights of our conversation at the PODCast\nMy background My IT career started right after graduating from NIT Kurukshetra in 2004. I have been a techie bridging the gap between tech and business throughout my 16+ years of journey with Quark, Nagarro and ThoughtWorks. We discussed what excited me about picking the XR space… Opportunities and challenges in ARVR ARVR (or XR) has been around for quite some time now, even before the Microsoft Hololens, Oculus and Google Glass of the world, and the trends now say that XR will be explored most by enterprises, more than the consumer segment, for the next couple of years. XR technology is reaching its tipping point, and businesses are seeing it as one of the saviors for recovering from Covid19. Upskilling the workforce, remote collaboration and simulation of work environments are major use cases. We discussed a number of opportunities along with the challenges lying there… Role of Technology in changing Human Behaviour Technology becomes part of life once it starts solving a real pain area; see how Uber and similar others have revolutionized commuting: now there is no stranger, and we feel more secure than before due to technology. We talked about how people will be wearing technology, and how the mobile phone will remain the compute unit for other wearable devices in the future. I feel technology will go closer and closer to the need: if my eye wants to see something, technology needs to go there; if my mind needs something, technology should go near the head, or inside it…\nVirtual over Physical We discussed how advancements in AI and 5G technologies will fuel XR use cases. Digital twins with AI-powered behaviour will be the future: when we can try things in the virtual world, why waste time and money trying them with physical things? There have already been great investments in this direction by the Apple, Microsoft, Google and Facebook of the world…\nXR Practice at ThoughtWorks We discussed how we at ThoughtWorks are trying to solve real-world problems using technology. Being a pioneer in Agile software development, we are building practices and sharing our thought leadership for enterprise XR product development; some points of view from our team are here …\nBe the partner of future We finished our first discussion with a call for the industries to come forward and be partners of the future, collaborate and share learnings, and make the world great…\n ","id":119,"tag":"xr ; ar ; vr ; interview ; event ; xrom ; thoughtworks ; AI ; 5G ; practices ; iot","title":"eXtended Reality for Enterprise - a PODCast by XROM","type":"event","url":"/event/xrom-podcast-enterprise-xr/"},{"content":"Extended Reality is not only meant for humans; it can very well serve machines by producing digital twins. Digital twins encapsulate real-world elements with all their important physics characteristics, enabling researchers to produce the large volumes of training data needed for reinforcement learning and CNN-based deep learning algorithms; a minimal sketch of such a learning agent follows below.
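To give a feel for how a digital twin feeds a reinforcement learning loop, here is a rough C# sketch of a parking agent, assuming the Unity ML-Agents package. It is illustrative only; the ParkingAgent name, the simplified steering physics and the reward shaping are assumptions, not the actual GIDS prototype.

using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using Unity.MLAgents.Sensors;
using UnityEngine;

// Hypothetical agent that learns to reach a parking slot inside a digital twin.
public class ParkingAgent : Agent
{
    [SerializeField] private Transform parkingSlot; // target pose in the twin
    private Vector3 startPosition;

    public override void Initialize()
    {
        startPosition = transform.position;
    }

    public override void OnEpisodeBegin()
    {
        // Reset the car to its starting pose for each training episode.
        transform.position = startPosition;
        transform.rotation = Quaternion.identity;
    }

    public override void CollectObservations(VectorSensor sensor)
    {
        // Observation: where is the slot relative to the car?
        sensor.AddObservation(transform.InverseTransformPoint(parkingSlot.position));
    }

    public override void OnActionReceived(ActionBuffers actions)
    {
        // Two continuous actions: steering and throttle (deliberately simplified).
        float steer = actions.ContinuousActions[0];
        float throttle = actions.ContinuousActions[1];
        transform.Rotate(0f, steer * 30f * Time.deltaTime, 0f);
        transform.Translate(Vector3.forward * throttle * 2f * Time.deltaTime);

        // Reward shaping: small penalty by distance, big reward on arrival.
        float distance = Vector3.Distance(transform.position, parkingSlot.position);
        AddReward(-distance * 0.001f);
        if (distance < 0.5f)
        {
            AddReward(1f);
            EndEpisode();
        }
    }
}

Because the twin emulates physics, thousands of such episodes can run in simulation, producing the volume of training data the paragraph above refers to.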
Extended Reality environments are mature today and are capable of producing photorealistic rendering and physics emulation of the real world just by using digital twins.\nRaju Kandaswamy and I are presenting at the Great International Developer Summit (GIDS), which is going online this time, and we will discuss and showcase a case study on producing a prototype AI for autonomous car parking using Extended Reality and Reinforcement Learning.\nHighlights Introduction to XR - Beyond Keyboard and Screen How do the machines learn? Digital Twins and XR eXtended Reality for Machines Buy Tickets Find details of the session and on-demand videos after the session here.\nhttps://wurreka.com/ict/virtual-conference/lang/session/extended-machine-learning-with-xr\nSee you on 5th Nov 2020 at 06:30AM IST ","id":120,"tag":"xr ; thoughtworks ; developer-summit ; talk ; event ; speaker ; gids","title":"Speaker - GIDS - Extended Machine Learning with XR","type":"event","url":"/event/gids_2020_xr4ml/"},{"content":"This article is originally published at TechTag.de; you may find the German version here too.\nThe interaction between humans and technology is evolving radically. We need to develop our thinking and skills beyond the keyboard and screen. But what does this new technological reality look like? Here it is worth taking a closer look at the different forms of extended reality and how they can change our everyday lives.\nThe way we interact with technology today isn\u0026rsquo;t too far from what was predicted by futuristic sci-fi pop culture. We live in a world where sophisticated touch- and gesture-based interactions with wearables, sensors, augmented reality (AR), virtual reality (VR) and mixed reality (MR) work together. This creates an extremely immersive user experience.\nGartner predicts that the implementation of 5G and AR/VR will transform not only how customers interact, but also the entire product management cycle of brands. In fact, according to Deloitte, it is expected that in 2024, VR hardware and applications will generate sales of 530 million euros in Germany, and in the best case even 820 million euros, if new applications or gadgets conquer the market.\nIn order to better understand what influence extended forms of reality can have on our everyday lives, and what opportunities companies and users have to interact with the world around them, it is first necessary to clarify the various terms used in the discussion.\nVR, AR, MR - the variety of realities Forms of Extended Reality\nVirtual Reality (VR) - the best-known term - is an immersive experience that opens up completely new worlds by allowing users to immerse themselves in a virtual, computer-generated world with innovative forms of interaction. The technology can be used with Head Mounted Display (HMD) devices. Smartphone-based VR is the most economical choice for testing the technology. Google Cardboard and similar variants are quite popular in this segment. All you need is a smartphone that fits in a pre-made box. Many media players and content providers such as YouTube now offer stereoscopic 360-degree content.\nUsers testing VR devices\nAugmented Reality (AR) in turn presents digital content in a real environment. Smart glasses project digital content directly in front of the user\u0026rsquo;s eye. In the past five years, such devices have evolved from bulky gadgets to lightweight glasses with more computing power, longer battery life, a larger Field of View (FoV) and consistent connectivity.
This projection-based technology makes it possible to display digital content on a glass surface. One example of this is the head-up display (HUD), which is used by many automobile manufacturers. It projects content onto the vehicle\u0026rsquo;s windshield. Drivers can see contextually relevant information such as maps or speed limits.\nIf you combine AR and VR, you move into the area of mixed reality (MR), in which the physical world and digital content interact. Users can interact with the 3D content in the real world. In addition, MR devices recognize and store environments while simultaneously tracking the specific position of the device within an environment.\nUser testing a Microsoft MR device\nMixed reality devices such as the Microsoft Hololens enable object recognition, scene storage and gesture-controlled operations.\nThe trend towards Extended Reality (XR) can be evaluated on the basis of the components mentioned. This collective term describes use cases in which AR, VR and MR devices interact with peripheral devices such as environmental sensors, 3D printers and internet-enabled smart devices: household appliances, industrial machines, vehicles and much more. Due to the synergies of AR, VR and MR technologies for the interactive display of 3D content, as well as the modern development tools and platforms, the term XR is increasingly becoming common language. In the industry, numerous use cases and opportunities for XR platforms are currently being evaluated, especially those that enable services across devices, development platforms and company systems.\nExtended reality in its various forms is not just for gamers and tech enthusiasts. There is also a plethora of implementation options that businesses can benefit from.\nA technology with versatile potential Extended Reality can be used in a variety of areas in companies, starting with training your employees or maintaining your equipment. VR technology can create and simulate environments that are either difficult to reproduce or involve enormous costs and/or risks. Examples of such scenarios are factory training courses, training on fire protection, or controlling an aircraft. AR, on the other hand, can offer technicians contextual support in a work environment. For example, a technical specialist wearing AR-enabled smart glasses can view digital content that is displayed in their work environment. This could take the form of step-by-step instructions for a task, or looking through relevant manuals and videos while on the assembly line or on site performing a task. The AR device is also able to take a snapshot of the work environment for compliance checks. In addition, the tech specialist can stream their activity to a remote specialist, who can annotate the live environment and review the work.\nBut companies can also take advantage of the technology that allows autonomous vehicles to create virtual maps and locate objects in an environment. Similar technology enables XR-enabled devices to recognize unique environments and track the location of the devices within them. Virtual positioning systems for large facilities such as airports, train stations or factories can be created. Such a system also enables users to mark, locate and navigate to specific target points.\nXR technology can also be used in the retail sector to improve the customer experience and make it more unique. In an XR environment, customers have the opportunity to try out 3D models of products before buying them.
For example, users can customize a virtual model of a piece of furniture in terms of color, interior design and fabrics. They can also look at the piece of furniture from all sides, experience the personalization, and assess how it would look in the surroundings of their home.\nExtended Reality: Hardware and software have to go hand in hand XR technology is evolving at such a rate that the untapped potential is overwhelming. In order to fully exploit it, the development of the software should keep pace with the development of the hardware. The latter needs to be user-friendly and lightweight, with a good field of view, immense processing power and battery life.\nThe software, on the other hand, needs to make better use of the advantages of artificial intelligence and machine learning in order to better assess different environments and make them accessible for every possible scenario. Organizations should also proactively create as many use cases as possible, fix deficiencies, and contribute to XR development practice. VR applications are a reality that prepares us for the pronounced effects of AR and MR technology in the future.\nThis article is derived from ThoughtWorks Insights and originally published at TechTag.de\n","id":121,"tag":"xr ; ar ; vr ; mr ; fundamentals ; thoughtworks ; technology","title":"eXtended Reality - the new norm?","type":"post","url":"/post/extending_reality_new_norm/"},{"content":"The VRAR Association\u0026rsquo;s Product Design and Development (PDD) Communities of Practice (COP) brings its first webinar on \u0026ldquo;Essentials of XR Product Design and Development\u0026rdquo;.\nI will be sharing the stage with industry leader Mr. Pradeep Khanna, VR AR Association, and XR innovator Mr. Raju Kandaswamy, ThoughtWorks.\nSpread the word! Registration Only Please register at the below link, and save the date: Oct 17, 2020, 4:30 PM IST.\nhttps://www.brighttalk.com/webcast/14711/445904\nThe webinar will be available at https://www.thevrara.com/webinars\nHighlights This webinar covers the challenges of building XR products, and also the practices and guidelines to overcome them. We will share our experience of using industry practices and patterns for software development, and will cover what worked and what did not work for us; also, the similarities and differences between XR-based software development and traditional software development from a product definition, design, development, and testing perspective.\nEvent Summary 80+ participants from 10 countries (India, Singapore, Australia, USA, Canada, Finland, Jordan, Russia, Philippines, Ukraine), 30 companies (the likes of IBM, Wipro, VMWare, ThoughtWorks, Nagarro and many upcoming startups) and 5 universities/schools. Participants came from diverse backgrounds: founders, project managers, developers and students.\nThanks, everyone; slides and video are available on demand at the same link https://www.brighttalk.com/webcast/14711/445904\n","id":122,"tag":"xr ; practices ; speaker ; webinar ; event ; thoughtworks ; vrara","title":"Webinar - Essentials of XR Product Design and Development","type":"event","url":"/event/thevrara-ppd-cop-webinar/"},{"content":"Covid19 is still immeasurable, and we are more or less getting adjusted to this new normal. We are eagerly waiting to say goodbye to this year 2020 and welcome the coming year of hope, 2021.
Read more here to know why I call this year 2020 the year of change.\nIn April this year, I explained XR development from home, and that was still near-shore to the office: I was less than an hour away from the office. The year 2020 wanted more changes, and two months ago we decided to take a break from the hustle and bustle of metro life and move to my native place, Raisinghnagar, a small peaceful town in Rajasthan, India. Here, covid19 cases are almost nil, and we feel safer. Above all, we got a chance to stay with family after more than 2 decades (since my schooling days).\nThe changes continued. I took this opportunity to bring in other positive changes and started following small-town life: early rise, walking, cycling near the canal, yoga, spending time in the farms, and celebrating with family. The kids are enjoying it the most. All this I am doing while regularly working from home, keeping in touch with fellow ThoughtWorkers and my mentees from PeriFerry. Surprisingly, I am getting very good internet speed here, and have never faced any issue due to the internet. I have also hosted a few webinars from here.\n Good, but how is the XR helping…\n Ok, let me come to the point. We moved to this town around 25 years ago, and my parents are keeping up the house we built at that time. Now the building is asking for a good makeover. Any change is not easy to accept; it required me to convince my parents about the change and to bear with the inconvenience caused by renovation. XR tech made my life easy: when they saw the to-be house in XR, I got a “Let’s do it”. Once we accept a change, it is easy to travel further on the path. As I explained in my earlier article, this year 2020 will be remembered for life no matter what, so why not remember it for a positive change.\n(See the video demo at the end of this article)\nThe Need By now, you may have understood the need for renovation; it was not just because of the old construction, but also because changes were required for car parking, a separate passage for tenants on the upper floor, etc. I took charge of designing the home on weekends and continued to uncover more of my potential; read about uncovering human potential in Covid19 times. I realized the following challenges with my initial paper/picture based designs.\n Actual scale design — We measured the house, I borrowed a pencil and paper sheet from my son, and I drew a few designs with my engineering skills in ortho and isometric views; I also created some 3D designs on paper. I love all this.\n Hard to change the drawing — This is what I hate: re-drawing for every change, while the eraser also spoils the drawings. When I added color to a drawing, it added another complexity, and change became impossible on the same drawing. It is hard to show the same drawing in different colors.\n Sharable multiple options — We need multiple options with color, style, and measurements that can be shared easily. Digital options are good for this; for example, sharing pictures on WhatsApp or by other means is quite popular.\n Visualize beyond imagination — We need a medium to visualize the To-Be state easily, with pictures of the house from multiple angles and multiple locations. Ideally, I want to let viewers go into the To-Be state and feel it, rather than see things only after completion.\n Safety and Security — Renovation and structural changes are risky; any change that causes an imbalance in the structure could be fatal immediately, in the near term, or in the long term.
It needed a close review from experts, and a few options got rejected for involving too many changes, being unsafe, or being too costly.\n After designing on paper and using up all of my kid’s drawing sheets, I realized the issue with the approach I had taken: every time a change was expected, I needed to tell the mason and my parents to assume things, to avoid redoing the design. I was losing the flexibility to change, and not reaching satisfaction. See the next section on how XR helped me come out of this dilemma.\nXR solution I utilized XR tech to solve some of the above challenges; here are the steps I have taken so far.\n3D Modeling Paper drawings were good for keeping measurements at real scale; the next step I took was to design the house in 3D. 3D modeling is a basic step for an XR solution. There are a number of free/commercial tools for 3D design available, like Blender, 3ds Max, Maya, CAD, etc. There are even some concepts that can directly convert designs into 3D models. I was not comfortable with any of these and went ahead with my own skills, using plain Unity 3D objects to create a 3D model of the To-Be house at real scale. It wasn\u0026rsquo;t as difficult as I had thought. We can also get a few good building models, such as doors, windows, vehicles and kitchens, from the Unity Asset Store.\nIf you haven’t yet gone through Unity 3D, find this easy article by my colleague Neelarghya.\nJust the 3D model alone would solve many things: we can try different structures without completely rebuilding, and try different options with colors, tiles, etc. Take as many snapshots as you want, from different angles, and share them. Not just that, we can try different lighting conditions, day and night, and add some lights to see the impact. The 3D design tool allows the designer/developer to get inside the building, see it from the inside, and snapshot and share.\n You may say it is just 3D modeling, where is the XR here?\n The 3D model is more of a static design with a few animations, and the only easy way to share these designs is via pictures or videos. A 3D model walk-through needs knowledge of the editor, and I had to show it on my laptop every time anyone wanted to see it; that\u0026rsquo;s not a scalable solution. Also, it is not immersive and doesn\u0026rsquo;t give the feel. That\u0026rsquo;s where the next part comes in to fill the gap.\nBuilding an XR App to Visualize the 3D model Once we have a 3D model, we can visualize it on AR or VR devices and can roam around and inside it.
You can watch the AR-based car customization demo in the following article.\n eXtending reality with AR and VR - Part 2\n Such a concept can be built using ARCore + Unity or similar alternatives.\nFollow the steps below to build an easy visualization using Unity for AR-capable smartphones/tablets.\n[Note: — This is done in Unity 2018, but it is recommended to use Unity AR Foundation with the latest version; please go ahead and explore that]\n Set up the project for ARCore — follow the Google guidelines, and import the ARCore SDK package into Unity.\n Remove the main camera and light, and add the prefabs “ARCore Device” and “Environment Light” available in the ARCore SDK.\n Make a prefab out of the 3D model of the home created in the last step.\n Create an InputManager class that instantiates the home prefab on a touch on the phone screen, and attach it to an empty game object.\nusing UnityEngine;

public class InputManager : MonoBehaviour
{
    [SerializeField] private Camera camera;
    [SerializeField] private GameObject homePrefab;

    private GameObject home;
    private Vector3 offset = new Vector3(0, -10, 10);
    private bool initialized = false;

    void Update()
    {
        Touch touch;
        if (!initialized)
        {
            // Create the home on the first touch (device) or mouse click (editor).
            if (Input.touchCount > 0 && (touch = Input.GetTouch(0)).phase == TouchPhase.Began)
            {
                CreateHome(touch.position);
            }
            else if (Input.GetMouseButtonDown(0))
            {
                CreateHome(Input.mousePosition);
            }
        }
    }

    void CreateHome(Vector2 position)
    {
        // Instantiate the model at a fixed offset from the camera origin.
        Vector3 positionToPlace = offset;
        home = Instantiate(homePrefab, positionToPlace, Quaternion.identity);
        initialized = true;
    }
}
You can test it directly on an ARCore-compatible Android phone. Connect the phone via USB and test it with ARCore Instant Preview; it needs some extra services running on the device. Alternatively, you can just build and run the project for Android; it will install the app on the device, and you can play with your XR app.\n I have not included plane detection features in it, but you may include them to make it more real; a rough sketch follows below.
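For completeness, here is a rough sketch of what that plane detection could look like with the ARCore SDK used above. This is a hedged illustration, not production code: exact APIs vary across ARCore SDK versions, and PlanePlacementSketch is a hypothetical class name.

using GoogleARCore;
using UnityEngine;

// A minimal sketch, assuming the ARCore Unity SDK: place the home prefab on a
// detected plane where the user taps, instead of at a fixed camera offset.
public class PlanePlacementSketch : MonoBehaviour
{
    [SerializeField] private GameObject homePrefab;
    private bool placed = false;

    void Update()
    {
        Touch touch;
        if (placed || Input.touchCount < 1 || (touch = Input.GetTouch(0)).phase != TouchPhase.Began)
            return;

        // Raycast from the touch position against detected plane polygons.
        TrackableHit hit;
        if (Frame.Raycast(touch.position.x, touch.position.y,
                TrackableHitFlags.PlaneWithinPolygon, out hit))
        {
            // Anchor the model to the plane so it stays put as tracking refines.
            var anchor = hit.Trackable.CreateAnchor(hit.Pose);
            var home = Instantiate(homePrefab, hit.Pose.position, hit.Pose.rotation);
            home.transform.parent = anchor.transform;
            placed = true;
        }
    }
}

With something like this in place, the model would sit on real-world ground instead of floating at a fixed offset; the manual controls below remain useful when tracking struggles outdoors.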
The house may appear on the detected plane.\n Since we need to run the app outdoors, where ARCore doesn\u0026rsquo;t work perfectly, we would need more optimization in the code to make it work; for now, I have added manual controls to move the 3D model at our convenience.\nAdd the following code in the input manager (it also needs using UnityEngine.UI; for the Button type):
[SerializeField] private Button leftButton;
[SerializeField] private Button rightButton;
[SerializeField] private Button upButton;
[SerializeField] private Button downButton;
[SerializeField] private Button forwardButton;
[SerializeField] private Button backButton;
[SerializeField] private Button resetButton;
[SerializeField] private Button zoomInButton;
[SerializeField] private Button zoomOutButton;

private float step = 1;
private float zoomStep = 0.1f;
private float minZoom = 0.3f;

private void Start()
{
    // Each directional button nudges the model by one step along an axis.
    leftButton.onClick.AddListener(() => OnClickedButton(new Vector3(-step, 0, 0)));
    rightButton.onClick.AddListener(() => OnClickedButton(new Vector3(step, 0, 0)));
    upButton.onClick.AddListener(() => OnClickedButton(new Vector3(0, step, 0)));
    downButton.onClick.AddListener(() => OnClickedButton(new Vector3(0, -step, 0)));
    forwardButton.onClick.AddListener(() => OnClickedButton(new Vector3(0, 0, step)));
    backButton.onClick.AddListener(() => OnClickedButton(new Vector3(0, 0, -step)));
    resetButton.onClick.AddListener(OnClickedReset);
    zoomInButton.onClick.AddListener(() => OnClickedZoom(zoomStep));
    zoomOutButton.onClick.AddListener(() => OnClickedZoom(-zoomStep));
}

private void OnClickedButton(Vector3 position)
{
    if (initialized)
    {
        home.transform.position = home.transform.position + position;
    }
}

private void OnClickedReset()
{
    if (initialized)
    {
        home.transform.position = offset;
    }
}

private void OnClickedZoom(float scale)
{
    if (initialized)
    {
        // Uniformly scale the model, but never below the minimum zoom.
        Vector3 currentValue = home.transform.localScale;
        if (currentValue.x + scale > minZoom && currentValue.y + scale > minZoom && currentValue.z + scale > minZoom)
        {
            home.transform.localScale = new Vector3(currentValue.x + scale, currentValue.y + scale, currentValue.z + scale);
            //home.transform.position = offset;
        }
    }
}
This completes the code. You may find the source code here\nExperience the To-Be home in XR Run the app on your phone. When trying it outdoors, some of the effects do not work well, so use the given controls to manually move the model left, right, up, down, near, and far. We can use the zoom-in and zoom-out controls to scale it to real size. You can get inside the model either by moving toward it while holding the phone, or by just using the controls to bring the home closer such that we get inside it.\nHere is a video demo.\n I showcased it to my parents, wife, and kids; having experienced the To-Be home, they are now much more interested in “Let’s do it” than in the earlier confusion.\nConclusion I have shared my quick experiment with XR that could save a lot of paper, time and money, and still achieve better agility in designing the home. This experiment is just a concept; it needs a lot of optimization to take it to production quality, and alternative interactions to deal with tech limitations.\nThe following cases could enhance the use case much further:\n Structural change simulation — Design each 3D part with the right weight as a rigid body, and load the BIM (Building Information Model) with gravity. Then try to change the structure and see the impact, like pressure changes on beams and the remaining structure.
This would give a clear assessment of safety after the changes. Show the actual size scale in 3D itself, as this still does not eliminate the need for paper designs. Automatically scan the older structure and create a 3D model that can be changed later. Add tools for style customizations — color, lights, tiles, etc. Build a platform to include any 3D model. More suggestions are welcome.\nThis article is originally published at Medium\n","id":123,"tag":"xr ; thoughtworks ; motivational ; technology ; ar ; vr ; mr ; work from home ; covid19 ; change","title":"XR enabled me to renovate my home!","type":"post","url":"/post/xr-enabled-home-renovation/"},{"content":"Geek Night is a regular event to promote the sharing of technical knowledge and increase collaboration among geeks. It is organized by a passionate group of programmers and sponsored by ThoughtWorks.\nThis September\u0026rsquo;s Geek Night covered Extended Reality - an umbrella term covering various technologies like AR, VR and MR. I will be discussing what XR is, and its usage and significance today, which is stretching beyond gaming.\nEmbracing an Agile approach is essential to any successful XR implementation. We will be learning TDD and CI/CD, an agile way of developing XR applications to meet ever-changing business requirements, code quality and security.\nRegistration Only [Closed] Please register yourself here to receive the webinar details:\nAgenda\n Time | Topic | By\n 4:00PM | Intro to XR - From nicety to necessity | Kuldeep Singh, Hari R\n 4:30PM | Break / Q\u0026amp;A |\n 4:35PM | XR Practices - TDD, CD4XR | Neelarghya Mandal, Akshay Arora\n 5:20PM | Q\u0026amp;A |\n XR Community: https://medium.com/xrpractices\nThe crew Videos Intro to XR - From nicety to necessity XR Practices - TDD, CD4XR ","id":124,"tag":"xr ; ar ; vr ; mr ; webinar ; event ; thoughtworks ; speaker ; geeknight","title":"Geeknight - XR - eXtending the Reality","type":"event","url":"/event/thougthworks_geeknight_sep_2020_xr/"},{"content":"Today, the world is celebrating Teacher’s Day. I wish a Happy Teacher’s Day to all of us! We are all teachers and learners, as every one of us has something to learn from others and something to teach others. I am thinking of an equation of learning that defines my way of learning, and I hope it will be helpful for you too. The learning equation — Learning is proportional to Sharing; Learning ∝ Sharing\n The more we share, the more we learn. The better we are at sharing, the better we become at learning. When we stop sharing, we stop learning too.\n We learn every day, in all situations of life, good or bad. The learning matures only by sharing; you explore more of yourself by giving it away to others. Writing helps me discover what I know, and that I can say confidently. — https://thinkuldeep.com/\n I echo my colleague Jagbir’s view on why he shares his takeaways with the wider audience. He is an influential writer, and he starts with “IT’S NOT YOU, IT’S ME”.\n Why Does he Spam us with Book Reviews!? "A lot of folks read books and quite a few book reviews already exist, then why should I care about your post?". Truth…\n You may find some of my takeaways here.\nYesterday, I was discussing my mentoring experience with the PeriFerry team, and some lifelong lessons I learned there. This mentoring initiative is supported by PeriFerry and ThoughtWorks.\nIt’s been over 2 months since @thoughtworks partnered with @periferrydotcom for launching Prajña. Meet @KuldeepSingh \u0026amp; Julie.
We wish all of PeriFerry\u0026#39;s trainers, teachers, mentors a very Happy Teacher’s Day! 🌱\nRead more - https://t.co/odWuMBQM8z #Teachers #education #TeachersDay\n\u0026mdash; PeriFerry (@periferrydotcom) September 5, 2020 Here’s something from one of our mentors. Kuldeep says, “I had goosebumps on the very first call when Julie said, ‘’I am unique, and I am proud of myself’’. It was just so powerful to hear that. Mentoring Julie has given me a new perspective of life; that each one of us has unique things to be proud of. We are all constantly learning and evolving, and the learning only matures by sharing. I’ve explored more of myself only by giving it away to others; to teach is to reaffirm our own lessons. I am Julie’s mentor, yet I have learnt a lot from her too… Thanks to ThoughtWorks and PeriFerry for providing us this platform!”\n There was a question: when should we mentor others? I relate mentoring with sharing. As per my equation above, we should start sharing when we want to learn. See, learning is a natural process from birth; we keep learning, and so it is with sharing. We never stop learning, and we never stop sharing. Unknowingly we keep sharing, whether we want to share or not. Mentoring is all about sharing what we want to share, and becoming better at it.\n If we share anger with others, we become better at it,\nIf we share jealousy with others, we become better at it,\nIf we share knowledge with others, we become better at it,\nIf we share our values with others, we become better at it,\nIf we share hate with others, we become better at it,\nIf we share love with others, we become better at it…\n There is a lot to learn and a lot to share; even this year 2020 is teaching us.\nHappy Teacher’s Day to all my teachers, for teaching me the values I carry today! Keep sharing!\nThis article is originally published at Medium\n","id":125,"tag":"general ; thoughtworks ; motivational ; takeaways ; mentor ; life ; experience ; learnings","title":"The Learning Equation","type":"post","url":"/post/the-learning-equation/"},{"content":"Techgig, the largest community of developers, also organizes webinars on emerging technologies.\nI spoke at a webinar on Extended Reality (XR) - Moving from Nicety to Necessity on Aug 13, at 3PM IST.\nJoin @thinkuldeep, Head- ARVR Practice, #ThoughtWorksIndia, for a webinar on \u0026quot;why XR is moving from nicety to necessity in the new normal?\u0026quot;\nRegistrations are now open: https://t.co/IHcOpLgLXZ @techgigdotcom #ExtendedReality #ThoughtWorks pic.twitter.com/MCf7EcWSjV\n\u0026mdash; ThoughtWorks India (@thoughtworksIN) August 11, 2020 Highlights COVID-19 has impacted economies, businesses and human behaviour. The sudden change has forced organizations to embrace digital-first at accelerated speed. The ensuing chaos has given emerging tech like XR the opportunity to become a need-to-have over a nice-to-have in industries like health, retail, manufacturing and more. We will look at examples of XR applications across several sectors, and also delve into where and how one can pick up the skill set needed to excel in XR tech.\nKey points of discussion\n Why XR is moving from nicety to necessity in the new normal?
Retail and eCommerce – xCommerce (the future of shopping) Manufacturing – Efficiency and productivity (less man-power) Automotive – Virtual showrooms, product customization, remote maintenance Real estate management - Indoor virtual positioning systems Training and Education – Immersive learning Healthcare - Augmenting telemedicine How to learn and tap into XR technologies Event Summary 900+ enthusiasts signed up, from diverse backgrounds such as developers, BAs, QAs, PMs, and AR/VR technologists. It was a smooth session; thank you to the organizers from TechGig. At the end, there was a good discussion and question-answer session.\nWhy is XR moving from nicety to necessity in the new normal?\nJoin an eminent speaker Kuldeep Singh, Head of XR Practice, ThoughtWorks for a webinar on #ExtendedReality at 3 PM today.\nRegister here https://t.co/mjBrpfooiG #TechGig #Webinar @thinkuldeep @thoughtworks pic.twitter.com/F1Hc1Tri6p\n\u0026mdash; TECHGIG (@techgigdotcom) August 13, 2020 Video Here 2020-08-13 1500 Extended Reality (XR) - Moving from Nicety to Necessity from TechGig on Vimeo.\nSlides Download the slides here: Download PDF.\n ","id":126,"tag":"xr ; ar ; vr ; covid19 ; speaker ; webinar ; event ; techgig","title":"Webinar - XR - Nicety to Necessity","type":"event","url":"/event/techgigs-xr-webinar-1/"},{"content":"It’s now two years since I accepted a challenging change: leaving my previous organization after 13 long years and joining ThoughtWorks. Any change requires sacrifices and dedication before it becomes a positive change, but the journey becomes easy when we believe in the change and accept it. A change introduces us to a new environment and enables new thoughts and new energy. The two-year journey at ThoughtWorks is no different.\nA ThoughtWorks interview conversation leaves a long-lasting impact on people, and I too was influenced by the interview process. By the time I joined ThoughtWorks, I had read and heard multiple things about it: an interesting culture, different ways of working, a people-first organization where connection matters most, and also how hard it would be for a lateral hire to get settled. Turning down other offers wasn\u0026rsquo;t easy, but I was kind of prepared to accept the challenge and joined ThoughtWorks with mixed feelings.\nHere I am writing down the changes that happened around me, how I learned to sail in the sea of thoughts, and why I call these two years thoughtful years.\nInduction in ThoughtWorks In the initial few days, I got the cultural dump: history, vision, who is who, office working structures, why we exist, etc. This is nothing different from other organizations. However, the interesting part started when I was asked to meet a number of people who were a little settled in the organization or had gone through the journey I had just started.\nSee what I got: a lot of inputs from different people, and most were talking about how ThoughtWorks is different, and maybe difficult too. Some suggested doing away with the baggage of titles/experience I was carrying; other suggestions were a little lighter, asking me to just go with the flow and take it easy. As a result, I got more confused.\nA few things I learned initially were hard to absorb, like democratic decision making in delivery, too much transparency, combined ownership of each and everything (we even have two MDs for India), people signing up on their own, and almost no hard structures in the team and organization.
There are just 4 grades. I joined at the highest grade here, but that is nothing to be proud of; no one talks about what grade you are in.\nNow you are in a sea of people: build your network, create your influence, and influence the influencers. Choose your mentors, cultivators and so on. In short, define your own journey!\nWelcome to ThoughtWorks, your journey starts now! If all of the above bothers you, then yes, it\u0026rsquo;s hard, but I followed a golden rule: observe more and react less. I liked the culture in theory but was more curious to know how things were on the ground, and that\u0026rsquo;s where the next section starts.\nCheck the ground to build a strong foundation I knew I had to change and learn new ways of working, but unless I believed in the to-be state, it wouldn\u0026rsquo;t be possible. Enough of theory; I wanted to see things on the ground. What does the word “Agile” mean here? How do the teams collaborate? How strong are the processes? What is so special about TW and TWers?\nI signed up to start my TW journey from the ground and joined a super techie team as a developer, a team working on a commodity trading solution in Blockchain and Microservices. Blockchain was new, but I was well equipped in microservices-based development. I started taking functional stories and writing code, but was it that easy? It was time to test myself, TW and TWers, and to get myself tested by TWers. Here are my observations and learnings from the ground.\n Sign-up — Yes, you already got a hint in the last paragraph: people do sign up here. I learned to move from the paradigm of “task assignment” to “sign-up”. Stand-up — Everyone shares updates on the tasks they have signed up for, and any issues or bottlenecks. The interesting part of stand-up is that it is run by anyone in the team; you don’t need a scrum-master/TL/PM role or special experience to do that. In fact, the TL/PM remains silent here unless there is an update to share. Every other member carries the responsibility of maintaining the discipline of the stand-up; they raise a flag on any discussion going beyond the limit, and then you are supposed to pass on the discussion to a later time. The team signs up for the next tasks and informs the team if there is any call-out. eXtreme Programming (XP) in practice — I saw XP in practice for the first time. To my surprise, I was productive from day 1: no special induction, I started pairing with other developers after a project brief from the Business Analyst. Every developer (irrespective of experience level) is confident talking about unit tests and their value, and knows the art of refactoring. The team is focused on maintaining the test pyramid and reducing the feedback cycle with pair programming, unit tests, DevBox, and automation testing. All these make TWers super agile. Thanks to the Dev BootCamp where I learned these XP concepts; I have also shared my learnings on test-first pair development here. Huddle helps take better decisions — Team members call for a huddle to discuss any tech issue or bottleneck, and take input from the rest of the team before proceeding. It\u0026rsquo;s a collective decision by the team to go for a particular option, and it leads to more confident decisions and less burden on individuals. Even the estimation of stories/tasks is done collectively. Overlapping boundaries across roles — The Business Analyst here is not just gathering requirements but also manages the scope of the project, iteration management, and prioritization with Product Owners.
Project Managers (if any) are more focused on enabling the team and providing it autonomy; no wonder if one sees a PM pairing with people. QAs review the unit tests during DevBox, do active programming for automation test suites, and help the BA in grooming stories with the right acceptance criteria. I learned blockchain technology and tried my hand at streamlining architecture cross-cutting concerns. This became a great platform for me to experience ThoughtWorks practices and build a network inside and outside the team. It has definitely built a strong foundation. I learned that the “Agile” process means “what makes sense” here; the project teams decide how they want to run the project, with no organization-level enforcement.\nFound new love in Reading Books I was not a habitual book reader; this has changed now. ThoughtWorks gives a set of books as part of the joining kit, and there is also a yearly allowance to buy books. I must give credit to the books and their authors who helped me understand the ThoughtWorks analogy better and believe in it. It started with Extreme Programming Explained by Kent Beck. Another book, Agile IT Organization Design — For digital transformation and continuous delivery by Sriram Narayan, explains software development as a design process; this book exactly represents ThoughtWorks as an organization, and I have experienced most of its concepts on the ground.\nAnother book that influenced me a lot is What Got You Here Won't Get You There by Marshall Goldsmith. This book is my change enabler and motivates me to change for good, to change to become better; I literally followed this book to work on some of the improvements I received in feedback from colleagues. Yes, you get feedback upfront no matter what role you are in; ThoughtWorks has a great feedback culture imbibed in its project delivery practices.\nWriting is an addiction Just like reading, writing is also an addiction. We just need a good platform and motivation to do it. Just after joining ThoughtWorks, I realized that people write a lot; there are a number of books written by ThoughtWorkers. It motivated me to set up my personal blog site thinkuldeep.com — the world of learning, sharing, and caring.\n Welcome to Kuldeep's Space\n The world of learning, sharing, and caring\n In the last two years, I have published more than 30 articles on ThoughtWorks Insights, Medium, DZone, and LinkedIn Pulse. I have also collected all my past authored content that was never published; my personal blog now has around 70 articles, 13 events, and more about me and my social profiles, all in one place. It covers a wide range of topics: motivational, takeaways, covid19, technology, blockchain, ARVR, IoT, microservices, enterprise solutions, practices, and more.\nI have been sharing my writing experience with my team, and now the writing is spreading; my teams have written more than 70 articles on the following publications: XR Practices, Android-Core, and ThoughtWorks Insights.\nDoers are welcome By now, you get the sign-up culture here; I signed up for setting up a capability/practice for ARVR. We got some good assignments and have a number of upcoming prospects. We started from scratch, got sign-ups from interested people, organized some in-house training, and did a number of experiments around eXtended Reality. We have shared our learnings on ARVR practices internally and externally in the form of training, webinars, and blogs.
We have found ways of working in the time of Covid19, working from home without any impact on productivity and delivery.\nFear of failure is a major bottleneck for signing up for such an initiative, but at ThoughtWorks we have overcome the fear of failure; we celebrate the failures too. We have events like FailConf at ThoughtWorks.\n My colleagues discussing successful failures We discussed what went well, what we learned from those failures, and how we made them successful failures.\nGrow your social capital I realized the real importance of building social capital after joining ThoughtWorks and tried multiple ways to strengthen connections and communities inside and outside ThoughtWorks. I participated in or organized more than 10 events, covering workshops, webinars, and other formats.\n Mentor and Judge at Hackathons (Smart India Hackathon, Hack In India) AwayDay and XR Days in ThoughtWorks were really helpful for me to get to know different offices and cultures. The Inception Workshop was an eye-opener for me. Spoke for student and developer communities (Google DevFest 2019, Google Developer Student Club) Joined The VRAR Association, and am co-leading the Product Design and Development community. I never bothered about social connections, but they matter; my LinkedIn followers have also grown from 200 to more than 2000 in the last two years.\nPurposeful work I wrote about imbibing purpose and balance in life, inspired by the book Life's Amazing Secrets. ThoughtWorks also believes in balancing 3 pillars: sustainability, excellence, and social justice. Working for social justice helps bring purpose to our work. I was able to revive a few hobbies which had got buried under work before ThoughtWorks, and uncovered new potential. I, along with other ThoughtWorkers, have started mentoring at PeriFerry, and also supported Indeed Foundation. I have also signed up for mentorship at Dreamentor. It gives real pleasure when we guide someone and see them become successful.\nThoughtWorks is a more than 25-year-old organization and has an immense culture built over these years; understanding everything in just 2 years is not enough, and I am still learning. Thank you all for making the journey fruitful.\nStay tuned and keep in touch!\nThis article is originally published at Medium\n","id":127,"tag":"general ; thoughtworks ; nagarro ; life ; change ; reflection ; takeaway ; motivational","title":"Two Thoughtful Years!","type":"post","url":"/post/two-thoughtful-years/"},{"content":"After a wonderful experience of participating as a judge and mentor at Smart India Hackathon 2019, I am delighted to share that I again got a chance to meet and judge India's young winners and innovators at the Grand Finale of Smart India Hackathon 2020, the world's biggest open innovation model, organized by AICTE and MHRD, Govt of India. This time it is no more a hackathon but a hackathon of hackathons — 4 lakh students, 1500 small hackathons, 40 nodal centers, around 2000 mentors and 1000 evaluators, 11000 finalists, and more…\nQuoting the beautiful lines written by one of the SIH hosts:\n बूँद बूँद एकत्रित करता चला, समुंदर बनाता चला । जिसका लक्ष्य जहाँ मिला मंथन वहां कराता चला । (Collecting drop by drop, he went on building an ocean; wherever he found a goal, there he kept the churning going.)\n I attended the grand finale from 1-3 Aug 2020, and I am sharing here my experience and learning. I was assigned to the nodal center Oriental College Of Technology, Bhopal; here is how the rest of it goes.\nVirtual Inauguration Ceremony Day 1 started sharp at 8 AM with an institute-level inauguration ceremony at Oriental College Of Technology, Bhopal.
Prestigious guests from the institute, AICTE, I4C, and CAPT addressed the event online. The chief guest, Shri Pawan Srivastava, IPS, talked about the importance of predictive policing and building a sustainable environment.\n The national-level inauguration event was started at 9 AM by Dr. Abhay Jere, Chief Innovation Officer, MHRD, Government Of India. He explained how the hackathon concept has been democratized by SIH, and how it has now become the largest open innovation model. We came to know that around 1500+ college-level hackathons had happened before this grand finale, and that we would be judging already-proven winners in this event.\nRahul Sharma, President, AWS India, Amazon, explained how the cloud is democratizing innovation at an economical level. Shri VK Kaumudi, IPS, Director General, BPRD, explained the importance of empowering digital forensics and the big challenges being faced, such as the misuse of social media. Anil Sahasrabudhe, Chairman, AICTE, explained that around 72 central and state govt departments, ministries, private corporates, and institutions have contributed; SIH 2020 is a good example of a partnership of government, private players, and people.\n Anand Deshpande, President, Persistent Systems, shared that the world is all about networking: how well we are connected defines us. I second the idea of competing with oneself, shared by the Guest of Honour, Hon'ble Minister of State for HRD, Shri Sanjay Dhotre, and there was a really motivating speech by the Chief Guest, Hon'ble Minister of HRD, Shri Ramesh Pokhriyal ‘Nishank’. See the video above.\nMentoring Sessions Mentoring sessions were planned at 11 AM every day. The problem statements for my teams were in the category of “Security and Surveillance”. Thanks to Dr. Sandeep Garg and his team for seamlessly executing the event for our team. Dr. Amit Kanskar and Mr. Amit Dubey also mentored and judged the teams with me.\nDay 1's primary suggestions were around alignment with the problem statements, product thinking, and analysis. On Day 2, some of the teams showed tremendous progress and revived completely. On Day 3, more of the discussion was about user experience and an end-to-end working solution.\nLeisure Activity Break Every day, 3:30 to 4 PM was a leisure break, with live sessions by students and artists.\nOur #SIHFinalists are not only coders and techno-geeks but are also creative, enthusiastic, bold and know how to have fun! Here are some awesome leisure activities held on Day 1 of #SIH2020.
#SIHGoesVirtual @abhayjere @HRDMinistry @mhrd_innovation @AICTE_INDIA @Persistentsys pic.twitter.com/itYMGD5JIi\n— i4C (@i4CIndia) August 1, 2020 A number of other leisure activities were also held, such as a team pic contest, social channels, leadership talks, and more…\nPM Modi interacts with Students PM Modi addressed and interacted with students on Day 1.\n PM Modi and the students discussed their solutions and technology, covering face detection while wearing a mask, virtual policing, using tech to reduce the gap between people and police, data-based health solutions, prediction, AI, ML, corporate inefficiency detection, and what not!\nPM Modi urged everyone to bring a human touch into technology.\n Interesting Ideas There were 1000s of ideas for the 100s of problem statements given to the students; this is just a glimpse of some.\n Automatic invoicing, settlement, and reconciliation using AI. Virtual police station (AR VR): use of AI, ML, NLP, and chatbots to reduce the gap between police and people. Predictive policing. CCTV-based criminal activity detection using face, expression, and gestures. Remote education solutions for working offline and at low internet speed. Face detection using the part of the face not covered by a mask. Use of AI to detect corporate inefficiencies and overcome them to boost growth. Predictive analytics on network traffic to detect malicious attacks. NLP and sentiment analytics on chatbots/social media to predict violence, hate, and harmful content. AI to detect chatbot vs human conversations. Reusable sanitary products, hygiene services, economical and environmentally-safe products. Predicting violations of and damage to water bodies, rainfall predictions, and preventing loss of life and property. A virtual assistant to build a strong health system. Data-based health services. Analyzing the corporate health of organizations based on actions taken in the past and planned for the future. Evaluation Round Each day ended with online evaluations of the participants on the following parameters:\n User experience Solution appearance Impact Technology Execution This time the SIH platform was fully online and streamlined; evaluation results were also submitted online.\nDay 3 ended with a vote of thanks to participants, evaluators, and organisers. Results would now be analyzed at the institute level and by AICTE, and the participants needed to wait for some more time.\nValedictory Ceremony Day 4 was about validation of and consent on the results. It was time to say goodbye to the participants. Oriental College Of Technology, Bhopal organized the valedictory ceremony. Chief guest Shri Rajendra Kumar (IPS), Special DG (Crime Branch), MP, gave great guidance along with the Guest of Honour, Dr. Sunil Kumar.\n Finally, the winners at the nodal center were announced. Congratulations to all the winners, and all the best to the learners. All you participants are winners! I feel proud that the future is in your safe hands!\nGoodbye, keep in touch, keep rocking, stay safe!\n","id":128,"tag":"sih ; experience ; takeaways ; event ; hackathon ; judge ; mentor","title":"Judge and Mentor - Smart India Hackathon 2020","type":"event","url":"/event/sih-2020/"},{"content":"I got a chance to judge the HackInIndia event, India's national-level hackathon organized by Script Foundation on 2nd July, 2020.\nAround 150 teams signed up across multiple tracks (Healthcare, Open Innovation, ARVR, EdTech, RPA and FinTech), supported by a couple of talented mentors.
Raviraj Subramanian and I got a chance to meet the top 20 teams in the jury round, and I was amazed to see the innovative ideas and demonstrations that the young blood of the nation brought to the table. They were easily explaining some complex concepts of machine learning, computer vision, spatial computing, NLP, and IoT. Not just that, they were well aware of the challenges we as a country are facing in this COVID time in healthcare, education, agriculture, and finance.\nIt was amazing to see these youngsters come up with interesting use cases and demo working concepts. This is not a complete list, just a glimpse.\n Face mask detection and social distance monitoring that alert the authorities on non-compliance. See how useful this could be at a time when India is unlocking and offices are at the stage of re-opening; this type of monitoring is a must for compliance. Virtual try-on, virtual showrooms, 3D body scanning - a complete virtual shopping experience. See my earlier article on the Future of Shopping, where we talked about a few concepts, and the team here showed them. Using technology to report and reduce crime. Accelerating education with easy remote tools, accessible content management, and attentiveness checking and alerting - really important when most education is being planned as online classes. Managing your pocket well through spending analysis on online tools. Patient monitoring and record keeping, and remote consultation - it is the need of the hour to digitally transfer patient records and have remote consultation facilities. Using technology to detect diseases and reduce the chances of misdiagnosis. Economical soil-testing methods to boost production in agriculture and build the economy. Tools for specially-abled people. If I talk about technology, you name one, and it was there: Python, Java, JS, TensorFlow, YOLOv3, OCR, Agora, Angular, React, Unity, Vuforia, ARCore, ARFoundation, ATmega chips, Arduino, KNN, embedded systems, C, C#, Android, iOS, and more.\nThanks to the participants, mentors, organizers and sponsors of the event. Keep it up!\n","id":129,"tag":"xr ; ai ; judge ; hackathon ; event ; covid19 ; ar ; innovation ; ml ; machine learning ; cv","title":"Judge - HackInIndia - Hackathon 2020","type":"event","url":"/event/hackinindia-hackathon-2020/"},{"content":"eXtended Reality (XR), which includes AR and VR technologies, has been silently disrupting the gaming industry for more than a decade. In the recent past, the tech has gained some traction in the education and training space.\nToday, however, COVID-19, with its mandates of social distancing and little to no physical interaction, has accelerated adoption of XR. The tech is virtual, contactless, and delivers outstanding brand experiences - perfect for the new normal that multiple industries like retail will benefit from. To do that, the sector will first have to assess the gaps that the new normal brings, and that XR could service -\nFive ways COVID-19 is changing human behavior Trust and confidence: People are low on trust these days. The fear-mongering mixed amid COVID-19 news has become a daily challenge to deal with. According to Outlook India, a C-Voter nationwide survey indicates that the trust level of Indians in regard to social media is very low.\n Self-policing: Studies have shown that one's behaviour in emergency situations depends on the individual's perceived sense of security.
And during the current pandemic, we are seeing unreasonable or unusual behaviour in response to what was once considered regular conduct.\n Virtual-friendly: 2020 has been the start of everything virtual. People indulge in virtual workout sessions, meetings, conferences, coffee breaks, dating, gaming, ceremonies, and more.\n Companionship: While social distancing might stay, our social nature will require us to be more intentional about seeking and responding to companionship and emotional support.\n Freedom: The lockdown has morphed the meaning of freedom. People have to reimagine what freedom means within the framework of acceptable behaviour during and post the pandemic.\n How does ‘virtual vs. physical’ impact retail consumers and enterprises When talking about consumers, let's use the example of a store visit: XR is changing online shopping and making it an immersive and virtual experience. 360-degree store layouts allow consumers to explore and discover products at leisure, exactly as they would in the real world - if not enjoy a more empathetic and gamified experience.\n Source : https://www.groovejones.com/shopping_aisle_of_the_future_ar_portal/ Imagine a shopping experience where one could tag family and friends and go on a communal and virtual shopping trip. While online shopping allows people to share their wishlists, immersive virtual shopping experiences definitely up the ante.\nAdd to this the option of virtually bumping into social contacts who are shopping at the same place as you. Social alerts on their buys and a native chat application will bring us as close to the real deal as possible.\nCustomers can also choose to interact with store assistants. Such experiences can be designed as invested conversations with the brand. Consumers are also encouraged to make intelligent buying decisions with contextual product information like offers, reviews, sizes, colors, etc. ThoughtWorks' TellMeMore app tries to do just that - provide a better in-store product experience by overlaying information like book reviews, number of pages, and comparative prices beside the product (in this case, a book) view.\nVirtual positioning systems help build indoor positioning applications for large premises like shopping centers, airports, train stations, warehouses, factories, and more. In the case of a retail store, such a system will allow users to mark, locate, and navigate to specific items on virtual shelves.\nVirtual trials could easily become the norm, where customers try products from fashion to make-up to cars and even houses and hotel rooms at ease and in as much detail as they'd like. And when this is a collective experience for customers' friends and family or even their favorite influencers, the resulting emotional connect with the retail brand is long-lasting.\nFinally, adding tech like haptics goes beyond the usual visual and hearing senses and introduces the sense of touch to customers' shopping experiences.\nFor retail enterprises, one of the most obvious advantages of XR is stores being open 24/7, 365 days a year. Enterprises also have the opportunity to personalize their VR store layouts to suit exactly what each of their customers wants to see and experience.\nIn-store customer journeys are difficult to track.
With VR, enterprises will track who the customer is, where the individual spent the most time in the store, buying habits and history, favorite products, the influencers followed, and the usual triggers to complete a purchase.\nThe collected data, thanks to AI tech, can be processed in real time for insights and used to influence the customer while in the store. Additionally, algorithms that determine ‘propensity to buy’ and ‘propensity for churn’ will be leveraged to help clients make quicker buying decisions. Retailers can give their customers proactive and personalized offers or discounts when they are in the VR store. As customers ‘walk’ through the store, such pop-ups will gain attention and push sales.\nIt's common knowledge that customers sometimes defer buying decisions because of financial constraints. While most retail enterprises tie up with financial institutions to alleviate this hesitance, the failure lies in leveraging that partnership at the right moment. With XR, retailers can integrate such financial support into the shopping experience. A customer can simultaneously explore products and available EMI or other financial support options.\nA word of caution: apart from gamers, not all customers might own Head-Mounted Devices (HMDs). This suggests that retail VR experiences should not limit themselves to engaging with only certain devices but should enable immersive shopping experiences through a wide spectrum of mobile devices. This will aid customer adoption further, leading to explosive growth in the XR-powered retail space.\nConclusion COVID-19 is perhaps irrevocably changing human behaviour, and that includes the many retail-led transactions that people have on a regular basis. Physical touch and trials will be replaced with virtual experiences powered by immersive technology. People will make buying decisions based on a virtual, computer-generated replica of physical goods.\nWhat's encouraging is the rate at which XR technology is accelerating. Retailers could embed touch and smell into customers' virtual experiences quicker than earlier expected. We believe the combination of emerging tech like XR and AI is going to provide the world with an array of immersive and perhaps sci-fi-like shopping experiences to explore.\nThis article is originally published at ThoughtWorks Insights and derived from the XR Practices Medium Publication. It is co-authored by Hari R and edited by the Thoughtworks content team.\n","id":130,"tag":"xr ; ar ; vr ; future ; retail ; technology ; change ; thoughtworks ; enterprise","title":"The future of shopping","type":"post","url":"/post/future-of-shoping/"},{"content":"Change is the only constant; the earlier we accept it, the earlier we get ready for the future. 2020 is a year of change: accept it, change our lifestyle, and get ready for the future.\nI have been writing about change in the past, right from accepting the change, to changing for good, to having a purposeful and balanced life, and uncovering human potential in this Covid19 time.\nToday, June 20, is my younger son Lovyansh's 3rd birthday, and the month of June is really a time for kids to rejuvenate and celebrate when they are free from school. In India, this is the time when families plan to go to holiday destinations with the kids, and generally the holiday destination is the maternal or paternal grandparents' place, where kids spend time with their grandparents. Especially in families like mine, where kids only get a chance to live with their grandparents during this time.
My wife Ritu, who has taken a break from work life, started enjoying the month of June along with the kids at her parents' place, and I too had a bachelor's work-life here :) but this Covid19 has upset all of us. Neither could my sisters and their kids come to our place, nor could we go anywhere. Not being able to celebrate this birthday with family was setting us back.\nAnd this morning, I came across a really nice note: What if 2020 isn't canceled?\n View this post on Instagram A post shared by LESLIE DWIGHT (@lesliedwight) on Jun 3, 2020 at 12:38pm PDT\n We all know that it cannot be canceled. We have to accept it. This is definitely the year of change, and it will be remembered for our whole life and beyond, no matter what we do.\nSo fine, then why not celebrate a birthday as usual? We probably need to make some changes as per our comfort and can still go ahead with it. This article is about the changes that helped us enjoy Lovyansh's birthday. So don't be set back; go ahead and find the change that makes you happy, and make this year the year to remember.\nI feel satisfied when I get the following four senses when we plan a get-together. Getting together is really important for mental health and for keeping social balance in life.\nSense of Preparation The hosts generally plan a lot: invitations, return gifts, venue, decorations, and some events during the celebrations to entertain the guests, and then they execute all of these. We try our best as per budget and satisfaction. Now all this may not be there, but there are so many things that we used to ignore, the small moments that we never thought could be enjoyable. Here are a few snapshots.\n Decoration — Become the home decorators, try all the creativity that you want, and involve family members. My son Lakshya decorated the home, and we observed his creativity. We handed over a hut that we had prepared last week to Lovyansh. Cooking — Ritu prepared a cake, delicious pav bhaji, and a few items as per the kids' choices. Getting ready — I got the duty to give the kids a bath, get them and myself ready, and clean the area for the party. I was advised to stay in the party mood at all times and warned to stay away from office-related talk :) Planning — Go with the flow, but still, it is better to plan a few things. We also did some planning for two types of events and some performances on songs. Entertainment — Get ready to be a DJ for playing songs; prepare the laptops, PC, and devices for the video calls. Sense of Celebration Getting a sense of celebration is important; it comes from multiple factors, such as dresses, make-up, decorations, food, and entertainment. When we are at home, all these things look unnecessary. That's where the change comes.\nGet dressed well, be in the party mood. It may be a good idea to ask your guests to be in the party mood and dressed well too.\nWait a moment, did I say guests? Yes. Virtual guests! Invite the guests to join the video call. Be as creative as possible here; all the guests may not be well equipped with video conferencing tools. We were not well prepared here, but the kids took full credit. Plan some virtual games if that entertains the audience. Make sure you include everyone in all the conversations. Do some videography and record every moment. Plan some plays with the kids that you want to remember for years to come.\nAnd of course, the cake cutting, with a full house of family.\nSense of Togetherness When we host a function, we invite guests, and they happily join the function.
It gives us a great sense that we are not alone. We exchange gifts that symbolize a sense of giving and taking. But now, due to COVID, getting the physical presence of our loved ones is hard.\nThe change is to believe in virtual presence: let the loved ones join on a video conference, and we can see their expressions and hear their voices, just like when they come physically. It is definitely not the same as physical presence, but it will definitely boost the sense of togetherness. Try it out.\nYou may exchange gifts in many digital ways, but in my opinion that is also not desirable; exchanging expressions and feelings may be treated as the give and take. Here our kids played it well: they showed handmade cards to each other and then shared them digitally, and they also shared some email/WhatsApp messages.\nIn short, use technology to fulfill the sense of togetherness.\nWe followed the same the next day and wished my maternal grandfather and grandmother on their 59th anniversary, on a video call. For some time, the wishes had been getting limited to WhatsApp text wishes; this time we wanted to break the stereotype and go for a group video call instead of audio or text wishes.\nKeep the video call as the first preference, then audio, and at last use a text message. A text message is an asynchronous mode of communication; it should be used only when we don't expect a quick response and can let the receiver respond as per their availability. However, in the above case, we did not want to keep waiting to get the blessings from our grandparents over a text message, and chose to do a video call and feel blessed.\nSense of Remembrance Another aspect of hosting a get-together for friends and family is getting a sense of remembrance in the future. I am sure many of us have gone through flashbacks many times in this COVID19 time. By now, we might have looked at the videos and pictures of our past events, which we had not touched for years. We need to keep creating that sense of remembrance.\nSo record each and every moment; that's what I am doing now, and even writing this article is an act of remembrance. I have edited family videos for the first time.\nI would like to remember this birthday for years to come; whenever 2020 is remembered, this birthday must also be. Thank you, all my loved ones, for joining this celebration.\nConclusion Don't let Covid19 stop the celebrations in life. Let's accept Covid19 in our life, change our ways, change our beliefs, and live a healthy, happy life.\nThis 2020 is an important year of our life, and we have to remember it for the rest of our life; let's make it the year of change and a year to remember with good memories too.\nI wish you all the best! Stay healthy, stay happy.\nThis article is originally published at Medium\n","id":131,"tag":"general ; selfhelp ; covid19 ; motivational ; life ; change","title":"2020 — The year of Change","type":"post","url":"/post/2020-year-of-change/"},{"content":"Conducted a live session at the Google Developer Students Club of Sri Krishna College of Technology, Coimbatore, TN, India.\nIntro here: LinkedIn\nLive session recording. The talk is about: What is AR VR? Why start with AR VR? How to start with AR VR?
Detailed Slides How to get started with ARVR - DSC 2020, from Kuldeep Singh Feedback This session was really awesome and I have got lot of new knowledge about ar/vr and it's importance in solving real life problems - Kannan Subramanian\n R.Manoj Aiyer It was really amazing session by Kuldeep sir… I had read abt AR VR before but sir had explained so well, that its much easy to recall … I'll surely try to write one blog in this week regarding this… For much better reach of this information…😊✌ - Komal Pal\n The workshop was very useful Thanks R.Manoj Aiyer for organising it - Vishnu Varthan Rao\n Kuldeep Singh Sir, I don't know about AR, VR before, but after attending your session, I came to know new things & opportunity about AR & VR. - Farhath Manaz\n LinkedIn Kuldeep Singh The session was really useful and awesome Sir. Thanks to R.Manoj Aiyer for organising this session. I got lot of insights on AR/VR. I also got to know about the power of XR Technologies. - Midhun Raaj\n Kuldeep Singh Sir The session is really amazing.I had got some knowledge about AR&VR.I Hope that it will be useful for me for my improvements.!. Thanks bro R.Manoj Aiyer for arranged this wonderful session. - Sethu Pathi\n The session is really useful and amazing…I came to know what is AR and VR.. Thanks R.Manoj Aiyer for organizing such a amazing session ☺️ - Swethaa C\n The session was really informative. I learnt a lot of new things. Thank you for organising such kind of session! - Subashree B\n awesome session & it was very useful and informative. I got lot of resource to learn AR VR. thanks to R.Manoj Aiyer and Kuldeep Singh - Ranjithkumar B\n ","id":132,"tag":"xr ; ar ; vr ; mr ; introduction ; fundamentals ; dsc ; google ; general ; talk ; speaker","title":"Speaker - Google DSC - How to get started with AR VR?","type":"event","url":"/event/google_dsk_skct_xr_introduction/"},{"content":"In my earlier article, I explained the fundamentals of XR, the types of XR, and the popular devices of each type. ARVR is not only about Head-Mounted Display (HMD) devices: there is an ever-growing list of Google ARCore-supported smartphones/tablets, and ARKit is supported in recent Apple phones. But still, when it comes to AR and VR devices, the first thing that comes to mind is head-mounted display devices (HMDs); however, newer mobile phones/tablets are now coming with AR/VR support with a minimal set of tools attached.\nSimilarly, when it comes to ARVR headsets, only a few names come to mind, like Google Glass, Microsoft HoloLens, MagicLeap, Oculus, and GearVR, but the list is really big and growing fast. CES 2020 saw many new entrants in the segment.\nIn this article, I am trying to curate a list of XR device manufacturers and the devices they have produced so far or are producing in the near future. I am not categorizing the list into AR, VR, or MR; it is just XR.\nKey highlights There are more than 10 manufacturers trying to make normal-looking XR glasses. There are a number of competitive offerings from manufacturers other than the top known ones (Google, Microsoft). While some big brands are in a catch-up game, a few new entrants are much ahead in the game. Near-human-eye resolution has been achieved, and not by the big players. Near-human-eye field of view has been achieved, and not by the big players. 60% of the XR devices are Android (AOSP) based.
The Detailed List Please find the detailed list of 70+ manufacturers and their 150+ XR devices in my article at the XR Practices Medium Publication.\nConclusion So far, the article mentions 70+ XR device manufacturers and 150+ devices, maybe more. I have not included a list for Mobile VR; you can find many such vendors building VR boxes just by searching on Amazon, Flipkart, or a similar eCommerce website. So the overall list is even bigger and growing day by day.\nThere are also a number of peripherals being manufactured for XR devices, such as haptic gloves, jackets, and hats; we have not taken them into account yet.\nLet's now build more use cases using these devices and solve some real issues being faced by humanity. We are going through the COVID time, and we have seen the usage of XR in health; let's try to solve some more and build more solutions which promote hygiene, touchless interaction, remote working, etc. The more we use these devices, the sooner they will reach our price range.\nThanks for reading. Let me know if I have missed any; I will include them.\n","id":133,"tag":"xr ; ar ; vr ; devices ; analysis ; general","title":"The Growing List of XR Devices","type":"post","url":"/post/the-growing-list-of-xr-devices/"},{"content":"Accessing native XR features from the device using a web page is another form of WebXR.\nIn this article, I want to share an experiment where I tried to control an Android device's native features, such as the volume and the camera torch, from a web page. This use case can be really useful in the XR world, where I can control feature access on the device from cloud services; even the whole look and feel for showing the device controls can be managed from a hosted web page.\nI treat this use case as a new form of WebXR, where I try to use native features of devices from the web. The app opens the web page in an Android WebView on the device, then intercepts commands from the web page and handles them natively. This way, we can also create a 3D app in Unity using a 3D webview and show any 2D webpage in 3D. There could be many use cases of this trial; I will leave it up to you to think more and let me know where you could use it.\nLet me start and show you what I have done.\nThe WebView Activity Let's create an activity which opens a webview; I have implemented it in one of my earlier articles. It intercepts the parameter “command” whenever there is a change in the URL. On successful interception, it loads the URL with a new parameter called “result”.
We will discuss the use of the “result” parameter later.\nprotected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); final String baseURL = getIntent().getStringExtra(INTENT_INPUT_URL); webView = new WebView(this); webView.setWebViewClient(new WebViewClient() { public boolean shouldOverrideUrlLoading(WebView view, String url) { Uri uri = Uri.parse(url); Log.d(TAG, "shouldOverrideUrlLoading " + url); String command = uri.getQueryParameter("command"); if (command != null) { ResultReceiver resultReceiver = getIntent().getParcelableExtra(RESPONSE_CALLBACK); if (resultReceiver != null) { Bundle bundle = new Bundle(); bundle.putString("command", command); resultReceiver.send(RESULT_CODE, bundle); Log.d(TAG, "sent response: -> " + command); } view.loadUrl(baseURL + "?result=" + URLEncoder.encode("Command " + command + " executed")); return true; } view.loadUrl(url); return true; /* returning true means it is not handled by the default action */ } }); WebSettings settings = webView.getSettings(); settings.setJavaScriptEnabled(true); setContentView(webView); /* clear cache */ webView.loadUrl(baseURL); } Pass the command to the originator via callbacks.\npublic static void launchActivity(Activity activity, String url, IResponseCallBack callBack){ Intent intent = new Intent(activity, WebViewActivity.class); intent.putExtra(WebViewActivity.INTENT_INPUT_URL, url); intent.putExtra(WebViewActivity.RESPONSE_CALLBACK, new CustomReceiver(callBack)); activity.startActivity(intent); } private static class CustomReceiver extends ResultReceiver{ private IResponseCallBack callBack; CustomReceiver(IResponseCallBack responseCallBack){ super(null); callBack = responseCallBack; } @Override protected void onReceiveResult(int resultCode, Bundle resultData) { if(WebViewActivity.RESULT_CODE == resultCode){ String command = resultData.getString(WebViewActivity.RESULT_PARAM_NAME); callBack.OnSuccess(command); } else { callBack.OnFailure("Not a valid code"); } } } The originator needs to pass the callback while launching the webview.\npublic interface IResponseCallBack { void OnSuccess(String result); void OnFailure(String result); }
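One detail the snippets above assume but never show is the set of constants used to pass data through the Intent and the ResultReceiver. A minimal sketch of how they could be declared inside WebViewActivity, purely for illustration (the names are inferred from the calls above; the literal values are my assumptions, not taken from the original project):

/* Inside WebViewActivity -- illustrative declarations only */
public static final String INTENT_INPUT_URL = "intent_input_url";   /* URL to load in the WebView */
public static final String RESPONSE_CALLBACK = "response_callback"; /* extra carrying the ResultReceiver */
public static final int RESULT_CODE = 1001;                         /* code checked in onReceiveResult */
public static final String RESULT_PARAM_NAME = "command";           /* key used in the result Bundle */

With declarations like these in place, the launchActivity helper and the CustomReceiver above line up with what onCreate reads from the Intent.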
I have created the above activity in a separate library and defined it in the library's manifest; however, that is not necessary, and you may keep everything in one project as well. I kept it in the library so that, in the future, I may include it in Unity applications.\nThe Main Activity In the main project, let's now create the main activity, which launches the WebView activity with the URL where our web page is hosted for controlling the Android device, along with the Command Handler callbacks.\nThe Command Handler It can handle volume up, volume down, and torch on and off.\nclass CommandHandler implements IResponseCallBack { @Override public void OnSuccess(String command) { Log.d("CommandHandler", command); if(COMMAND_VOLUME_UP.equals(command)){ volumeControl(AudioManager.ADJUST_RAISE); } else if(COMMAND_VOLUME_DOWN.equals(command)){ volumeControl(AudioManager.ADJUST_LOWER); } else if(COMMAND_TORCH_ON.equals(command)){ switchFlashLight(true); } else if(COMMAND_TORCH_OFF.equals(command)){ switchFlashLight(false); } else { Log.w("CommandHandler", "No matching handler found"); } } @Override public void OnFailure(String result) { Log.e("CommandHandler", result); } void volumeControl(int type){ AudioManager audioManager = (AudioManager) getApplicationContext().getSystemService(Context.AUDIO_SERVICE); audioManager.adjustVolume(type, AudioManager.FLAG_PLAY_SOUND); } void switchFlashLight(boolean status) { try { boolean isFlashAvailable = getApplicationContext().getPackageManager() .hasSystemFeature(PackageManager.FEATURE_CAMERA_FLASH); if (!isFlashAvailable) { OnFailure("No flash available in the device"); return; } CameraManager mCameraManager = (CameraManager) getSystemService(Context.CAMERA_SERVICE); String mCameraId = mCameraManager.getCameraIdList()[0]; mCameraManager.setTorchMode(mCameraId, status); } catch (CameraAccessException e) { OnFailure(e.getMessage()); } } } Load Web Page in WebView It will just launch the webview activity with a URL.\nprotected void onCreate(@Nullable Bundle savedInstanceState) { super.onCreate(savedInstanceState); WebViewActivity.launchActivity(this, "https://thinkuldeep.github.io/webxr-native/", new CommandHandler()); } Define the Main Activity in the Android Manifest, and that's it; the Android-side work is done.\nThe Web XR Page Now the main part: how the Web XR page passes the commands and reads the result back from Android.
It is quite simple. Here is the index.html, which has 4 buttons on it to control the Android native functionalities.\n<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>New form of Web XR</title> <link rel="stylesheet" href="style.css"> </head> <body> <div id="myDIV" class="header"> <h2>New form of Web XR</h2> <h6>Control a device from a web page without a native app</h6> <div> <span onclick="javascript: doAction('volume-up')" class="btn">Volume Up</span> <span onclick="javascript: doAction('volume-down')" class="btn">Volume Down</span> </div> <div style="margin: 25px"> <span onclick="javascript: doAction('torch-on')" class="btn">Torch On</span> <span onclick="javascript: doAction('torch-off')" class="btn">Torch Off</span> </div> </div> <div id="result" class="result"> </div> <script> function onLoad(){ const urlParams = new URLSearchParams(window.location.search); const result = urlParams.get('result'); var resultElement = document.getElementById('result'); resultElement.innerHTML = result; } onLoad(); function doAction(command){ var baseURL = window.location.protocol + "//" + window.location.host + window.location.pathname; window.location = baseURL + "?command=" + command; } </script> </body> </html> Look at the onClick action listener: it just appends the command parameter and reloads the page. That's where the webview intercepts the command and passes it to the Command Handler in the Android native layer.\nAlso, see the onLoad method: it looks for the “result” parameter, and if it is found, it shows the result on load. The webview appends a result parameter after intercepting a command.\nThe webpage is hosted in the same source code repo — https://thinkuldeep.github.io/webxr-native/\nTry all the buttons and see that they correctly add the command in the URL.\nIf you don't want to get into publishing the webpage, then you can package the HTML pages in the Android assets as follows:\n Put the index.html and CSS in /src/main/assets and change the URL in the Main Activity. @Override protected void onCreate(@Nullable Bundle savedInstanceState) { super.onCreate(savedInstanceState); WebViewActivity.launchActivity(this, "file:///android_asset/index.html", new CommandHandler()); } That's it.\nDemo So now you understand the whole flow; it's time for a demo, right? Let's go with it.\nTry it on your phone and see it working.\nConclusion In this article, I have tried a few basic commands passed from a web page that Android can then handle natively.
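One caveat worth making concrete before you extend this: as written, the WebView acts on a command query parameter no matter which page set it. A small hardening sketch, assuming the page is only ever served from thinkuldeep.github.io (adapt the host to your own deployment; this helper is my addition, not part of the original code):

// Illustrative origin check for shouldOverrideUrlLoading
private static boolean isTrustedOrigin(Uri uri) {
    // Accept commands only from the page we loaded ourselves
    return "https".equals(uri.getScheme())
            && "thinkuldeep.github.io".equals(uri.getHost());
}

// In shouldOverrideUrlLoading, before dispatching the command:
// if (command != null && !isTrustedOrigin(uri)) { return false; }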
Now you get the idea.\nHere is the source code; I leave it to you to extend it further: https://github.com/thinkuldeep/webxr-native\nTry to address the following things —\n Security implications of this approach (the origin check sketched above is one starting point). How to handle the offline scenario. Reload/sleep/cache scenarios. Build a 3D app on top of the webview activity, and build the app for, say, an Oculus device. It is originally published at XR Practices Medium Publication\n","id":134,"tag":"xr ; android ; webxr ; native app ; tutorial ; technology","title":"A new form of WebXR","type":"post","url":"/post/a-new-form-of-webxr/"},{"content":"Covid19 has challenged the way we work; I penned an earlier article to cover my workday and our shift toward new ways of working. Covid19 has impacted not just the way we work, but also our lives. All day, we keep hearing how it is hitting humanity hard, and it has brought so much negativity around us.\nEnough of negativity; let's uncover human potential to cope with it. In this article, I will cover how the ways of our regular life are shifting, uncovering new human potential to try out new things and perform our chores on our own. This shift is important for us, not just to fulfill the need, but also to keep our minds busy and active, and to stay positive toward life. The kind of human mind-shift I am talking about here will also impact various related businesses, and they will need to think it through again to survive.\nShifting from… Let me start with the before-lockdown situation, when we were meeting and greeting physically. Starting from early Feb this year, we celebrated our 13th marriage anniversary, the 1st anniversary of cousins, and a few more family get-togethers.\nThe party was organized by my cousin Mukesh Verma, who works with Shri Arjun Ram Meghwal, Minister of State, Govt Of India. At that time, the schools were finishing up their annual days, and we joined those proud moments also.\nThen came the alarming situation of Covid19, putting the whole country into lockdown. We are still in lockdown; even though the govt has relaxed the lockdown restrictions, it looks like humanity will be in the lockdown of the Covid19 impact for years to come.\nStories of Uncovering Human Potential I will cover stories of me and my family members who have uncovered unknown potential in themselves or explored it more, and are still emerging.\nStory of Mukesh Let me start with the story of the (then) naughtiest child of the family, my cousin Mukesh Ostwal, who recently got placed at the e-learning giant Byjus. He has some great people-connect skills and will do wonders at the job; nothing to be surprised about there. But he surprised us with his painting skills during the COVID situation: wall painting, stone painting, paper painting, whatever comes, he can now paint it; it looks like his brushes are unstoppable.\nI feel he is an addict of Snapchat, Instagram, and more, and he loves photography. I wish him all the best for the future.\nStory of Mamta The next story is about my sister Mamta, a teacher by profession at Kendriya Vidyalaya. She surprised us with the different dishes she prepared at home, from momos to coconut sweets to panipuri. My brother-in-law Mr. Surender Kumar, who works in DMRC, is more available for tasting and support (due to lockdown) and approved the dishes :). She is very good at painting, and this situation has given her a chance to explore more. Not just that, she has tried her tailoring skills and prepared some beautiful dresses for our angel Liesha (Kuku).\nWell done Mamta, keep it up! Eager to try the dishes.\nStory of Dr.
Saroj Basniwal This story is about my sister Dr. Saroj Basniwal, who is a medical officer on active govt duty at this time, and my brother-in-law Dr. Basant Basniwal, who runs a clinic and is also a great support at home. In addition to duty, they surprised us with all the different dishes and sweets they prepared at home, and with the way the kids celebrated their anniversary, birthday, and other functions in-house.\nIt looks like they will not buy cakes, fries, chips, or sweets (like jalebis, gulab jamun, etc.) from outside in the future.\nSaroj is also a great painter, and the same skills are showing up in the kids. During this time, they became hairstylists for the kids, and also stitched and distributed masks.\nWell done, doctors; keep serving the nation, stay safe.\nStory of Ritu My story would be incomplete without the story of my wife Ritu; she has been a great support to me. She is an engineer by education and is comfortably assisting our kids with their school from home. Lakshya (8 years) needs less of her attention in the video classes from school, but Lovyansh (3 years) has just started; she needs to do the handholding and is a teacher for all the subjects, dance, sports, art, and the overall development of the kids.\nShe prepares delicious and healthy food, but in this COVID time she has uncovered more potential by preparing many dishes and sweets for which we used to go out, such as pizza, pav bhaji, burgers, tikki, dosa, uttapam, jalebi, gulab jamun, and rasgulla.\nI must say, after tasting these homemade sweets, I realized what pure desi ghee sweets are. Lakshya and Lovyansh have enjoyed them a lot.\nShe became a hairstylist for me, did a haircut, and gave me a totally different style; my son Lakshya is happy with my new hairstyle :)\nThanks, Ritu; it's more work for you, but don't worry, I am here to support you.\nStory of Me (Kuldeep) Let me cover my story now; you have already seen my work-from-home story. Now is the time to tell my non-working story. Being part of such a talented family motivates me to re-invent myself and keep trying new things.\nI used to be a painter in my college days and did wall paintings, stage backdrop paintings, floor paintings, and more, but could not keep up the habit since then. Last X-mas, I got a gift from my team: a drawing board, paints, and brushes to revive my habit, and I utilized the gift recently in this Covid19 time.\nWe had been waiting to get the sofa cover stitched for long, and you know what? I stitched the sofa cover; it was not as difficult as I was thinking. I also stitched a few washable masks for the family. I generally carry out most of the repair and service at home on my own, be it servicing a fan, chimney, bicycle, and so on.\nAnd like my sister, I became a hairstylist for my kids, and after getting feedback on the hairstyle, it looks like I will be the permanent hairstylist for the younger one for long.\nIn this COVID time, I became a frequent writer, wrote around 10 articles, and implemented my own blogging site — Kuldeep's Space (The world of learning, sharing and caring)\nStory of Kids How can the kids stay behind? They follow the parents; it looks like we are bringing up a few great artists.\nStory of Family There are other members of the family who have also uncovered and explored their potential in this COVID time.\nMy cousin, Permila Verma, is serving the government as a teacher and is a great artist; her paintings are marvelous. Sunita Verma, my brother's wife, has also joined hands with Permila.
Sonu Ostwal is an engineer by education, a great cook, and an artist, and she prepared a lovely cake for Mother's Day.\nGourav Ostwal, our forensic expert, is supporting duty at the hospital and attending virtual international conferences on forensic science. My brother-in-law, Madan Lal Arya, has shown gratitude towards people and stitched and distributed masks.\nIn the above picture, you also see my maternal grandfather, Shri Bhagirath Verma, stitching masks.\nThis article would be incomplete if I didn't include the story of my motivation: he is a retired Head Master, but still a teacher for us, teaching us lessons of life.\nI try to follow him closely, but there are many things only he can do at this age. Now you know the secret of my motivation level; whatever I do is nothing close to my grandfather.\nI spent my childhood with him and learned that there is always much more potential in us than we think.\n So always trying, always improving.\n With this note, I would like to complete the article and wish you all the best: keep fighting, keep trying, keep improving, you can do much more than what you think…\nIt is originally published at Medium\n","id":135,"tag":"general ; selfhelp ; covid19 ; motivation ; life ; change ; motivational","title":"Uncovering the human potential","type":"post","url":"/post/uncovering-the-human-potential/"},{"content":"There are a number of Enterprise XR use cases where we need to open existing 2D apps on XR devices in a wrapper app, and this article covers one such concept; have a look!\nAfter trying OS-level customization on an XR device, you may like to run your XR apps as system apps. Here is a simple article by Raju K:\n Android System Apps development - Android Emulator Setup and guidelines to develop system applications\n Now, you may also want to open existing applications in an XR container, and that's where the XR container app must be a system app; the above article will help you there. However, this article is about building the XR container itself, to open an existing Android Activity in an XR container and interact with it.\nThis article covers building an XR container app using Unity and ARCore, which renders an activity of another Android app in 3D space.\n3D Container Dev Environment Setup Create a Unity Project If you are new to Unity, then follow this simple article by Neelarghya\n Download and set up an XR project using ARCore — the Google guidelines.\n Import the ARCore SDK package into Unity.\n Remove the main camera and light, and add the prefabs “ARCore Device” and “Environment Light” available in the ARCore SDK.\n Create a Cube and scale it such that it looks like a panel. This is where we will see the Android activity.\n Create a material and add it to the Cube; set the shader to “Unlit/Texture”, and optionally choose some default image. Go to File > Build Settings\n Switch the build target to Android.\n Change the Player Settings > Other Settings — package name, minimum SDK version 21.\n Select ARCore Supported in XR Settings.\n Check export and export the project. It will export the project as an Android project at a path, say EXPORT_PATH.\nCreate an Android Project Open Android Studio and create an empty project.
Create a module in it as a Library. Implement a 2D Wrapper Copy the Unity-Generated Activity Go to Android Studio.\n Copy EXPORT_PATH/src/main/java/com/tk/threed/container/UnityPlayerActivity.java -> AndroidLibrary/main/java/com/tk/activitywrapperlibrary/UnityPlayerActivity.java\n Copy EXPORT_PATH/libs/unity-classes.jar to Library/libs\n File > Sync Project with Gradle Files — this should fix all the errors in UnityPlayerActivity.\n Now let UnityPlayerActivity extend from ActivityGroup instead of Activity. I know it is deprecated, and there are other ways in the latest Android API using fragments, etc., but let's go with it until I try other ways. Don't change anything else in this class.\npublic class UnityPlayerActivity extends ActivityGroup { protected UnityPlayer mUnityPlayer; /* don't change the name of this variable; referenced from native code */ /* Setup activity layout */ @Override protected void onCreate(Bundle savedInstanceState) { requestWindowFeature(Window.FEATURE_NO_TITLE); super.onCreate(savedInstanceState); mUnityPlayer = new UnityPlayer(this); setContentView(mUnityPlayer); mUnityPlayer.requestFocus(); } ... } Implement an App Container The app container loads the given activity as a view in a given ActivityGroup. It gets initialized with an instance of an ActivityGroup.\npublic class AppContainer { private static AppContainer instance; private ActivityGroup context; private AppContainer(ActivityGroup context){ this.context = context; } public static synchronized AppContainer createInstance(ActivityGroup context) { if(instance == null){ instance = new AppContainer(context); } return instance; } public static AppContainer getInstance() { return instance; } ... } Add a method to start a given activity via a LocalActivityManager and add it to the activity group. It runs on the UI thread to avoid certain exceptions (not listing them here).\npublic void addActivityView(final Class activityClass, final int viewportWidth, final int viewportHeight){ context.runOnUiThread(new Runnable() { @Override public void run() { Intent intent = new Intent(context, activityClass); LocalActivityManager mgr = context.getLocalActivityManager(); Window w = mgr.startActivity("MyIntentID", intent); activityView = w != null ? w.getDecorView() : null; if (activityView == null) { return; } activityView.requestFocus(); addActivityToLayout(context, viewportWidth, viewportHeight); } }); } Add the activity to the layout — this adds the activity at a lower layer, so it is not visible in the foreground.\nprivate void addActivityToLayout(Activity context, int viewPortWidth, int viewPortHeight) { Rect rect = new Rect(0, 0, viewPortWidth, viewPortHeight); FrameLayout.LayoutParams layoutParams = new FrameLayout.LayoutParams(rect.width(), rect.height()); layoutParams.setMargins(rect.left, rect.top, 0, 0); activityView.setLayoutParams(layoutParams); activityView.measure(viewPortWidth, viewPortHeight); ViewGroup viewGroup = (ViewGroup) context.getWindow().getDecorView().getRootView(); viewGroup.setFocusable(true); viewGroup.setFocusableInTouchMode(true); initBitmapBuffer(viewPortWidth, viewPortHeight); viewGroup.addView(activityView, 0); /* add at lower layer */ } What? Are you serious? If this is not visible in the foreground, how would you see it? Well, we will render the bitmap of this activity view.
Remember the screen we created in Unity? We will render this bitmap there. Wait and watch!

Let's initialize a bitmap as follows:

private Bitmap bitmap;
private Canvas canvas;
private ByteBuffer buffer;

private void initBitmapBuffer(int viewPortWidth, int viewPortHeight) {
    bitmap = Bitmap.createBitmap(viewPortWidth, viewPortHeight, Bitmap.Config.ARGB_8888);
    canvas = new Canvas(bitmap);
    buffer = ByteBuffer.allocate(bitmap.getByteCount());
}

Now the buffer is ready to be consumed; let's expose a consumption method.

Render the texture — here comes the magic method, which renders against a given native texture pointer using OpenGL (don't ask me for the details; some help came from here). It draws the activity view on a canvas (backed by the bitmap), copies the bitmap's pixels to the buffer, and then writes the raw pixel data against the native texture pointer.

public void renderTexture(final int texturePointer) {
    byte[] imageBuffer = getOutputBuffer();
    if (imageBuffer.length > 1) {
        Log.d("d", "render texture");
        //activityView.requestFocus();
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texturePointer);
        GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0,
                activityView.getWidth(), activityView.getHeight(),
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, ByteBuffer.wrap(imageBuffer));
    }
}

private byte[] getOutputBuffer() {
    if (canvas != null) {
        try {
            canvas.drawColor(0, PorterDuff.Mode.CLEAR);
            activityView.draw(canvas);
            bitmap.copyPixelsToBuffer(buffer);
            buffer.rewind();
            return buffer.array();
        } catch (Exception e) {
            Log.w("AppContainer", "getOutputBuffer", e);
            return new byte[0];
        }
    } else {
        return new byte[0];
    }
}

Using the App Container
Add just one line at the end of UnityPlayerActivity.onCreate to initialize the app container. No more changes here.

@Override
protected void onCreate(Bundle savedInstanceState) {
    ...
    AppContainer.createInstance(this);
}

OK, enough Java; let's get back to Unity. A 3D screen is waiting to consume the texture.

Using the ActivityWrapper Library
Before we go, let's build this library so that it can be included in the Unity project.

Go to Build > Make activitywrapperlibrary
Copy the library (AAR) from "activitywrapperlibrary/build/outputs/aar/activitywrapperlibrary-debug.aar" to "Unity Project/Assets/Plugins/Android"
Add an Android manifest file, copying it from the earlier exported path EXPORT_PATH/src/main, and make the following changes.
Change the name of the main activity to the activity we defined as the wrapper.

...
<application
    tools:replace="android:icon,android:theme,android:allowBackup"
    android:theme="@style/UnityThemeSelector"
    ...
    <activity
        android:label="@string/app_name"
        android:screenOrientation="fullSensor"
        ...
        android:name="com.tk.activitywrapperlibrary.UnityPlayerActivity" >
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
            <category android:name="android.intent.category.LEANBACK_LAUNCHER" />
        </intent-filter>
        <meta-data android:name="unityplayer.UnityActivity" android:value="true" />
    </activity>
    <meta-data android:name="unity.build-id" android:value="d92db379-0eb8-4355-a322-0ba760976bb7" />
    <meta-data android:name="unity.splash-mode" android:value="0" />
    <meta-data android:name="unity.splash-enable" android:value="True" />
    ...
</application>

Now this newly implemented activity acts as the main player activity, and on start of the Unity app it starts the app container.

But when you try to run this, you will see multiple manifest merger conflicts. Use tools:replace at the <application> as well as the <activity> level wherever required. You can always inspect merge conflicts in the Android Manifest viewer of Android Studio.

Other issues will be related to unity-classes.jar, which somehow gets packaged into the generated library as well as supplied by Unity itself; you need only one copy.

Somehow "provided" and "compileOnly" weren't working for me while generating the library, so I did a quick workaround (please find a better solution for it): exclude unity-classes.jar in the main Gradle file of the Unity project. Here is a way to do that:

Go to File > Build Settings > Player Settings > Publishing Settings > check Custom Gradle Template
It should generate a file mainTemplate.gradle; edit the file and add an exclude in the dependencies section as follows:

...
**APPLY_PLUGINS**
dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'], excludes: ['unity-classes.jar'])
    **DEPS**
}
...

This should fix the build issues in Unity. Enough now; we haven't yet built any 3D activity loader, so let's jump there.

Implement an XR Activity Loader
Let's add more components to the scene: a label and a Load button in Unity.

Render the Activity on a 3D Screen
Let's create a script to render the activity on the screen: a script with the name of the activity class, and height and width as inputs.

public class ActivityLoader : MonoBehaviour {
    public string activityClassName;
    public Button load;
    public Text response;
    public int width = 1080;
    public int height = 1920;

    private AndroidJavaObject m_AppContainer;
    private IntPtr m_NativeTexturePointer;
    private Renderer m_Renderer;
    private Texture2D m_ImageTexture2D;

    private void Start() {
        m_Renderer = GetComponent<Renderer>();
        load.onClick.AddListener(OnLoad);
        m_ImageTexture2D = new Texture2D(width, height, TextureFormat.ARGB32, false) { filterMode = FilterMode.Point };
        m_ImageTexture2D.Apply();
    }
    ...
}

On clicking the Load button, it should load the given activity class.
By this time the AppContainer must be initialized, so let's call it to load the activity into a view.

private void OnLoad() {
    if (activityClassName != null) {
        response.text = "Loading " + activityClassName;
        AndroidJavaClass activityClass = new AndroidJavaClass(activityClassName);
        AndroidJavaClass appContainerClass = new AndroidJavaClass("com.tk.activitywrapperlibrary.AppContainer");
        m_AppContainer = appContainerClass.CallStatic<AndroidJavaObject>("getInstance");
        m_AppContainer.Call("addActivityView", activityClass, width, height);
        m_Renderer.material.mainTexture = m_ImageTexture2D;
        Debug.LogWarning("Rendering starts");
        StartCoroutine(RenderTexture(m_ImageTexture2D.GetNativeTexturePtr()));
    } else {
        response.text = "No package found";
    }
}

Load the texture: call the magic method in a coroutine, and keep calling it. I know it's not optimal, but let's first make it work; then we will discuss methods of optimization.

private IEnumerator RenderTexture(IntPtr nativePointer) {
    yield return new WaitForEndOfFrame();
    while (true) {
        m_AppContainer.Call("renderTexture", nativePointer.ToInt32());
        yield return new WaitForEndOfFrame();
        yield return new WaitForEndOfFrame();
        yield return new WaitForEndOfFrame();
    }
}

Add this script to the Screen GameObject, and assign the button and response.

What should we give as the Activity Class Name? Oh! We haven't yet created the app activity that we want to render here. Let's do that first.

The 2D App
Let's build a 2D app that we want to render in the Unity 3D container. Go back to Android Studio and choose Login Activity during project creation. It generates a full login app with input fields and so on.
Try to run this app: just make it, deploy it to a phone or emulator, and see how it runs. Now we can open this activity from another app if we build our container app as a system app (refer to the article above), or we can include this app in Unity as a library.
Let's go with the library approach.

Build it as a Library
To include this app in Unity, we need to build it as a library. Follow these steps:

Open build.gradle and change apply plugin: 'com.android.application' to apply plugin: 'com.android.library'
Remove applicationId "com.tk.loginapp" from defaultConfig
Update the Android manifest as follows — remove the intent filters

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.tk.loginapp">
    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">
        <activity
            android:name=".ui.login.LoginActivity"
            android:label="@string/app_name">
        </activity>
    </application>
</manifest>

Click Sync Now
Go to Build > Make module 'app'
It will generate an AAR at app/build/outputs/aar/app-debug.aar

Using the 2D App Library in Unity

Follow the same steps as earlier: copy the AAR to Unity Project/Assets/Plugins/Android
Open mainTemplate.gradle, copy all the dependency entries from the 2D app's Gradle file, and paste them into the dependencies section as follows:

...
**APPLY_PLUGINS**
dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'], excludes: ['unity-classes.jar'])
    implementation 'androidx.appcompat:appcompat:1.1.0'
    implementation 'com.google.android.material:material:1.1.0'
    implementation 'androidx.annotation:annotation:1.1.0'
    implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
    implementation 'androidx.lifecycle:lifecycle-extensions:2.2.0'
    **DEPS**
}
...

Now the 2D app is fully available in Unity.
Let's define the activity in /Assets/Plugins/Android/AndroidManifest.xml. It is better to redefine the activities we are going to render in 3D with the available themes.

<activity
    tools:replace="android:theme"
    android:name="com.tk.loginapp.ui.login.LoginActivity"
    android:theme="@style/Theme.AppCompat.Light"/>

If you still run into InflateExceptions while running the app, then follow the article below by Raju K:
The InflateException of Death - AppCompat Library woes in Android

Running a 2D Activity in the XR Container
So, the time has come to provide the class name of the activity; here it is "com.tk.loginapp.ui.login.LoginActivity".
Go to Unity and set this name in the script of the Screen GameObject. Connect your Android phone via USB and go to File > Build and Run. Wait for the build to complete, and here you go: the login app's activity is rendered in the 3D environment, but the texture is mirrored and the colors do not come out as they were in 2D.
I can see the cursor blinking in the text box, which means the texture is continuously being rendered. Check logcat; there is a continuous log of rendering.
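Since the render coroutine re-uploads the texture every few frames even when nothing on the screen has changed, this continuous rendering burns CPU and battery. As one possible optimization (a sketch of my own, not part of the original walkthrough), you could throttle the upload to a fixed rate inside ActivityLoader; the 15 fps interval here is an assumption you would tune:

private IEnumerator RenderTextureThrottled(IntPtr nativePointer)
{
    // Cap texture uploads at roughly 15 frames per second instead of every frame.
    var interval = new WaitForSeconds(1f / 15f);
    yield return new WaitForEndOfFrame();
    while (true)
    {
        m_AppContainer.Call("renderTexture", nativePointer.ToInt32());
        yield return interval;
    }
}

A further refinement would be to let the Java side report whether the activity view is dirty and skip the upload entirely when nothing has changed. With rendering confirmed, let's fix the remaining visual issues.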
Great!

Fixing the Mirror Issue
Let's fix the mirroring issue. Luckily, Raju K has done good research here; see the article:
Unity Shader Graph — Part 1 - In one of the projects that I'm working on recently, there was a need to mirror the texture at runtime.

Let's create a simple unlit shader that just does mirroring. Right-click and Create > Shader > Unlit Shader, give it a name, and open it.
The shader will open in the editor. You may directly modify the file (if you don't want to follow the full steps); make the changes shown below.

Shader "Unlit/Mirror" {
    Properties {
        _MainTex ("Texture", 2D) = "white" {}
        [Toggle(MIRROR_X)] _FlipX ("Mirror X", Float) = 0
        [Toggle(MIRROR_Y)] _FlipY ("Mirror Y", Float) = 0
    }
    SubShader {
        ...
        Pass {
            ...
            #pragma multi_compile_fog
            #pragma multi_compile ___ MIRROR_X
            #pragma multi_compile ___ MIRROR_Y
            ...
            v2f vert (appdata v) {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                float2 untransformedUV = v.uv;
                #ifdef MIRROR_X
                untransformedUV.x = 1.0 - untransformedUV.x;
                #endif // MIRROR_X
                #ifdef MIRROR_Y
                untransformedUV.y = 1.0 - untransformedUV.y;
                #endif // MIRROR_Y
                o.uv = TRANSFORM_TEX(untransformedUV, _MainTex);
                UNITY_TRANSFER_FOG(o, o.vertex);
                return o;
            }
            ...
}

Change the shader of the Screen GameObject's material to this newly created shader. Check Mirror Y, and set the rotation to 180.

XR Container in Action
Run the app again, and here you go. Congratulations! You are able to run your 2D app in an XR container, and you can go closer to it.

This is not complete yet: we have only rendered the texture, and the next challenges are implementing all the interactions (gaze, touch, gestures, voice, keyboard input for UI controls) or working with controllers. Interactions are possible, though: we need to dispatch the events to the views that are being rendered into the texture. The x, y coordinate on the texture area can be fetched by adding a collider to the Screen GameObject: take the touch or gesture event, raycast to the object, transform the raycast hit coordinates to x, y coordinates on the screen, and then dispatch a click event at that position in the internal activity view (a sketch follows the conclusion below). In similar ways we can implement drag, key presses, and so on.

Conclusion
Though we are able to run one activity of a 2D app in an XR app container, this approach has many flaws:

ActivityGroup is deprecated and may soon be removed in future Android versions.
Since the other activity runs in the background, there will be side effects on observers inside the activity.
Different apps may need different customizations; the approach does not scale to meet all requirements.
Handling the full activity lifecycle is a challenge.
We control only one activity; if that activity spawns a child activity, there will be issues.
Building an AAR for an existing system app is challenging, and even if we build one, maintenance would be a nightmare.
Invoking an already running activity from another app is not allowed for security reasons; we got around it by building the container as a system app, but even then not all permissions and access to shared storage would be possible.
Multiple Android versions would create conflicts and issues while inflating the layout into the texture.
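Before closing, here is a rough sketch of the interaction hand-off described in the "XR Container in Action" section above. It is illustrative only, not from the original project: it assumes it lives inside ActivityLoader (reusing its width, height, and m_AppContainer fields), that the Screen GameObject has a MeshCollider, and that dispatchTouch is a hypothetical Java method you would still add to AppContainer (for example, one that builds a MotionEvent and calls activityView.dispatchTouchEvent on the UI thread).

private void TryDispatchTouch()
{
    if (Input.touchCount == 0) return;
    Touch touch = Input.GetTouch(0);
    if (touch.phase != TouchPhase.Began) return;

    Ray ray = Camera.main.ScreenPointToRay(touch.position);
    // textureCoord is only populated when the hit collider is a MeshCollider.
    if (Physics.Raycast(ray, out RaycastHit hit) && hit.collider.gameObject == gameObject)
    {
        // Map the UV hit point to pixel coordinates of the rendered activity view.
        float x = hit.textureCoord.x * width;
        float y = (1f - hit.textureCoord.y) * height; // Android's origin is top-left.
        // Hypothetical Java-side method that dispatches a click at this position.
        m_AppContainer.Call("dispatchTouch", (int) x, (int) y);
    }
}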
This approach is just a proof of concept; it may not be a production-ready approach.
There are other approaches you may try out, such as:

Customize the Window Manager of the OS and place the 2D windows at spatial coordinates.
Use VirtualDisplay and Presentation.
Analyse the Oculus TV/VrShell app.

Try these out and share your learning in this field, and do let me know if you have any other findings, or if this can be done in a better way.
This article was originally published on the XR Practices Publication
","id":136,"tag":"xr ; ar ; vr ; mr ; 3d ; enterprise ; android ; system ; customization ; Unity ; ARCore ; technology","title":"Run a 2D App in XR Container","type":"post","url":"/post/run-a-2d-app-in-xr-container/"},{"content":"
The operating system of XR (AR/VR) devices is mostly customized to the hardware of the device. Beyond that, OS-level customizations are needed to meet enterprise needs such as integration with enterprise MDM (Mobile Device Management) solutions (Intune, MobileIron, AirWatch), a 3D lock screen, idle timeout, a custom home screen, custom settings, security, and so on.
Most organizations currently use traditional hardware and software assets such as mobile devices, laptops, and desktops, and the processes and policies for managing these devices are quite stable and straightforward by now. The expectation is to follow the same or similar processes for managing the organization's XR devices. Most enterprise MDMs support the standard OSes (Windows, Mac, Linux, and Android), but mostly these are not well aligned to 3D user experiences.
There are a number of XR devices based on Android, e.g. Oculus, Google Glass, Vuzix, and ARCore/VR-supported smartphones. This article is a first step toward building a custom Android OS with small customizations. We will use the Android Open Source Project (AOSP) to build the custom image for our needs.
Take a deep breath: this will be a lengthy task. You need to wait hours and hours for the steps to finish, so plan your time ahead.
Let's start…

Before you start
Wait, before you start: it needs 230GB+ of free disk space and a decent Linux-based environment. I was able to get a spare MacBook Pro with 230GB of available space, and I thought of setting up the environment following the guide building-android-o-with-a-mac.
But later I found that after the basic build software installations and IDE setup, the available space was less than 220GB. Luckily I was in touch with an Android expert, Ashish Pathak; he suggested not to take the risk and to find another PC.
And here we go: I found a Windows 10 PC with 380GB of free space.
Let's first set up a Linux environment from scratch. Remember, VirtualBox etc. are not an option here; let's not complicate things.
OK, let's start now…

Setup a Linux-based Development Environment
Follow the steps below to set up a Linux environment from scratch on a Windows PC.

Free up disk space — Open Disk Manager, choose to shrink a disk or delete an unused drive, and leave the space unallocated; no need to format it. During installation, the unallocated space will be formatted for the Linux OS.
Download the Ubuntu Desktop OS image — https://releases.ubuntu.com/16.04/
Create a bootable image — Download Rufus from https://rufus.ie/, insert a USB pen drive (I used one of 8GB size), select the downloaded Ubuntu ISO image, and start creating the bootable image.
It will take some minutes; once you see "Ready", eject the USB safely.

Boot from the pen drive — On Windows, choose restart and press F9 while starting (that worked for me; check what your PC supports for showing the boot menu). It will show a boot menu; choose the pen drive option. It will start the Ubuntu installation; go with the default options.
Set up Ubuntu — During installation it will reboot; choose the Ubuntu option from the boot menu, then choose the timezone and language, and set up a user account (please remember the credentials for later login). You will see the Ubuntu home screen and keyboard shortcuts. Remember these.

Congratulations! It's ready now.

Setup the Android Build Environment
Follow the steps below to set up a build environment for AOSP. Detailed steps are here.

Install Git — https://git-scm.com/downloads — Linux version
Install the Repo tool — https://gerrit.googlesource.com/git-repo/+/refs/heads/master/README.md
$ sudo apt-get install repo
Install Java 8 —
$ sudo apt-get install openjdk-8-jdk
Create a working directory:

arvr@arvr-pc:/$ cd ~
arvr@arvr-pc:~$ mkdir aosp
arvr@arvr-pc:~$ cd aosp
arvr@arvr-pc:~/aosp$

Download the AOSP Source Code
Initialize the AOSP repository source code (the default is master). I fetched from the android-8.1.0_r18 branch to build for my old Android device; you may want to go for the latest.

arvr@arvr-pc:~/aosp$ repo init -u https://android.googlesource.com/platform/manifest -b android-8.1.0_r18 --depth=1

You need to set the correct username and email in the git config to make the above succeed.

git config --global user.name "FIRST_NAME LAST_NAME"
git config --global user.email "MY_NAME@example.com"

Sync the repo:

arvr@arvr-pc:~/aosp$ repo sync -cdj16 --no-tags
...
Checking out files: 100% (9633/9633), done.
Checking out files: 100% (777/777), done.
Checking out files: 100% (34/34), done.
Checking out projects: 100% (592/592), done.
repo sync has finished successfully.
arvr@arvr-pc:~/aosp$

It takes a couple of hours to complete; I let it run through the night, with caffeine ON. In parallel, I also started downloading Android Studio, with the SDK, emulators and so on — follow the steps.
Good night!

Build the source code
Good morning! The source code is downloaded, and Android Studio is also ready.
Let's do another big chunk: building the source code.

Set the environment:

arvr@arvr-pc:~/aosp$ source build/envsetup.sh
including device/asus/fugu/vendorsetup.sh
including device/generic/car/vendorsetup.sh
including device/generic/mini-emulator-arm64/vendorsetup.sh
including device/generic/mini-emulator-armv7-a-neon/vendorsetup.sh
including device/generic/mini-emulator-mips64/vendorsetup.sh
including device/generic/mini-emulator-mips/vendorsetup.sh
including device/generic/mini-emulator-x86_64/vendorsetup.sh
including device/generic/mini-emulator-x86/vendorsetup.sh
including device/generic/uml/vendorsetup.sh
including device/google/dragon/vendorsetup.sh
including device/google/marlin/vendorsetup.sh
including device/google/muskie/vendorsetup.sh
including device/google/taimen/vendorsetup.sh
including device/huawei/angler/vendorsetup.sh
including device/lge/bullhead/vendorsetup.sh
including device/linaro/hikey/vendorsetup.sh
including sdk/bash_completion/adb.bash

The lunch command is now available; pass an x86 target to it, as the ARM ones are quite slow.
Running lunch without any parameter lists the options; choose one of them.

arvr@arvr-pc:~/aosp$ lunch
You're building on Linux
Lunch menu... pick a combo:
1. aosp_arm-eng
2. aosp_arm64-eng
3. aosp_mips-eng
4. aosp_mips64-eng
5. aosp_x86-eng
6. aosp_x86_64-eng
7. full_fugu-userdebug
arvr@arvr-pc:~/aosp$ lunch 5
============================================
PLATFORM_VERSION_CODENAME=REL
PLATFORM_VERSION=8.1.0
TARGET_PRODUCT=aosp_x86
TARGET_BUILD_VARIANT=eng
TARGET_BUILD_TYPE=release
TARGET_PLATFORM_VERSION=OPM1
TARGET_BUILD_APPS=
...
OUT_DIR=out
AUX_OS_VARIANT_LIST=
============================================

Run the make command — "make" or "m" — with parallelization of 16:

arvr@arvr-pc:~/aosp$ make -j16
...
REALLY setting name!
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.
#### build completed successfully (01:43:31 (hh:mm:ss)) ####

This command took a good 2 hours, and your emulator is ready now. Good noon!

Run the emulator
Now that the emulator is ready, let's run it.

arvr@arvr-pc:~/aosp$ emulator &
[1] 11121
arvr@arvr-pc:~/aosp$ emulator: WARNING: system partition size adjusted to match image file (2562 MB > 200 MB)
emulator: WARNING: cannot read adb public key file: /home/arvr/.android/adbkey.pub
qemu-system-i386: -device goldfish_pstore,addr=0xff018000,size=0x10000,file=/home/arvr/aosp/out/target/product/generic_x86/data/misc/pstore/pstore.bin: Unable to open /home/arvr/aosp/out/target/product/generic_x86/data/misc/pstore/pstore.bin: No such file or directory
Your emulator is out of date, please update by launching Android Studio:
- Start Android Studio
- Select menu "Tools > Android > SDK Manager"
- Click "SDK Tools" tab
- Check "Android Emulator" checkbox
- Click "OK"

Your fresh Android image is running. Congratulations! You should see it in the list of connected devices in ADB.

arvr@arvr-pc:~/aosp$ adb devices
List of devices attached
emulator-5554 device

You can install applications on this emulator just like on any other.

Customizing the Settings App
Let's assume we want to customize the Settings app for our XR needs. Let's open Settings… Oops! The Settings app is failing to load. I tried to look through the logs using logcat.

arvr@arvr-pc:~/aosp$ adb logcat
05-10 11:21:10.091 1640 1651 I ActivityManager: START u0 {act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10200000 cmp=com.android.settings/.Settings (has extras)} from uid 10015
05-10 11:21:10.092 1604 2569 W audio_hw_generic: Not supplying enough data to HAL, expected position 895044 , only wrote 894994
05-10 11:21:10.126 1383 1383 D gralloc_ranchu: gralloc_alloc: Creating ashmem region of size 1540096
05-10 11:21:10.130 2782 2782 W zygote : Unexpected CPU variant for X86 using defaults: x86
05-10 11:21:10.132 1640 1759 I ActivityManager: Start proc 2782:com.android.settings/1000 for activity com.android.settings/.Settings
...
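The raw logcat stream is noisy. To narrow it down to crash stack traces only, you can use logcat's standard tag filters (generic adb usage, nothing specific to this build):

# Show only error-level AndroidRuntime messages (crash traces), silence everything else
arvr@arvr-pc:~/aosp$ adb logcat AndroidRuntime:E *:S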
This issue may be related to the Android 8.1 branch that I checked out. I tried to rebuild with "m clean" and "m", but no luck.
Good evening!
Luckily I am not the only one facing this; I found this:
Android 8 Settings app crashes on emulator with clean AOSP build stackoverflow.com
I got two suggestions: change WifiDisplaySettings.java in the AOSP packages, or just do a clean build with the userdebug target. I went with the first option.

Open the Settings App in Android Studio
Locate the Settings app at /packages/apps/Settings and open it in Android Studio. Open the class /src/com/android/settings/wfd/WifiDisplaySettings.java and change the code snippet

public static boolean isAvailable(Context context) {
    return context.getSystemService(Context.DISPLAY_SERVICE) != null
            && context.getSystemService(Context.WIFI_P2P_SERVICE) != null;
}

to

public static boolean isAvailable(Context context) {
    return false;
}

Let's build again. Another few hours? No, don't worry: this time it won't take long, as it will build just what you have changed. Run the make command again.

arvr@arvr-pc:~/aosp$ m -j16
The operation has completed successfully.
[1]+ Done emulator (wd: ~/aosp) (wd now: ~/aosp/packages/apps/Settings/src/com/android/settings/wfd)
#### build completed successfully (02:44 (mm:ss)) ####
arvr@arvr-pc:~/aosp$ emulator &

The image is now rebuilt with the changes; run the emulator again. And here you go, the Settings app is working!

Customize the Settings App
Hey! You know what, you have already customized the Android image. Not excited enough? Let's do one more exercise and customize the Settings menu options.
Open /res/values/strings.xml and change some menu titles, e.g. change the "Battery" title to "Kuldeep Power" and "Network & Internet" to "K Network & Internet".

<string name="power_usage_summary_title">Kuldeep Power</string>
<string name="network_dashboard_title">K Network &amp; Internet</string>

Build the image (remember the m command) and run the emulator as described in the last step. Wait a few minutes, and you have your customized Android menu. With this, your custom Settings app is ready and available in the Android image. Now I would like to close the article and wish you a good night.
I will write about flashing this image to a device later; the spare phone for which I am building this image is not working anymore. It looks like my son has experimented with that old phone.
In the future, we will also discuss how to customize native user management in Android-based XR devices, and how to handle device-specific options that are not supported on a regular phone.
Good night, keep learning, keep sharing!
This article was originally published on the XR Practices Publication
","id":137,"tag":"xr ; customization ; enterprise ; android ; aosp ; os ; technology","title":"OS Customization for XR Device","type":"post","url":"/post/os-customization-for-xr-devices/"},{"content":"
Change is the only constant; it requires continuous effort from humankind to understand the change that leads to success and makes us even more successful. In my earlier article, I talked about the success mantra, and also wrote about accepting change when required. Sometimes the way we perceive our success becomes a blocker to becoming more successful.
We accumulate some habits, knowingly or unknowingly, along the path to success, and mostly we never realize that those habits start blocking our way forward.
Today, in the time of the Covid-19 pandemic, even nature wants us to change the way we work, the way we eat, the way we use natural resources; in other words, the way we live. We will get over this pandemic with a lot of learning and change. We need to change for humankind and for ourselves; after all, what got you here won't get you there.
I penned this article for people who are successful and want to become more successful: know the changes that are required to reach there. We might have achieved many goals and crossed milestones, but with all this we may still not reach there, as it requires constant monitoring and change in us.
This article is highly influenced by the book What Got You Here Won't Get You There by Marshall Goldsmith. Marshall is one of the world's preeminent executive coaches, and this book is his experience of coaching successful people to become more successful. I recommend this book to everyone who thinks they are successful and don't need advice on becoming more successful. The book covers 20 habits that hold you back from the top; honestly, you may link many of them with your own habits.
My takeaways from the book are here.

Success Myths
The book starts nicely, directly hitting the critical point in the very first chapters with good examples. Being successful people, we live in a delusion of success:

We believe too much in the skills that made us successful.
This gives us confidence that we can succeed in the same way again.
That adds another myth: that we will succeed in the future.
And why not us? With all this, we are committed to our goals and beliefs; in fact, obsessed with the goal.

Successful people are the ones who want to become more successful, but the change required to become more successful is hard to find, and it is hard for us to believe why we should change.

Accumulated Holdback Habits
Successful people with the strong success myths described in the last section generally accumulate some of the following habits, which hold them here and don't let them get there. The higher we go, the more prone we are to behavioral issues.
Some of these holdback habits are:

Too much "me"
We successful people accumulate habits where we value being "me" too much. With this, winning becomes necessary for us in all situations, even when it is not important.

We argue too much, to the point of putting other people down, or sometimes we withhold important information to stay important and disclose it when it is time for the limelight.
We also try to add unnecessary value to others' ideas, just to remain in the light.
When we listen to someone's idea, even one of great importance, rather than saying "thank you" to the person, we may suggest something of no value. With such a state of mind, it is hard to value others' ideas, and the consequence may be that other people stop giving their opinions, which reinforces this habit.
We gain the habit of expressing our opinion no matter how hurtful or useless it is, just to exercise our right of "too much me".
It blocks our way. We can't be masters of everything, but with the habit of "too much me" we keep telling the world how smart we are, and this can sometimes lead to embarrassment when we argue with people who actually have that expertise; we also tend not to value others' expertise.

Defend and Oppose
With this holdback habit, we tend to defend whatever we say and oppose everything that others say. We tend to use words like "if", "no", "but", and "however" a lot in our conversation.
When someone comes with an idea, our first reaction is "let me tell you why it won't work". No matter what the idea is, our mind first starts finding negatives in it.

Success Bias
We are so biased by our success that we feel we can make fun of anyone, pass judgment, or even make hurtful comments, assuming they will be ignored out of respect for our success. It is not true. The damage done by our words is irreversible, no matter how hard we apologize later.
Some have a bias for certain people, and they play favorites.

Failing to recognize others, or eating up their credit
Some successful people have the habit of not recognizing others well and not encouraging them for their work. Even worse, some leaders eat up the credit of others and use others' ideas as their own.

General interpersonal holdback issues

Always making excuses and blaming others for the issues
Failing to express regret or gratitude when it is needed most
Speaking more and listening less
Talking in anger
Not being ready to listen to bad news
Reacting more and responding less

Know Your Holdback Habits
The only way to know about the habits that hold you back is feedback. ThoughtWorks has a great culture of feedback: everyone gives feedback frequently, and it is advised to get feedback from all the diverse roles and across teams.
The "self" we think we are and the "self" the rest of the world sees in us are mostly different. The only way to fill the gap is by getting the right feedback.
Some organizations also have 360-degree feedback and encourage people to have one-on-one feedback with peers. We should know the art of taking feedback, and share a vision and agenda when asking for feedback. Consume the feedback peacefully. This is just one way of getting feedback; there are other ways to get feedback from others:

Observe others: if we find some problem with them, try to see that problem in ourselves. It's hard, but be honest, and we will get immediate feedback without really talking to anyone.
Observe others and see how they react when we talk to them, or when we talk in a meeting in general. Note down their expressions, physical movements, signs of disengagement, and so on. Note all of these down; you will see some patterns.

The Change Begins
The right time to start the change is now: identify the right thing you want to change and start it now. Make a plan and tell the world that you are improving. Yes, in fact, publicize to the world that you are changing, and ask for a favor. Remember, and convey the message, that you can't change the past, but that from today you are trying to improve. Leave the past behind and move forward.
Generally, change begins with apologizing; the author calls it a magic move. Please mean it when you apologize to someone.
Improve your listening skills. My friend Gary Butters says:

You have two ears and one mouth… use them in that proportion…
and in that order!

If your statement does not add any value, then don't say anything other than "thank you". And that leads to the next key thing to do.
Start thanking people in your life, and have gratitude towards them.
And so on: identify the change you want to make, and be clear about why you want to make it. For example, complete the statement:

If you get rid of this holdback habit, then ……………

Measure the Change
Change becomes easy when we can measure it; if we can measure it, then we can achieve it. And the best technique to measure is to follow up. Once we declare that we are changing, set up follow-ups with peers and the people who signed up to support us on our journey of change. Ask them how much we have changed, what they expect next, and so on.
You are here; define your "there" and let the journey begin.

Conclusion
Let the change continue till the last breath; keep improving every day for humankind and yourself, and be a better you.
It was originally published at Medium
","id":138,"tag":"general ; selfhelp ; success ; covid19 ; book-review ; takeaways ; change ; motivational","title":"Change to become more successful!","type":"post","url":"/post/change_to_become_more_successful_/"},{"content":"
In the previous article, we described the importance of interoperability while building enterprise XR solutions. In this article, we will discuss how to manage user accounts on an XR device and implement single sign-on across XR apps. We will use Android's AccountManager approach to log in to the active directory and integrate it into Unity. Please read my earlier article on setting up the app in the active directory and logging in using an Android WebView, considering that a web-based company login form should appear for login and no custom login form is allowed to capture the credentials.

Implement an Android Library with a custom AccountAuthenticator
We will continue from the last section of the previous article, which has a WebViewActivity that launches a WebView for login.

WebView Login Activity
Let's add a method to launch the WebView activity.

public class WebViewActivity extends AppCompatActivity {
    ...
    private WebView webView;

    public static void launchActivity(Activity activity, String url, String redirectURI, IResponseCallBack callBack) {
        Intent intent = new Intent(activity, WebViewActivity.class);
        intent.putExtra(INTENT_INPUT_INTERCEPT_SCHEME, redirectURI);
        intent.putExtra(INTENT_INPUT_URL, url);
        intent.putExtra(INTENT_RESPONSE_CALLBACK, new CustomReceiver(callBack));
        activity.startActivity(intent);
    }

    private static class CustomReceiver extends ResultReceiver {
        ...
        @Override
        protected void onReceiveResult(int resultCode, Bundle resultData) {
            String code = resultData.getString(INTENT_RESULT_PARAM_NAME);
            callBack.OnSuccess(code);
        }
    }
}

Create a binding service for the custom account type
Create a custom authenticator extending AbstractAccountAuthenticator and implement all the abstract methods as follows.

public class AccountAuthenticator extends AbstractAccountAuthenticator {

    public AccountAuthenticator(Context context) {
        super(context);
    }

    @Override
    public Bundle addAccount(AccountAuthenticatorResponse accountAuthenticatorResponse, String s, String s1, String[] strings, Bundle bundle) throws NetworkErrorException {
        // do all the validations, call AccountAuthenticatorResponse for any failures.
        return bundle;
    }
    ...
    @Override
    public Bundle getAuthToken(AccountAuthenticatorResponse accountAuthenticatorResponse, Account account, String s, Bundle bundle) throws NetworkErrorException {
        return bundle;
    }
    ...
}
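The stubs above simply echo the bundle back. For reference, here is a sketch of what a fuller getAuthToken could look like, returning a token already cached against the account. It assumes you keep a Context field (mContext) in the authenticator, which the version above does not, so treat it as illustrative only:

@Override
public Bundle getAuthToken(AccountAuthenticatorResponse response, Account account,
                           String authTokenType, Bundle options) throws NetworkErrorException {
    // Look up a token previously cached for this account, if any.
    AccountManager accountManager = AccountManager.get(mContext); // mContext: assumed field
    String token = accountManager.peekAuthToken(account, authTokenType);
    Bundle result = new Bundle();
    if (token != null && !token.isEmpty()) {
        result.putString(AccountManager.KEY_ACCOUNT_NAME, account.name);
        result.putString(AccountManager.KEY_ACCOUNT_TYPE, account.type);
        result.putString(AccountManager.KEY_AUTHTOKEN, token);
    }
    // If there is no cached token, a real implementation would return a KEY_INTENT
    // that launches the login UI (our WebViewActivity) to obtain one.
    return result;
}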
Create a service to bind the custom authenticator:

public class MyAuthenticatorService extends Service {
    @Override
    public IBinder onBind(Intent intent) {
        AccountAuthenticator authenticator = new AccountAuthenticator(this);
        return authenticator.getIBinder();
    }
}

Add the service to the Android manifest inside the application tag:

<service android:name="package.MyAuthenticatorService">
    <intent-filter>
        <action android:name="android.accounts.AccountAuthenticator" />
    </intent-filter>
    <meta-data android:name="android.accounts.AccountAuthenticator"
        android:resource="@xml/authenticator" />
</service>

It refers to an XML file that defines the type of account the authenticator service will bind:

<?xml version="1.0" encoding="utf-8"?>
<account-authenticator xmlns:android="http://schemas.android.com/apk/res/android"
    android:accountType="com.tw.user"
    android:label="MyAccount"
    android:icon="@drawable/ic_launcher"
    android:smallIcon="@drawable/ic_launcher" />

This adds support for the account type "com.tw.user". The app that includes this library will intercept requests for this type.

Manage Accounts of the Custom Type
Let's add a class to manage user accounts, with the following methods.

Add User Account — This method adds an account to the Android AccountManager. Since we are adding an account of type "com.tw.user", the binder service will call our AccountAuthenticator.addAccount. We could do data validation there; currently we simply return the same bundle. After this, it calls the AccountManagerCallback that we provide in the addAccount call below.

public static void addAccount(Context context, String bundleData, final IResponseCallBack callback) {
    final AccountManager accountManager = AccountManager.get(context);
    Bundle bundle = new Bundle();
    try {
        JSONObject jsonObject = new JSONObject(bundleData);
        bundle.putString(KEY_ACCOUNT_TYPE, ACCOUNT_TYPE);
        bundle.putString(KEY_ACCOUNT_NAME, "MyAccount");
        ... // prepare bundle
        accountManager.addAccount(ACCOUNT_TYPE, authTokenType, null, bundle, (Activity) context, new AccountManagerCallback<Bundle>() {
            @Override
            public void run(AccountManagerFuture<Bundle> future) {
                Bundle udata = null;
                try {
                    udata = future != null ? future.getResult() : null;
                    if (udata != null) {
                        String accountName = udata.getString(KEY_ACCOUNT_NAME);
                        String accountType = udata.getString(KEY_ACCOUNT_TYPE);
                        Account account = new Account(accountName, accountType);
                        boolean result = accountManager.addAccountExplicitly(account, null, udata);
                        if (result) {
                            callback.OnSuccess("Account Added Successfully");
                        } else {
                            callback.OnFailure("Account can not be added");
                        }
                    } else {
                        callback.OnFailure("Error! No user added");
                    }
                } catch (Exception e) {
                    callback.OnFailure("Error! " + e.getMessage());
                }
            }
        }, new Handler(Looper.getMainLooper()));
    } catch (Exception e) {
        callback.OnFailure("Error! " + e.getMessage());
    }
}

Remove User Account — This method removes an account from the Android AccountManager.
public static void removeAccount(Context context, final IResponseCallBack callback) {
    final AccountManager accountManager = AccountManager.get(context);
    Account[] accounts = accountManager.getAccountsByType(ACCOUNT_TYPE);
    if (accounts.length > 0) {
        Account account = accounts[0];
        try {
            boolean result = accountManager.removeAccountExplicitly(account);
            if (result) {
                callback.OnSuccess("Account removed successfully!");
            } else {
                callback.OnFailure("Error! Could not remove account");
            }
        } catch (Exception e) {
            callback.OnFailure("Error! " + e.getMessage());
        }
    } else {
        callback.OnFailure("Error! No account found");
    }
}

Get Logged-In User Account — This method fetches the logged-in user's data.

public static void getLoggedInUser(Context context, final IResponseCallBack responseCallBack) {
    final AccountManager accountManager = AccountManager.get(context);
    Account[] accounts = accountManager.getAccountsByType(ACCOUNT_TYPE);
    try {
        if (accounts.length > 0) {
            Account account = accounts[0];
            final JSONObject jsonObject = new JSONObject();
            jsonObject.put(KEY_USER_NAME, accountManager.getUserData(account, KEY_USER_NAME));
            ... // prepare the response
            responseCallBack.OnSuccess(jsonObject.toString());
        } else {
            responseCallBack.OnFailure("Error! No logged-in user of the type " + ACCOUNT_TYPE);
        }
    } catch (Exception e) {
        responseCallBack.OnFailure("Error!" + e);
    }
}

The source code for the Android library can be found at https://github.com/thinkuldeep/xr-user-authentication

Unity Authenticator App
Let's customize the Unity app that we created in the last article as follows: build the Android library and include it in the Unity project.
Add the following methods to ButtonController.cs.

Login — Launch the WebView for Login
On clicking the Login button, the following logic opens the WebViewActivity implemented in the Android library and loads the provided URL.

private void OnLoginClicked() {
    AndroidJavaClass WebViewActivity = new AndroidJavaClass("package.WebViewActivity");
    _statusLabel = "";
    AndroidJavaObject context = getUnityContext();
    WebViewActivity.CallStatic("launchActivity", context, authUri, redirectUri, new LoginResponseCallback(this));
}

Make sure of the redirectURI: the WebView will intercept it and return the authorization code, which we then exchange for the access_token.
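For context, authUri here is the identity provider's authorization endpoint. Assuming Azure AD (which the earlier article and the Microsoft Graph calls below suggest), it would look roughly like the following; the tenant, client ID, scope, and redirect URI are placeholders from your own app registration:

https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize
    ?client_id={clientId}
    &response_type=code
    &redirect_uri={redirectUri}
    &response_mode=query
    &scope=openid%20profile%20https%3A%2F%2Fgraph.microsoft.com%2Fuser.read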
The callback below receives the code and kicks off the token exchange.

private class LoginResponseCallback : AndroidJavaProxy {
    private ButtonController _controller;

    public LoginResponseCallback(ButtonController controller) : base("package.IResponseCallBack") {
        _controller = controller;
    }

    public void OnSuccess(string result) {
        _statusLabel = "Code received";
        _statusColor = Color.black;
        _controller.startCoroutineGetToken(result);
    }

    public void OnFailure(string errorMessage) {
        _statusLabel = errorMessage;
        _statusColor = Color.red;
    }
}

Login — Fetch the Access Token
Once we receive the authorization code, we fetch the access_token via an HTTP call to the token URL.

void startCoroutineGetToken(string code) {
    StartCoroutine(getToken(code));
}

IEnumerator getToken(string code) {
    WWWForm form = new WWWForm();
    form.headers["Content-Type"] = "application/x-www-form-urlencoded";
    form.AddField("code", code);
    form.AddField("client_id", clientId);
    form.AddField("grant_type", "authorization_code");
    form.AddField("redirect_uri", redirectUri);
    var tokenRequest = UnityWebRequest.Post(tokenUri, form);
    yield return tokenRequest.SendWebRequest();
    if (tokenRequest.isNetworkError || tokenRequest.isHttpError) {
        _statusLabel = "Login failed - " + tokenRequest.error;
        _statusColor = Color.red;
    } else {
        var tokenResponse = JsonUtility.FromJson<TokenResponse>(tokenRequest.downloadHandler.text);
        _statusLabel = "Login successful! ";
        _statusColor = Color.black;
        TokenUserData userData = decodeToken(tokenResponse.access_token);
        AddAccount(userData, tokenResponse);
    }
}

Login — Add the Account to Android Accounts
Once we have the access_token, we can decode it to get some user details.

private class TokenUserData {
    public string name;
    public string given_name;
    public string family_name;
    public string subscriptionId;
}

private TokenUserData decodeToken(String token) {
    AndroidJavaClass Base64 = new AndroidJavaClass("java.util.Base64");
    AndroidJavaObject decoder = Base64.CallStatic<AndroidJavaObject>("getUrlDecoder");
    var splitString = token.Split('.');
    var base64EncodedBody = splitString[1];
    string body = System.Text.Encoding.UTF8.GetString(decoder.Call<byte[]>("decode", base64EncodedBody));
    return JsonUtility.FromJson<TokenUserData>(body);
}

Once we have all the details for the account, we add the account to Android accounts using the library APIs.

private void AddAccount(TokenUserData userData, TokenResponse tokenResponse) {
    AndroidJavaClass AccountManager = new AndroidJavaClass("package.UserAccountManager");
    UserAccountData accountData = new UserAccountData();
    accountData.userName = userData.given_name + " " + userData.family_name;
    accountData.subscriptionId = userData.subscriptionId;
    accountData.tokenType = tokenResponse.token_type;
    accountData.authToken = tokenResponse.access_token;
    AccountManager.CallStatic("addAccount", getUnityContext(), JsonUtility.ToJson(accountData), new AddAccountResponseCallback(this));
}

Get the Logged-in User
On Get Logged-in User, call the respective method from the Android library.

private void OnGetLoggedUserClicked() {
    AndroidJavaClass AccountManager = new AndroidJavaClass("package.UserAccountManager");
    AccountManager.CallStatic("getLoggedInUser", getUnityContext(), new LoginResponseCallback(this));
}
Logout — Remove the Account from Android Accounts
On logout, call the remove-account method.

private void OnLogoutClicked() {
    AndroidJavaClass AccountManager = new AndroidJavaClass("package.UserAccountManager");
    AccountManager.CallStatic("removeAccount", getUnityContext(), new LogoutResponseCallback());
}

Run and Test
Let's now test all this. Export the Unity project and run it on the device. Try the login, get-logged-in-user, and logout APIs. In the next section, we will access the logged-in user outside of this authenticator app.

Sharing the Account with Other XR Apps
Once you have logged into the XR Authenticator app and added an account to Android accounts, it can be accessed in any other app using the Android APIs. Let's create another Unity project along similar lines: an AccountViewer XR app.
The cube is supposed to rotate only if a user is logged into the Android system, and we can see the logged-in user's details.

Get the User Account from Android Accounts
This does not require any native plugin to be included; you can get the shared account directly via the Android APIs: accountManager.getAccountsByType("com.tw.user")

private UserAccountData getLoggedInAccount() {
    UserAccountData userAccountData = null;
    var AccountManager = new AndroidJavaClass("android.accounts.AccountManager");
    var accountManager = AccountManager.CallStatic<AndroidJavaObject>("get", getUnityContext());
    var accounts = accountManager.Call<AndroidJavaObject>("getAccountsByType", "com.tw.user");
    var accountArray = AndroidJNIHelper.ConvertFromJNIArray<AndroidJavaObject[]>(accounts.GetRawObject());
    if (accountArray.Length > 0) {
        userAccountData = new UserAccountData();
        userAccountData.userName = accountManager.Call<string>("getUserData", accountArray[0], "userName");
        ...
        userAccountData.tokenType = accountManager.Call<string>("getUserData", accountArray[0], "tokenType");
        userAccountData.authToken = accountManager.Call<string>("getUserData", accountArray[0], "authToken");
    }
    return userAccountData;
}
Get the profile using the logged-in user's token
Read the token and get the profile data from the HTTP URL https://graph.microsoft.com/beta/me/profile/names with the JWT token.

IEnumerator getProfile(UserAccountData data) {
    UnityWebRequest request = UnityWebRequest.Get(getProfileUri);
    request.SetRequestHeader("Content-Type", "application/json");
    request.SetRequestHeader("Authorization", data.tokenType + " " + data.authToken);
    yield return request.SendWebRequest();
    if (request.isNetworkError || request.isHttpError) {
        _labels[indexGetProfile] = "Failed to get profile" + request.error;
        _colors[indexGetProfile] = Color.red;
    } else {
        var userProfile = JsonUtility.FromJson<UserProfileData>(request.downloadHandler.text);
        _labels[indexGetProfile] = "DisplayName: " + userProfile.displayName
            + "\nMail: " + userProfile.mail
            + "\nDepartment: " + userProfile.department
            + "\nCreated Date: " + userProfile.createdDateTime;
        _colors[indexGetProfile] = Color.black;
    }
}

Check if a user is logged in
On every application focus, check if a user exists in Android accounts. If the user exists, then load the profile data.

private void OnApplicationFocus(bool hasFocus) {
    if (hasFocus) {
        CheckIfUserIsLoggedIn();
    }
}

private void CheckIfUserIsLoggedIn() {
    UserAccountData data = GetLoggedInAccount();
    if (data == null) {
        _isUserLoggedIn = false;
        _labels[indexWelcome] = "Welcome Guest!";
        _labels[indexGetProfile] = "";
    } else {
        _isUserLoggedIn = true;
        _labels[indexWelcome] = "Welcome " + data.userName + "!";
        StartCoroutine(getProfile(data));
    }
}

In Update, make sure you rotate the cube only if a user is logged in.

private void Update() {
    if (_isUserLoggedIn) {
        cube.transform.Rotate(Time.deltaTime * 2 * _rotateAmount);
    }
    requestLoginButton.gameObject.SetActive(!_isUserLoggedIn);
    welcomeText.text = _labels[indexWelcome];
    getProfileResponse.text = _labels[indexGetProfile];
    getProfileResponse.color = _colors[indexGetProfile];
}

Run and Test
Log in on the Account Authenticator XR app, as in the last step. Open the Account Viewer XR app; you should see the cube rotating and the profile details of the logged-in user.

Request Login from an External XR App
Let's implement the login request from the external Unity app.

Add Request Login
Add a button for requesting login in the Account Viewer app. It will look for a launch intent for the Account Authenticator app, which runs in the package com.tw.threed.authenticator.
If the intent is available, meaning the XR Authenticator app is installed on the device, then it starts the intent with a string parameter external_request.

private void OnRequestLoginClicked() {
    AndroidJavaObject currentActivity = getUnityContext();
    AndroidJavaObject pm = currentActivity.Call<AndroidJavaObject>("getPackageManager");
    AndroidJavaObject intent = pm.Call<AndroidJavaObject>("getLaunchIntentForPackage", "com.tw.threed.authenticator");
    if (intent != null) {
        intent.Call<AndroidJavaObject>("putExtra", "external_request", "true");
        currentActivity.Call("startActivity", intent);
        labels[requestlogin] = "Send Intent";
        colors[requestlogin] = Color.black;
    } else {
        labels[requestlogin] = "No Intent found";
        colors[requestlogin] = Color.red;
    }
}

Handle the external login request in the Account Authenticator App
There are multiple ways to receive an externally initiated intent in Unity, but most of them need a native plugin that extends the UnityPlayerActivity class. Here is another way: handle the focus event of any GameObject, access the current activity, and check whether the intent has the external_request parameter. Call the OnLoginClicked action if the parameter is found.

private bool inProcessExternalLogin = false;

private void OnApplicationFocus(bool hasFocus) {
    if (hasFocus) {
        AndroidJavaObject ca = getUnityContext();
        AndroidJavaObject intent = ca.Call<AndroidJavaObject>("getIntent");
        String text = intent.Call<String>("getStringExtra", "external_request");
        if (text != null && !inProcessExternalLogin) {
            _statusLabel = "Login action received externally";
            _statusColor = Color.black;
            inProcessExternalLogin = true;
            OnLoginClicked();
        }
    }
}

We need to quit the app after successfully adding the account, so it can go into the background.

private class AddAccountResponseCallback : AndroidJavaProxy {
    ...
    void OnSuccess(string result) {
        _statusLabel = result;
        _statusColor = Color.black;
        if (_controller.inProcessExternalLogin) {
            Application.Quit();
        }
    }
    ...
}

Run and Test

Open the Account Viewer app and request login (the button is visible only if no user is logged in). It will open the Account Authenticator XR app and invoke the web login.
Once you log in, it adds a user account to the accounts and closes the Account Authenticator XR app.
On returning to the Account Viewer app, you will see the cube rotating and the user profile details.
Now go back to the Account Authenticator app and log out (i.e. remove the account).
In the Account Viewer app, the cube no longer rotates and no user profile data is shown.

This completes the tests. You can find the complete source code here:

Android Library: https://github.com/thinkuldeep/xr-user-authentication
Account Authenticator App: https://github.com/thinkuldeep/3DUserAuthenticator
Account Viewer App: https://github.com/thinkuldeep/3DAccountViewer

Conclusion
In this article, we learned how to do single sign-on on an Android-based XR device and share the user context with other XR applications.
Single sign-on is one of the key requirements for enterprise-ready XR devices and applications.
It was originally published at the XR Practices Medium Publication
","id":139,"tag":"xr ; ar ; vr ; mr ; enterprise ; android ; unity ; tutorial ; technology","title":"User Account Management In XR","type":"post","url":"/post/user-account-management-in-xr/"},{"content":"
The web has grown from a read-only web (Web 1.0) to the interactive web (Web 2.0), and then moved to the semantic web, which is more connected and more immersive. 3D content on the web has been there for quite some time, and most web browsers support rendering 3D content natively. The rise of XR technologies and advancements in devices and the internet are driving demand for web-based XR. It will become a norm for web-based development.
We are going through the Covid-19 period, and most of us are working from home. These days, one message is quite popular on social media, asking you to keep your kids engaged by letting them bring Google's 3D animals into your bedroom.
However, this feature has been there for some time now. Google search pages can show models in 3D in your space; you need a compatible phone to make it work.
This is WebXR: experiencing XR directly on a website, without the need for special apps or downloads. Generally, XR application development needs a good amount of effort and cost on platforms such as Unity 3D, Vuforia, or Wikitude, and the content is delivered in the form of an app for XR-enabled smartphones or head-mounted devices such as HoloLens, Oculus, NReal, or similar. With WebXR technology, we can build applications directly with our favorite web technologies (HTML, JavaScript) and deliver the content on our website.
A few WebXR libraries are popular these days, such as babylon.js, three.js, and A-Frame. WebVR is quite stable, but WebXR is experimental at this stage in most of the libraries. In addition to Google and Mozilla, organizations like 8th Wall and XR+ are also putting great effort into making WebXR stable. At ThoughtWorks, we have built some cool projects for our clients utilizing the available WebXR libraries.
In this post, we will say hello to WebXR, and also take you through classic XR development in Unity 3D. The first part of the article is for people who are new to Unity 3D, so they can say hello to the Unity 3D platform; then we will go to the WebXR world and understand how the two worlds relate.
Note: we will discuss the WebAR part for Android here; you may relate it to WebVR and other devices, where the concept is more or less the same. Follow the respective library's documentation if you are more interested in the WebVR part or in other devices.

Hello XR World!
If you haven't yet gone through Unity 3D, please read this cool article by my colleague Neelarghya Mondal:

Well let me tell you this won't be the end all be all… This is just the beginning of a series of Journal log styled… Let's get started with Unity…

Then set up the first XR project using ARCore — the google guidelines — and import the ARCore SDK package into Unity.

Remove the main camera and light, and add the prefabs "ARCore Device" and "Environment Light" available in the ARCore SDK.
Create a prefab of a Cube with a rotating script:

public class RotateCube : MonoBehaviour {
    private Vector3 rotation = new Vector3(20, 30, 0);

    void Update() {
        transform.Rotate(rotation * Time.deltaTime);
    }
}

Create an InputManager class that instantiates the cube prefab at the touch position.
You can test it directly on an Android phone (ARCore compatible). Connect the phone via USB and test it with ARCore Instant Preview; it needs some extra services running on the device. Alternatively, you can just build and run the project for Android; it will install the app on the device, and you can play with your XR app. This completes the first part, and now we know how a typical AR app is built and delivered. Find the source code here Hello Web XR! Let’s say hello to WebXR. We will try to implement an example similar to the one we implemented in the last section, but this time without the Unity 3D platform and without building an app for the device. We will implement a simple web page that can do the same, using the WebXR support in the three.js library https://threejs.org\nCreate a web page Create a work folder and open it in an editor — I use IntelliJ, but you may use any text editor. Create an HTML file, index.html, and host it on a web server; IntelliJ has a built-in web server that is just a click away, but you may use other easy options. You may download the three.js dependencies on your own or just install them via a package manager — npm install three. Follow the steps described in the three.js Creating-a-scene guide and update the index.html page. For an XR-enabled scene, keep renderer.xr.enabled = true; Now index.html looks like below, with on-touch logic that adds a cube on touching the screen.
\u0026lt;script type=\u0026#34;module\u0026#34;\u0026gt; import * as THREE from \u0026#39;./node_modules/three/build/three.module.js\u0026#39;; import { ARButton } from \u0026#39;./node_modules/three/examples/jsm/webxr/ARButton.js\u0026#39;; var container; var camera, scene, renderer, lastmesh; var controller; init(); animate(); function init() { container = document.createElement( \u0026#39;div\u0026#39; ); document.body.appendChild( container ); scene = new THREE.Scene(); camera = new THREE.PerspectiveCamera( 70, window.innerWidth / window.innerHeight, 0.01, 20 ); var light = new THREE.HemisphereLight( 0xffffff, 0xbbbbff, 1 ); light.position.set( 0.5, 1, 0.25 ); scene.add( light ); renderer = new THREE.WebGLRenderer( { antialias: true, alpha: true } ); renderer.setPixelRatio( window.devicePixelRatio ); renderer.setSize( window.innerWidth, window.innerHeight ); renderer.xr.enabled = true; container.appendChild( renderer.domElement ); document.body.appendChild( ARButton.createButton( renderer ) ); var geometry = new THREE.BoxGeometry( 0.1, 0.1, 0.1 ); function onSelect() { var material = new THREE.MeshNormalMaterial(); var mesh = new THREE.Mesh( geometry, material ); mesh.position.set( 0, 0, - 0.3 ).applyMatrix4( controller.matrixWorld ); mesh.quaternion.setFromRotationMatrix( controller.matrixWorld ); scene.add(mesh); lastmesh = mesh; } controller = renderer.xr.getController( 0 ); controller.addEventListener( \u0026#39;select\u0026#39;, onSelect ); scene.add( controller ); window.addEventListener( \u0026#39;resize\u0026#39;, onWindowResize, false ); } function onWindowResize() { camera.aspect = window.innerWidth / window.innerHeight; camera.updateProjectionMatrix(); renderer.setSize( window.innerWidth, window.innerHeight ); } function animate() { renderer.setAnimationLoop( render ); } function render() { if(lastmesh){ lastmesh.rotation.x += 0.1; lastmesh.rotation.y += 0.2; } renderer.render(scene, camera ); } \u0026lt;/script\u0026gt; Host and test the web page Publish the page on the web server and test it. The page is now hosted on IntelliJ’s built-in web server; make sure external access is enabled. http://localhost:8001/hello-webxr/index.html. It is still not ready to be used in the browsers; we need to do some customizations in the browsers to make it work.\nOn a PC, we need to install WebXR emulators in the browser to make it work. The VR emulator is great, but the AR emulator still doesn’t work well.\nOn mobile, open the Chrome browser and open the chrome://flags page. Search for XR and enable the experimental WebXR features. Make sure the device has ARCore support. Now we want to test the page, which is running on the PC/laptop, in the Chrome browser on the Android phone. There are multiple ways to do that (port forwarding, hotspot, etc.). The simplest is to join the same Wi-Fi network and access the page via the machine’s IP instead of localhost. Type in http://192.168.0.6:8001/hello-webxr/index.html\nOnce the web page is loaded, it will ask to start the XR experience, and also ask for the user’s consent to access the camera.\nOnce approved, it scans the area and you can start placing the cubes.\nSource code is available here\nTry this link on your Android phone right now, and experience WebXR. https://thinkuldeep.github.io/hello-webxr/\nI tried a similar implementation with Babylon JS, but the “immersive-ar” mode is not stable there when we host the webpage; the “immersive-vr” mode, however, works great.
So these web pages can be opened in any supported browser on XR devices; you get all the advantages of the Web and serve the XR content simply from your website (over HTTPS for anything beyond local testing, since browsers expose WebXR only in secure contexts). This completes our introduction to WebXR.\nConclusion We have got an introduction to XR as well as WebXR. There are some features of XR which may not be achievable with WebXR yet, but it is still a great option to take the technology to a wider audience. There are already great examples at the respective sites of these libraries, and the community is also submitting some good sample projects and games. Try those, and stay tuned to WebXR; it is coming.\nIt was originally published at XR Practices Medium Publication\n","id":140,"tag":"xr ; ar ; vr ; unity ; webxr ; immersive-web ; spatial web ; general ; tutorial ; technology","title":"WebXR - the new Web","type":"post","url":"/post/webxr-the-new-web/"},{"content":"In Part I of the article, we summarized our observations of eXtended reality’s foundational blocks - AR, VR and MR. We also discussed the popular and most relevant tools and implementations of the new technology. In Part II of the article, we will be looking at a few accessible and novel implementations of the tech.\nAt ThoughtWorks, we have delivered both XR solutions and components of XR SDK (Software Development Kit) on the back of which XR solutions are built. We have explored Unity 3D technology beyond the gaming bucket. In fact, ThoughtWorks’ Technology Radar states,\n “\u0026hellip;Unity has become the platform of choice for VR and AR application development because it provides the abstractions and tooling of a mature platform\u0026hellip;Yet, we feel that many teams, especially those without deep experience in building games, will benefit from using an abstraction such as Unity\u0026hellip; it offers a solution for the huge number of devices.”\n We have also built an XR incubation center that lets us run XR experiments while collaborating on agile development practices like TDD and CI/CD for XR.\nAll this work with XR tech has helped us understand the space that much better. And, I have taken this opportunity to round up some really interesting use cases for the tech that will be ‘good-to-know’ for business leaders who want to explore the tech’s potential.\nUse case 1: training and maintenance XR tech has a direct impact on the improved efficiency and effectiveness of training, and maintenance and repair use cases.\nVR tech can generate and simulate environments that are either hard to reproduce or involve huge costs and/or risk. Examples of such scenarios are factory training sessions or those on fire safety or ‘piloting’ an aircraft.\nAR, on the other hand, is a great way to provide technicians with contextual support in a work environment. For instance, a technician wearing a pair of AR-enabled smart glasses can view digital content that’s superimposed on their work environment. This could be in the form of step-by-step instructions for a task, or to sift through relevant manuals and videos while they are on the assembly line or on the field performing a task, or it could be used to issue a voice command. The AR device can also take a snapshot of the work environment for compliance checks.
Additional features include the technician being able to stream the experience to a remote expert, who can annotate the live environment and review the work.\nHere’s an interesting case study from ThoughtWorks of how XR tech is transforming airline operations.\nUse case 2: locate and map Autonomous vehicles leverage techniques like Simultaneous Localization and Mapping (SLAM) to build virtual maps and locate objects in an environment. Similar technology empowers XR-enabled devices to detect unique environments and track device positions within them.\nThis Virtual Positioning System can help build indoor positioning applications for large premises like airports, train stations, warehouses, factories and more. Such a system also allows users to mark, locate and navigate to specific target points.\nAt ThoughtWorks, we recently implemented a concept for indoor positioning at our offices. It’s a mobile app called Office Explorer, and when used with an AR-enabled smartphone or tablet, it configures the entire office on a 2D map that visitors can use to explore the space.\nThis use case was also quoted at Google DevFest\nWe have built office-explorer #DevFest2019 pic.twitter.com/0yRz7Euo8J\n\u0026mdash; Kuldeep Singh (@thinkuldeep) September 10, 2019 Visitors or new ThoughtWorkers can follow the augmented path on the app to reach meeting rooms and work stations. This app was built using Unity 3D, ARCore and ARKit. The backend services and web portal were built on React, Kotlin, MongoDB and Micronaut.\nUse case 3: product customization and demonstration Virtual, CAD models or paper-drawn prototypes usually precede final physical products. What’s more, the former also allow for affordable customizations in an XR environment. Consumers have an opportunity to try 3D models of products before they buy them.\nBelow is a quick concept that was created at ThoughtWorks, where one could customize a particular virtual model of a vehicle with respect to colour, interiors and fabrics. They could also step inside the virtual car and experience the personalizations, evaluating how their car looks and fits in their garage or in front of the house.\nThoughtWorkers experimenting with AR-enabled customization of a vehicle. Our app used Unity 3D and ARCore to detect the plane surface and place the product’s 3D model before commencing customizations. The same technology can be used to build an entire driving experience - a great sales tool! An additional benefit of this approach is one of the most popular features when demonstrating a product: the complete 360° visualization of it.\nHere is how Thoughtworks helped Daimler build innovative tools to support the latter’s futuristic sales experience.\nUse case 4: contextual experiences Gartner states that 46% of global retailers planned to deploy either AR or VR solutions to meet customer service experience requirements. The global research and advisory firm also foretells that no less than 100 million consumers will use AR to shop both online and in-store in the future.\nBusinesses are being disrupted by both the sheer amount of data they have at their disposal and the way they engage with that data. This data is in the form of buying history, product reviews, product health, recommendations, usage statistics, feature-level analysis, comparative studies, social sentiment analysis for the product and more.\nA lot of customers look to such data as they make decisions they can trust and be confident in.
This gives businesses the opportunity to present (hopefully unbiased) data within the right context, empowering consumers to make their decisions quickly.\nThoughtWorks’ TellMeMore app displays relevant data for a book An implementation called TellMeMore is an app that we designed at ThoughtWorks. It aims at providing a better in-store product experience by overlaying information like book reviews, number of pages and comparative prices on/beside the product. This app was created using Vuforia and Unity3D.\nThis is illustrative of opportunities to enhance customer experiences to the extent that, without opening a product, customers have access to product insights and can even experience detailed 3D models of the same. Add to this the option of amplifying seemingly boring static objects such as event posters or product advertisements when viewed through an AR companion app.\nThoughtWorks XR day’s paper poster turned into an AR experience: Use case 5: customer engagement 2016’s fad, Pokémon GO, is perhaps AR’s most famous champion. Historically, gamification and VR have proven to better engage users. Today, shopping malls and business conferences leverage VR experience studios to engage kids and customers alike, and we have kids riding a roller coaster without even sitting on the actual structure.\nThoughtWorkers have implemented a smartphone-based AR concept called LookARound that can involve visitors at expos and events. The flow of activity is this -\n Users scan a visual cue (like a QR code) within the event area Users have to find a 3D goodie that’s hidden close to the visual cue. It will be superimposed (Pokémon GO style) on the 3D environment around the user. The app apprises users of the distance between them and the 3D goodie Users can collect the goodie and compete with other players. LookARound was a huge hit at a two-day internal office-wide event, ‘India Away Day 2019.’ 1700 ThoughtWorkers explored different points of interest at the venue, such as break-out sessions, tech stalls and event areas while engaging with AR.\nSnapshot of the LookARound App at ThoughtWorks India Away Day 2019 At the Away Day, 500 ThoughtWorkers competed with each other at a time. They were encouraged by 3D leaderboards showcased on Microsoft Hololens and Oculus Quest.\nConclusion XR tech is evolving at such a rate that the untapped potential boggles the mind. And, we believe the software’s evolution should be matched by the hardware. The latter needs to be user-friendly in terms of field of view, and lightweight with immense processing and battery power.\nThe software, on the other hand, needs to take a more informed advantage of AI and ML to better estimate diverse environments, making it accessible to any and every possible scenario. Businesses also need to proactively build as many use cases as possible, fix shortcomings and contribute to XR development practices. VR is a reality and has us primed for the pronounced impact of AR and MR tech in the future.\nThis article was originally published on ThoughtWorks Insights and XR Practices Publication\n","id":141,"tag":"xr ; ar ; vr ; mr ; usecases ; thoughtworks ; technology","title":"eXtending reality with AR and VR - Part 2","type":"post","url":"/post/extending_reality_with_ar_and_vr-2/"},{"content":"The way we interact with technology today is not too far from what was predicted by futuristic, sci-fi pop culture.
We are living in a world where sophisticated touch and gesture-based interactions are collaborating with state-of-the-art wearables, sensors, Augmented Reality (AR), Virtual Reality (VR) and Mixed Reality (MR). And this is eXtending user experiences to be extremely immersive.\nAdd to this Gartner’s expectation that 5G and AR/VR implementation will transform not only customer engagement but brands’ entire product management cycles as well. In fact, according to IDC, spending on AR and VR is expected to reach US$18.8 billion this year, apart from enjoying more than a 75% compound annual growth rate through to 2023.\nEven world-famous tech conferences like CES 2020 are not immune to the allure of AR/VR products. In fact, CES recognized Human Capable’s AR Norm Glasses as the best of innovation at the event. We are also seeing a rise in several new players and devices like NReal, RealMax and North.\nThe Seismic Shifts For over two decades, ThoughtWorks has been steering organizations through the dynamic world of tech innovation, right from AI to ML to the above-mentioned AR and VR. The Seismic Shifts report is the company’s broad round-up of upcoming tech trends and a ‘how-to’ playbook.\nFor the purpose of this article, I’d like to draw your attention to one particular shift, Evolving Interactions. ThoughtWorks’ report on the Seismic Shifts states,\n “We must evolve our thinking—and our capabilities—beyond the keyboard and the screen. Mixed reality (MR) will become the norm.”\n If nothing else, this is indicative of the radical evolution of people’s interaction with technology.\nTypes of Reality To get a better handle on what ‘beyond the keyboard and the screen’ means, let’s look at the different terms used when discussing reality tech -\nAugmented reality AR overlays digital content onto a real environment. Here is a list of the multiple tools that enhance AR implementation in the market today -\nSmart glasses project digital content right in front of the user’s eyes. The last 5 years have seen smart glasses progress from bulky gadgets to lightweight glasses packed with more processing power, better battery life, a wider field of view and consistent connectivity. Google Glass, Vuzix, Epson, NReal are some of the popular devices in this segment.\nAR-capable smartphones and tablets that run AR applications are the most affordable link to augmented reality use cases. The eco-system that supports AR application development for Android and iOS devices is rich with several tools and techniques like ARCore, ARKit, Unity AR Foundation. Smartphone manufacturers are also powering their more recent phones with AR-compatible hardware. For example, here is the ever-growing list of ARCore supported devices from Google.\nProjection-based display devices allow digital content to be projected onto a glass surface. For example, Mercedes’ Head-Up Display, HUD, casts content onto the vehicle’s windshield glass. Drivers can view augmented information like maps, acceptable speed limits and more.\nVirtual Reality VR is an immersive experience that shuts out the physical world and takes one into a virtual or computer-generated world.\nHead Mounted Display (HMD) devices on the market include Oculus Quest, Oculus Go, HTC Vive, Sony Playstation, Samsung GearVR etc.
Our bi-annual report, also called the Technology Radar, which provides nuanced advice around technologies, tools, techniques, platforms and languages, puts a spotlight on Oculus Quest, stating,\n “\u0026hellip;changes the game, becoming one of the first consumer mass-market standalone VR headsets that requires no tethering or support outside a smartphone”. Furthermore, Oculus has recently come up with a hand-tracking solution on Oculus Quest, which is a great market differentiator for the device.\n Smartphone-based VR is an economical choice to experience VR tech. Google Cardboard and similar variants are quite popular in this segment. All one needs is a smartphone to fit into a virtual box. Media players and media content providers like YouTube have adapted and introduced the capability of consuming content in a 360-degree VR environment.\nMixed Reality MR is a combination of both AR and VR, where the physical world and digital content interact. Users can interact with the content in 3D. Besides, MR devices recognize and remember environments for later, while also keeping track of the device’s specific location within an environment.\neXtended Reality, XR The familiarity with the foundational blocks allows us to move on to the bigger trend that businesses have an opportunity to explore - eXtended Reality. It’s an umbrella term for scenarios where AR, VR, and MR devices interact with peripheral devices such as ambient sensors, actuators, 3D printers and internet-enabled smart devices such as home appliances, industrial connected machines, connected vehicles and more.\nThe term XR is fast becoming common parlance thanks to the synergy of AR, VR and MR tech to create 3D content, and cutting-edge development tools and practices. The industry is examining investment opportunities in XR platforms that allow shared services across devices, development platforms and enterprise systems like ERP, CRM, IAM etc.\nFor instance, Lenovo ThinkReality Platform and HoloOne Sphere are providing device-independent services, an SDK (Software Development Kit) and a cloud-based central portal, which further integrates with enterprise systems. WebVR, a web browser-based XR experience, has also been mentioned in the ThoughtWorks Technology Radar,\n “WebVR is an experimental JavaScript API that enables you to access VR devices through your browser. If you are looking to build VR experiences in your browser, then this is a great place to start.”\n Additionally, Immersive Web from Google and WebXR from Mozilla are other prominent examples.\nDevelopment platforms like Unity 3D have gone ahead and crafted cross-device features. This is evident in how Unity AR Foundation supports both platforms - Android devices that use Google ARCore and iOS devices that use ARKit - to build AR experiences.\nConclusion An overview of the foundational blocks gives us a small insight into the myriad possibilities of how we might interact with the world around us. In Part II of this series on eXtended reality, we will be deep diving into interesting use cases of the tech and exploring realities that were once only possible in the movies.\nThis article was originally published on ThoughtWorks Insights and XR Practices Publication\n","id":142,"tag":"xr ; ar ; vr ; mr ; fundamentals ; thoughtworks ; technology","title":"eXtending reality with AR and VR - Part 1","type":"post","url":"/post/extending_reality_with_ar_and_vr-1/"},{"content":"Covid19 has challenged the way we eat, the way we commute, the way we socialize, and the way we work.
In short, it has impacted the way we live. More than a million people have been infected and the number is growing exponentially; it has impacted the whole world. We need to find new ways of life and change our focus and priorities to save nature and humanity. Technology can play a great role in coping with it. eLearning, eCommerce, robots in healthcare and contactless technologies are already helping.\nRemote working is the new normal; it is getting accepted by many industries and institutions beyond IT and consulting firms. Even traditional educational institutions have started live classes. My 8-year-old son Lakshya’s new academic session started yesterday; my wife and I joined the orientation session, he met all his classmates over video conferencing, and from today onward he will attend his school from home.\nI am part of an XR development team at ThoughtWorks; we develop software for a couple of XR devices. The government of India had to lock down the whole country to strictly implement social distancing, and we are all working from home now. In this article, I am sharing how 100% remote working has impacted me and my team, and how it has impacted business continuity.\nDo read the tips on working from home by ThoughtWorkers\n https://medium.com/@leovikrant78/the-positive-side-how-work-from-home-can-improve-quality-of-life-b08ab1f981df https://www.thoughtworks.com/remote-work-playbook https://martinfowler.com/articles/effective-video-calls.html My Day This section describes what my day looks like, how much of it is different, and how much is the same in comparison to earlier.\nMorning Energiser Workout The day now does not start with a morning walk but with an indoor workout session with colleagues. ThoughtWorks Gurgaon office colleagues have started a morning energizer workout session, with a basic workout of stretching and yoga. Earlier, at this time, we used to start getting ourselves prepared for office and the kids for school, and some of us might be in traffic during this period or in the next two hours. Earlier, all the effort went into dropping the kids at the school bus stop and reaching the office in time.\nSupport my kid to attend school from home He got a schedule from school to join sessions starting at 9 am.\nHe did a rehearsal with me on the video conferencing tool Zoom and is now equipped to do Zoom on his own. He is super happy to meet his teachers and classmates today. He has not seen his friends since the last school session, which was completed in the first week of March. He got promoted to the 4th standard. Earlier, I could not support my wife in all such activities after reaching the office. But now, I have some more flexibility and more time to plan things. While she prepares a delicious breakfast, I get my son ready for school from home.\nTeam Standup and Tech Huddles My team is working from home from different parts of India, distributed across 3 offices in India (Gurgaon, Pune, Coimbatore). There are multiple projects running in this team, and I am shared across a few projects, so I attend multiple standups and tech huddles.\nXR Development Standup, with a virtual background :) The day generally starts with standup, where we discuss what was done yesterday, what the plan for today is, and whether there are any impediments. Then the team signs up for the tasks they want to do today, and calls out things which they think the team needs to know.
Since everyone is working from home, we generally have a full house; no ice-cream meters these days for joining late, or stuck-in-traffic excuses.\nToday many of the team members called out that they are canceling their planned leaves due to the Covid19 situation, and also looking at the criticality of the work we have signed up for. Today we signed up for an interesting piece of work with the client. The team wants to deliver the signed-up work at all costs. What a committed team!\nRemote Pair Development Pair development is very core to ThoughtWorkers; we do things in pairs, get first-hand feedback immediately, and fix things then and there. It helps us produce better-quality products and reduces the burden on a single human mind.\nXR development has device dependency, and remote pair development is another challenge, but we have been doing remote pair development for a couple of months between teams located at the Gurgaon and Coimbatore offices, and now we are getting better at it. Some of my time goes into active development pairing; here is how we solved some of the challenges\n A limited number of devices — Developers use emulators to test the functionality and have written automated functional tests that run on CI environments to further confirm that a development build has the right set of functionality. We make sure our QA team is well equipped with real devices to test it further. We follow a concept called “Dev-Box” (aka volleyball test, shoulder check), where a developer pair invites the QA and a few team members to check the developed functionality on the developer’s machine before committing the code to the repository. So the developer has a chance to try the functionality on a real XR device before pushing the code. Showing XR functionality remotely — Some devices support Mixed Reality Capture natively, and some need external tools to stream the device experience remotely. We have also built a custom experience streamer to share the device experience. Developers sharing the XR streams on video calls Integrations with multiple tools and technologies — Since teams are distributed, it is very important to call out dependencies, and we need to make sure we are not doing duplicate work and duplicate spikes to uncover unknowns. We focus on defining contracts first and then sharing a mock implementation with the dependent team to resolve the dependency early on. Some of my time goes into pairing with different technology leads in defining project architecture. Pairing with UX Designer: User experience is a key thing in XR development, and today some of my time also went into discussing the user experience design for the upcoming product. We discussed how we can make the design completely hands-free, so the user can control the 3D app just by head movement and gaze. The distance between interactable items and the field of view of the device is a big consideration while designing the user experience. I am happy that I am able to share my experience with multiple XR devices with the UX designer of my team.\nDiscussing the concept of the field of view and user interactions on video conferencing was not easy.
We figured out an easy way: we created an FoV box with a gaze pointer at the center and grouped them, so moving the FoV box also moves the gaze pointer, giving behavior similar to how the XR device shows things within the FoV.\nPairing with UX designer and business analyst Pairing for Planning and Showcases: Being part of multiple projects, I need to be in sync with project planning and requirements across the projects, and I can be helpful to project iteration managers in defining dependencies and planning the sprint accordingly. So, some of my time goes into discussing the project plan and planning out the showcases.\n Get buy-in from the client — we have taken buy-in from the client to take the XR devices home for remote development. Internet availability at home — Every ThoughtWorker got a Rs. 5000/- reimbursement to make arrangements at home and can get things like internet routers, inverters, and headphones. It helped team members who don’t have good internet speed to prepare for remote work. ThoughtWorks is trying its best to enable all of us to work from home. MD Townhall Joined the Townhall from the MD’s office and Heads of Tech — an overview of how Covid19 is impacting us and industries, our plan to cope with uncertainties, and what is new coming up in India: new office spaces, projects, and some less painful actions. We discussed some interesting social initiatives going on in ThoughtWorks to support the people of India in this Covid19 situation. Working from home when kids are around My 2.5-year-old younger son Lovyansh has never seen me at home every day, and that too working for the whole day (sometimes night ;). He expects more attention from me after seeing me more. He just wants to press keys on the keypad and touch the laptop, but I just can’t allow that in this situation. He has started controlling his curiosity now; sometimes he is happy just watching what I am doing. I hope you have noticed the matching dress; he is learning colors these days :) Spent good time completing a few spikes on integrating a Unity application into an Android native application and the interoperability around these technologies.\nIntegrated the client’s enterprise Active Directory with the XR app that we are building for the client. This is a step toward making the XR devices enterprise-ready.\nSome of the spikes I have done are already published at Enterprise XR\nPick up Grocery and Essentials There are online grocery and essentials delivery vendors around my house, but none of them can deliver at home due to Covid19; however, they are allowed to deliver to the community center following all the hygiene and Govt protocols. The society staff allows people with masks and sanitized hands to pick up the delivery. The community center is a 10-minute walk within the same society area. I need to get out of my house for some time, depending on the delivery time. Today it was in the afternoon.\nEvening Energiser The Pune team organizes an evening energizer to have fun, stretch, and take a break together. Energizers are important to stay active and focused; getting together on a video call to share and show what is happening around us - kids, pets - is really connecting us better as a team.\nNew Joiner Induction One XR expert joined our team on 31st March, and it is not the best time to join or leave any organization. But at ThoughtWorks, we have taken this challenge, and now the new joiners get all the joining formalities done digitally. ThoughtWorks has planned 100% remote induction, immersion workshops and hands-on dev boot-camps for the new joiners.
I am his remote buddy, and we keep in touch remotely.\nEvening meetings The client team is primarily distributed across the US and China. We generally have some evening meetings with the client or with our ThoughtWorkers in other timezones. Earlier, these evening meetings were most problematic for me to attend: if I accepted a meeting at 5:30 pm and tried to leave between 6–8 pm, I had to pass through a lot of traffic; in that case I might opt to stay in the office till 8 pm, and that meant everything (dinner, evening walk, sleep time) got delayed.\nNow working from home gives the flexibility to attend evening calls, and even attending a late-evening one is fine. So we are now more accessible to other timezones.\nAlso, earlier it was a struggle to schedule a meeting in office hours, as we needed to check the availability of meeting rooms for the participants across the offices and book them. Most of the ThoughtWorks offices in India are preparing additional spaces, as existing ones are getting crowded; finding a meeting room is a nightmare until we get more spaces.\nKeep multiple video conferencing software ready on your PC. Different clients and users have different preferences. Get hands-on with these tools for sharing screen, content, and chat while on a video call.\nBreak Time is Family Time I noticed that the kids are just waiting for me to come out of the room, and they always have a lot of things to show and talk about. Each break time is now family time. I keep getting instructions from my wife to drink water and walk after some time :)\nWriting this article And of course, after all of the above, spending some time writing this article. I am planning to do some indoor walking after completing it. This covers my day.\nImpact of Full Remote Working In the last section, I explained what my day looks like; it is not much different from my day in the office, yet it is also not quite the same. Let’s now see how full remote working, or work from home, has impacted us, the projects, the businesses, and the organization. This is applicable to non-XR projects as well.\nSocial Connect Social distancing is a must to protect ourselves and others from Covid19 infections. As a result, social connect is reducing, and we do miss office working hours and the energetic office environment. We are trying to fill it with virtual meetings, but they have their own pros and cons. Global connect is improving though.\nFamily Connect Family connect is improving; we have more time for family, and also more time to work.\nAgility Now we have more flexible hours than earlier. The hours we used to spend preparing for and commuting to the office are now added flexible hours which we can plan keeping our priorities in mind. We now have more control over time, we can plan our meetings in extended hours, and we can spend quality time with family as well. We are becoming location-independent, and distributed agile practices are growing naturally. Global-first is coming naturally at the org level.\nBusiness Continuity Before going to 100% work from home, we rehearsed it for a couple of days, and we measured that there was almost no impact on the business deliveries. Now we are 100% working from home, and we have been delivering on all the commitments on time.
There is almost no reduction in productive hours, and people have also canceled planned leaves, which adds effective productive hours.\nOrganization At the organization level, there are multiple initiatives to be planned, in addition to moving operations, people support and people enablement remote. The future will soon be here with teleportation meetings, where we can be ported virtually to other locations.\n New security policies may be required The idea of having big office space may not be the future The location-independent global-first organization New leave policies Meeting infra for virtual meetings Remote working charter Cost optimizations Measures to face another economic slowdown. Oops, 10:30 pm; I will probably update this section later. Conclusion Covid19 has impacted the world from all sides and forced us to work from home. We are observing a lot of positive sides of working from home, without any major impact on business commitments. We hope we will get rid of Covid19 soon and wish a speedy recovery to the ecosystem.\nIt was originally published at XR Practices Medium Publication\n","id":143,"tag":"xr ; ar ; vr ; mr ; work from home ; covid19 ; thoughtworks ; pair-programming ; xp ; agile ; practice","title":"XR development from home","type":"post","url":"/post/xr-development-from-home/"},{"content":"In the previous article, we discussed implementing multi-factor authentication for an Android application, and in this article we will cover another enterprise aspect: interoperability.\nInteroperability is a key aspect when we build enterprise solutions using XR technologies. Enterprises have digital assets in the form of mobile apps, libraries, and system APIs, and they can’t just throw away these existing investments; in fact, the XR apps must be implemented in such a way that they are seamlessly interoperable with them. This article describes interoperability between XR apps and other native apps. It takes Unity as the XR app development platform and explains how a Unity-based app can be interoperable with existing Java-based Android libraries.\nRead the below article by Raju Kandaswamy; it describes the steps to integrate an XR app built using Unity into any Android app.\n Embedding Unity App inside Native Android application While unity provides android build out of the box, there are scenarios where we would like to do a part unity and part… medium.com\n The article below describes interoperability in the opposite direction of what Raju has described: we will integrate an existing Android library into a Unity-based XR app.\nBuild an Android Library In the last article, we implemented a user authenticator. Let’s now expose a simple user authenticator implementation as an Android library.\n Go to File \u0026gt; New \u0026gt; New Module \u0026gt; Select Android Library Implement a UserAuthenticator with simple authentication logic.
Each method accepts a context argument; it is not currently used, so you may ignore it for now.\n public class UserAuthenticator { private static UserAuthenticator instance = new UserAuthenticator(); private String loggedInUserName; private UserAuthenticator(){ } public static UserAuthenticator getInstance(){ return instance; } public String login(String name, Context context){ if(loggedInUserName == null || loggedInUserName.isEmpty()){ loggedInUserName = name; return name + \u0026#34; Logged-In successfully \u0026#34;; } return \u0026#34;A user is already logged in\u0026#34;; } public String logout(Context context){ if(loggedInUserName != null \u0026amp;\u0026amp; !loggedInUserName.isEmpty()){ loggedInUserName = \u0026#34;\u0026#34;; return \u0026#34;Logged-Out successfully \u0026#34;; } return \u0026#34;No user is logged in!\u0026#34;; } public String getLoggedInUser(Context context){ return loggedInUserName; } } Go to Build \u0026gt; Make Module “userauthenticator” — it will generate an aar file under build/outputs/aar. This is the library that can be integrated with Unity in C#. The next section describes how it can be used in Unity.\nBuild an XR app in Unity Let’s now build the XR app in Unity to access the API exposed by the Android library implemented in the last step. Unity refers to external libraries as plugins.\n Create a Unity project We have 3 APIs in the UserAuthenticator of the Android library; let\u0026rsquo;s build a 3D UI layout with buttons and text elements as follows. The source code is shared at the end of the article. Add a script ButtonController. This script shows the ways to access Java classes.\n `UnityEngine.AndroidJavaClass` - Loads a given class in Unity; using reflection, we may invoke any static method of the class. `UnityEngine.AndroidJavaObject` - Keeps the Java object and automatically disposes of it once it's done. Using reflection, we can invoke any method of the object. public class ButtonController : MonoBehaviour { private AndroidJavaObject _userAuthenticator; ... void Start() { AndroidJavaClass userAuthenticatorClass = new AndroidJavaClass(\u0026#34;com.tw.userauthenticator.UserAuthenticator\u0026#34;); _userAuthenticator = userAuthenticatorClass.CallStatic\u0026lt;AndroidJavaObject\u0026gt;(\u0026#34;getInstance\u0026#34;); loginButton.onClick.AddListener(OnLoginClicked); logoutButton.onClick.AddListener(OnLogoutClicked); getLoggedInUserButton.onClick.AddListener( OnGetLoggedUserClicked); } private void OnLoginClicked() { loginResponse.text = _userAuthenticator.Call\u0026lt;string\u0026gt;(\u0026#34;login\u0026#34;, inputFieldUserName.text, getUnityContext()); } private void OnLogoutClicked() { logoutResponse.text = _userAuthenticator.Call\u0026lt;string\u0026gt;(\u0026#34;logout\u0026#34;, getUnityContext()); } private void OnGetLoggedUserClicked() { getLoggerInUserResponse.text = _userAuthenticator.Call\u0026lt;string\u0026gt;(\u0026#34;getLoggedInUser\u0026#34;, getUnityContext()); } private AndroidJavaObject getUnityContext() { AndroidJavaClass unityClass = new AndroidJavaClass(\u0026#34;com.unity3d.player.UnityPlayer\u0026#34;); AndroidJavaObject unityActivity = unityClass.GetStatic\u0026lt;AndroidJavaObject\u0026gt;(\u0026#34;currentActivity\u0026#34;); return unityActivity; } }
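One caution worth noting: each AndroidJavaClass and AndroidJavaObject holds a JNI reference, so it is good hygiene to release short-lived wrappers deterministically rather than waiting for the garbage collector. A minimal sketch, assuming the same class names as above, wraps the temporary wrapper in a using block:\nprivate static AndroidJavaObject GetAuthenticator() { /* the temporary class wrapper is disposed as soon as the singleton is fetched */ using (AndroidJavaClass cls = new AndroidJavaClass(\u0026#34;com.tw.userauthenticator.UserAuthenticator\u0026#34;)) { return cls.CallStatic\u0026lt;AndroidJavaObject\u0026gt;(\u0026#34;getInstance\u0026#34;); } } The returned AndroidJavaObject should itself be disposed (for example in OnDestroy) when the script no longer needs it.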
Attach the ButtonController script to an empty game object. Link all the button, text, and input field components. Copy the Android library into the /Assets/Plugins/Android folder. Switch to Android as the platform if not already done — go to File \u0026gt; Build Settings \u0026gt; select Android as the platform \u0026gt; click on Switch Platform.\n Update the player settings — go to File \u0026gt; Build Settings \u0026gt; Player Settings and correct the package name and min API level in “Other Settings”\n Connect your Android device and click on Build and Run, or export the project and test it in Android Studio.\n Here is a demo. Type in the user name and click on login; it will call the Android API from the library and show the value returned by the library in the text next to the button.\nThe same way the other buttons work; test the logic you have written in the library. \nWith this, we are done with basic integration. The next section describes an approach with callback methods.\nIntegration using Callbacks Callbacks are important, as the Android system may not always return results synchronously; there are multiple events and listeners which work on callbacks.\n Let’s implement a callback interface in the android library. public interface IResponseCallBack { void OnSuccess(String result); void OnFailure(String errorMessage); } Update the UserAuthenticator implementation to call the OnSuccess and OnFailure callbacks. public class UserAuthenticator { ... public void login(String name, IResponseCallBack callBack, Context context){ if(loggedInUserName == null || loggedInUserName.isEmpty()){ loggedInUserName = name; callBack.OnSuccess (loggedInUserName + \u0026#34; Logged-In successfully \u0026#34;); } else { callBack.OnFailure(\u0026#34;A user is already logged in\u0026#34;); } } public void logout(IResponseCallBack callBack, Context context){ if(loggedInUserName != null \u0026amp;\u0026amp; !loggedInUserName.isEmpty()){ loggedInUserName = \u0026#34;\u0026#34;; callBack.OnSuccess (\u0026#34;Logged-Out successfully \u0026#34;); } else { callBack.OnFailure(\u0026#34;No user is logged in!\u0026#34;); } } ... } Build the Android library again and replace it in the Unity project under Assets/Plugins/Android Update the corresponding code in the ButtonController of Unity. Implement the Java interface using UnityEngine.AndroidJavaProxy. private AuthResponseCallback _loginCallback; private AuthResponseCallback _logoutCallback; void Start() { ...
_loginCallback = new AuthResponseCallback(loginResponse); _logoutCallback = new AuthResponseCallback(logoutResponse); } private class AuthResponseCallback : AndroidJavaProxy { private Text _responseText; public AuthResponseCallback(Text responseText) : base(\u0026#34;com.tw.userauthenticator.IResponseCallBack\u0026#34;) { _responseText = responseText; } void OnSuccess(string result) { _responseText.text = result; _responseText.color = Color.black; } void OnFailure(string errorMessage) { _responseText.text = errorMessage; _responseText.color = Color.red; } } private void OnLoginClicked() { _userAuthenticator.Call(\u0026#34;login\u0026#34;, _loginCallback, inputFieldUserName.text, getUnityContext()); } private void OnLogoutClicked() { _userAuthenticator.Call(\u0026#34;logout\u0026#34;, _logoutCallback, getUnityContext()); } Here is the demo showing the failure response in red color.\nThe response callback implementation sets the color on the text fields.\nThe code branch till this point is added here — Android Library, Unity App\n\nIn the next section, we will discuss more complex integration with native components and intents.\nSingle Sign-On Integration using device native WebView In this section, we will integrate the android\u0026rsquo;s web view with the Unity app, and also see invocation using Intents.\nRefactor Android Library Let’s refactor the Android app we implemented in the last article and use it as a library. We will also learn to pass arguments to an Intent and get the response back from it.\n Wrap the WebView invocation in an activity — it takes a URL and returnURI as input. It intercepts the returnURI, parses the “code” from it, and returns it in a ResultReceiver callback. public class WebViewActivity extends AppCompatActivity { ... @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); final String interceptScheme = getIntent().getStringExtra(INTENT_INPUT_INTERCEPT_SCHEME); final String url = getIntent().getStringExtra(INTENT_INPUT_URL); webView = new WebView(this); webView.setWebViewClient(new WebViewClient() { public boolean shouldOverrideUrlLoading(WebView view, String url) { Uri uri = Uri.parse(url); if (interceptScheme !=null \u0026amp;\u0026amp; interceptScheme.equalsIgnoreCase(uri.getScheme() +\u0026#34;://\u0026#34; +uri.getHost())) { String code = uri.getQueryParameter(\u0026#34;code\u0026#34;); ResultReceiver resultReceiver = getIntent().getParcelableExtra(INTENT_RESPONSE_CALLBACK); if(resultReceiver!=null) { Bundle bundle = new Bundle(); bundle.putString(\u0026#34;code\u0026#34;, code); resultReceiver.send(INTENT_RESULT_CODE, bundle); } finish(); return true; } view.loadUrl(url); return true; } }); WebSettings settings = webView.getSettings(); settings.setJavaScriptEnabled(true); setContentView(webView); webView.loadUrl(url); } ... } The UserAuthenticator\u0026rsquo;s login method invokes the web-view activity with an SSO URL and a callback. public void login(IResponseCallBack callBack, Context context){ if(accessToken == null || accessToken.isEmpty()){ Intent intent = new Intent(context, WebViewActivity.class); intent.putExtra(WebViewActivity.INTENT_INPUT_INTERCEPT_SCHEME, HttpAuth.REDIRECT_URI); intent.putExtra(WebViewActivity.INTENT_INPUT_URL, HttpAuth.AD_AUTH_URL); intent.putExtra(WebViewActivity.INTENT_RESPONSE_CALLBACK, new CustomReceiver(callBack)); context.startActivity(intent); } else { callBack.OnFailure(\u0026#34;A user is already signed in\u0026#34;); } } CustomReceiver.onReceiveResult gets an access_token using the code returned by the web-view activity.
The access_token is then saved in the UserAuthenticator. public class CustomReceiver extends ResultReceiver{ private IResponseCallBack callBack; CustomReceiver(IResponseCallBack responseCallBack){ super(null); callBack = responseCallBack; } @Override protected void onReceiveResult(int resultCode, Bundle resultData) { if(WebViewActivity.INTENT_RESULT_CODE == resultCode){ String code = resultData.getString(INTENT_RESULT_PARAM_NAME); AsyncTask\u0026lt;String, Void, String\u0026gt; getToken = new GetTokenAsyncTask(callBack); getToken.execute(code); } else { callBack.OnFailure(\u0026#34;Not a valid code\u0026#34;); } } } The access_token is used in getting the user name from the Azure Graph API. This completes the refactoring; the source code is attached at the end. public void getUser(IResponseCallBack callBack){ if(loggedInUserName == null || loggedInUserName.isEmpty()){ if(accessToken != null \u0026amp;\u0026amp; !accessToken.isEmpty()){ try { new GetProfileAsyncTask(callBack).execute(accessToken).get(); } catch (Exception e) { callBack.OnFailure(\u0026#34;Failure:\u0026#34; + e.getMessage()); } } else { callBack.OnFailure(\u0026#34;No active user!\u0026#34;); } } else { callBack.OnSuccess(loggedInUserName); } } All the HTTP communications are wrapped in a utility class HttpAuth; this completes the refactoring of the Android library. Since we have added a new activity, don\u0026rsquo;t forget to add it to the Android manifest.\nUnity App Invoking Native WebView in the Android Library This example now does SSO with the organization\u0026rsquo;s Active Directory (AD); the username and password will be accepted at the organization\u0026rsquo;s AD page.\n Let\u0026rsquo;s change the layout a bit. We don\u0026rsquo;t need an input field now. There is no major change in the ButtonController, except the signature changes for the library APIs. Android now handles the intent in a separate thread, and a callback in Unity may not access the game object elements, as they may be disabled when another activity is spawned on top. The status of Unity elements should be checked on the main thread. The callbacks now update a few helper variables, and the Update method updates the Unity elements on the main thread whenever the state changes; a minimal sketch of this relay pattern appears at the end of this section.\n private void Update() { /* set text and color for response labels */ } private class AuthResponseCallback : AndroidJavaProxy { private int index; public AuthResponseCallback(int index) : base(\u0026#34;com.tw.userauthenticator.IResponseCallBack\u0026#34;) { this.index = index; } void OnSuccess(string result) { labels[index] = result; colors[index] = Color.black; } void OnFailure(string errorMessage) { labels[index] = errorMessage; colors[index] = Color.red; } } Since the Android library now has an Activity, it requires the dependency “androidx.appcompat:appcompat:1.1.0” in the Gradle build. We need to use a custom Gradle template in Unity: go to File \u0026gt; Build Settings \u0026gt; Player Settings \u0026gt; Publishing Settings and choose Custom Gradle Template. It will generate a template file in Plugins. Add the dependency in the Gradle template, and make sure you don\u0026rsquo;t change **DEPS**. Build/Export the project and run it on Android. You have completed the native integration. Here is a demo of the integrated SSO in Unity. \nSource code can be found at the following git repos — Android Library, Unity App
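As promised above, here is a minimal, assumed-names sketch of relaying callback results to Unity\u0026rsquo;s main thread. Nothing here comes from the linked repos; a thread-safe queue is simply one way to implement the helper-variable pattern the text describes.\npublic class MainThreadRelay : MonoBehaviour { /* label to update, assigned in the Inspector */ public UnityEngine.UI.Text statusLabel; private readonly System.Collections.Concurrent.ConcurrentQueue\u0026lt;string\u0026gt; _pending = new System.Collections.Concurrent.ConcurrentQueue\u0026lt;string\u0026gt;(); /* called from the AndroidJavaProxy callback, which may run on an arbitrary thread */ public void Post(string message) { _pending.Enqueue(message); } /* Unity main thread: the only place we touch UI elements */ private void Update() { string message; while (_pending.TryDequeue(out message)) { statusLabel.text = message; } } }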
Conclusion This article describes the basics of interoperability between XR apps and Android native libraries. In the next article, we will see how to share the user context across XR apps.\nThis article was originally published on XR Practices Publication\n","id":144,"tag":"xr ; ar ; vr ; mr ; integration ; enterprise ; android ; unity ; tutorial ; technology","title":"Enterprise XR - Interoperability","type":"post","url":"/post/enterprise-xr-interoperability/"},{"content":"XR use cases are growing with advancements in devices, the internet and development technologies. There is an ever-growing demand to build enterprise use cases using ARVR technology. Most enterprise use cases eventually require integration with an enterprise ecosystem such as IAM (Identity and Access Management), ERP, CRM, and single sign-on with other systems.\nMost organizations protect digital assets with a single sign-on using Multi-Factor Authentication (MFA). MFA is generally a web-browser based authentication flow: the browser redirects to the tenant\u0026rsquo;s authentication page, where the user provides their credentials and then confirms another factor (PIN, OTP, SMS or mobile notifications); once it succeeds, the browser is redirected back to the protected resource. Users generally trust the organization\u0026rsquo;s AD page to provide credentials, so it is important to include this web-based flow in XR app development.\nThis series of posts covers the end-to-end flow for building an XR application integrated with the organization\u0026rsquo;s web-based authentication flow.\nSetup Active Directory This section describes the AD setup on Microsoft Azure. A similar setup is available on other active directories. Here are the steps you need to follow to configure AD for a mobile app.\n Open the Azure Portal with valid credentials; please sign up if you don\u0026rsquo;t have them, https://portal.azure.com (currently there is a 28-day free subscription to try out; go to “Add subscription” and choose Free Trial) Search for Azure Active Directory Go to App Registration \u0026gt; New Registration — Fill in the details for a new app. Note down the client id and other parameters. See Ref [4] in the references for more details.\nBuild an Android App integrated with the Organization\u0026rsquo;s AD There are multiple ways to integrate and build an Android app that can communicate with an organization\u0026rsquo;s AD for authentication. There are proprietary libraries (from Google, Microsoft, Facebook etc.) that can be included in the project so you can directly communicate with the respective AD, but that is helpful mainly when Basic Authentication is enabled. The steps below cover OAuth 2 style MFA with a web-browser based authentication flow; it doesn\u0026rsquo;t require any specific library dependency.\nAndroid Project Setup Build your first app if you haven\u0026rsquo;t built any yet — https://developer.android.com/training/basics/firstapp Understand the MainActivity and the Android Manifest file.\nOpen Web View In Android App Open the authorization URL in a web-view. For the above Azure AD configuration, the following is the authorization URL.
Ref [1]\nhttps://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id=86501f6d-0498-40ab-b3c9-81ab8ffef1ac\u0026amp;scope=openid\u0026amp;response_type=code Refer to the code below to open a web-view at the start of the main activity.\npublic class MainActivity extends AppCompatActivity { private static final String TAG = \u0026#34;MainActivity\u0026#34;; /* xrapp://auth-response */ private final String REDIRECT_URL_SCHEME = \u0026#34;xrapp\u0026#34;; private final String REDIRECT_URL_HOST = \u0026#34;auth-response\u0026#34;; private final String AD_AUTH_URL = \u0026#34;https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id=86501f6d-0498-40ab-b3c9-81ab8ffef1ac\u0026amp;scope=openid\u0026amp;response_type=code\u0026#34;; private WebView myWebView; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); myWebView = new WebView(this); WebSettings settings = myWebView.getSettings(); settings.setJavaScriptEnabled(true); settings.setDomStorageEnabled(true); setContentView(myWebView); myWebView.loadUrl(AD_AUTH_URL); } } The Android manifest looks like this; make sure you have the Internet permission.\n\u0026lt;manifest xmlns:android=\u0026#34;http://schemas.android.com/apk/res/android\u0026#34; package=\u0026#34;package.userauthenticator\u0026#34;\u0026gt; \u0026lt;uses-permission android:name=\u0026#34;android.permission.INTERNET\u0026#34; /\u0026gt; \u0026lt;application android:allowBackup=\u0026#34;true\u0026#34; android:icon=\u0026#34;@mipmap/ic_launcher\u0026#34; android:label=\u0026#34;@string/app_name\u0026#34; android:roundIcon=\u0026#34;@mipmap/ic_launcher_round\u0026#34; android:supportsRtl=\u0026#34;true\u0026#34; android:theme=\u0026#34;@style/AppTheme\u0026#34;\u0026gt; \u0026lt;activity android:name=\u0026#34;.MainActivity\u0026#34;\u0026gt; \u0026lt;intent-filter\u0026gt; \u0026lt;action android:name=\u0026#34;android.intent.action.MAIN\u0026#34; /\u0026gt; \u0026lt;category android:name=\u0026#34;android.intent.category.LAUNCHER\u0026#34; /\u0026gt; \u0026lt;/intent-filter\u0026gt; \u0026lt;/activity\u0026gt; \u0026lt;/application\u0026gt; \u0026lt;/manifest\u0026gt; Intercept the Redirect URL While configuring AD, we provided a redirect URL — xrapp://auth-response. Clicking on “Continue” in the last step will redirect to “xrapp://auth-response?code=\u0026lt;value of code\u0026gt;”\nThe Android app needs to intercept this URL to read the code parameter from it. The following code checks each URL load to see if the URL starts with xrapp://auth-response, and then fetches the code from it.\n... myWebView = new WebView(this); myWebView.setWebViewClient(new WebViewClient() { public boolean shouldOverrideUrlLoading(WebView view, String url) { Uri uri = Uri.parse(url); if (REDIRECT_URL_SCHEME.equalsIgnoreCase(uri.getScheme()) \u0026amp;\u0026amp; REDIRECT_URL_HOST.equalsIgnoreCase(uri.getHost())) { String code = uri.getQueryParameter(\u0026#34;code\u0026#34;); return true; } view.loadUrl(url); return true; } }); ... Fetch the access token In the last step, we read the authorization code; let\u0026rsquo;s now fetch an access token from AD. Here is the URL to fetch the token for the above AD setup: https://login.microsoftonline.com/common/oauth2/v2.0/token Ref[2]
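For reference, the form-encoded request that the performPostCall helper needs to produce looks roughly like this. This is a hedged sketch: \u0026lt;code\u0026gt; stands in for the value read from the redirect, and note that Azure AD v2 also expects a redirect_uri matching the one registered for the app.\nPOST /common/oauth2/v2.0/token HTTP/1.1\nHost: login.microsoftonline.com\nContent-Type: application/x-www-form-urlencoded\nclient_id=86501f6d-0498-40ab-b3c9-81ab8ffef1ac\u0026amp;grant_type=authorization_code\u0026amp;code=\u0026lt;code\u0026gt;\u0026amp;redirect_uri=xrapp%3A%2F%2Fauth-response\u0026amp;scope=openid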
Here is the GetTokenAsyncTask:\nprivate class GetTokenAsyncTask extends AsyncTask\u0026lt;String, Void, String\u0026gt; { @Override protected String doInBackground(String... strings) { String code = strings[0]; HashMap\u0026lt;String, String\u0026gt; postParams = new HashMap\u0026lt;\u0026gt;(); postParams.put(\u0026#34;client_id\u0026#34;, CLIENT_ID); postParams.put(\u0026#34;grant_type\u0026#34;, \u0026#34;authorization_code\u0026#34;); postParams.put(\u0026#34;code\u0026#34;, code); return performPostCall(AD_TOKEN_URL, postParams); } protected void onPostExecute(String response) { super.onPostExecute(response); String access_token = getValueFromJSONString(response, \u0026#34;access_token\u0026#34;); finish(); } } Fetch the User Profile Using the Access Token Once the access token is fetched, get the user profile data. Microsoft has Graph APIs to fetch the profile data, Ref [3], e.g. https://graph.microsoft.com/beta/me/profile/ It returns the whole JSON data for the profile.\n... String access_token = getValueFromJSONString(response, \u0026#34;access_token\u0026#34;); result = new GetProfileAsyncTask().execute(access_token).get(); ... The GetProfileAsyncTask fetches the profile data in the background.\nprivate class GetProfileAsyncTask extends AsyncTask\u0026lt;String, Void, String\u0026gt; { @Override protected String doInBackground(String... strings) { String access_token = strings[0]; return performGetCall(GET_PROFILE_URL, access_token); } } Show the Username from the profile Let\u0026rsquo;s show the user name from the profile data. For this, we may create another intent to display the text and call the intent after getting the user name from the response. Don’t forget to add the intent in the Android manifest file.\n... result = new GetProfileAsyncTask().execute(access_token).get(); String displayName = getValueFromJSONString(result, \u0026#34;displayName\u0026#34;); displayName = \u0026#34;Welcome \u0026#34; + displayName + \u0026#34;!\u0026#34;; Intent intent = new Intent(MainActivity.this, DisplayResponseActivity.class); intent.putExtra(\u0026#34;response\u0026#34;, displayName); startActivity(intent); finish(); ... \nConclusion This article describes the end-to-end steps to configure an app on Active Directory and build an integrated Android app without any proprietary dependencies. In the next few articles, we will discuss how to handle interoperability and build an integrated ARVR app. Next \u0026raquo; Enterprise XR — Interoperability Source code can be found at https://github.com/thinkuldeep/xr-user-authentication/tree/part1\nThis article was originally published on XR Practices Publication\nReferences: OAuth2 Authorization Endpoint, OAuth2 Token Endpoint, Using Microsoft Graph APIs, https://docs.microsoft.com/en-us/azure/app-service/configure-authentication-provider-aad#register ","id":145,"tag":"xr ; ar ; vr ; mr ; cloud ; integration ; enterprise ; android ; mfa ; tutorial ; technology","title":"Enterprise XR - Multi-Factor Authentication","type":"post","url":"/post/enterprise-xr-multi-factor-authentication/"},{"content":"The new date is still to be announced.\nGet ready: the Great International Developer Summit has been re-scheduled and will now be organized as a virtual summit.\nCome and talk about life beyond keyboards and screens; the spatial web is taking over, get ready\u0026hellip;\nThe talk is about the Spatial Web, an umbrella term for the next-generation web technologies that will transform every aspect of our life.
It will connect humans and machines seamlessly such that the difference between physical and virtual will disappear.\nIn this session, we will discuss how user interactions have evolved with time, and why it is time to go beyond keyboards and 2D screens and interact smartly with upcoming smarter devices such as IoT wearables, NLP, CV and ARVR.\nHere are the key highlights of this talk:\n Evolutions - Web 1.0 to Spatial Web Thinking beyond keyboard and 2D screen - Technologies, devices, and the use cases Facts and Figures - where are we heading? Impact on the life of software development and developers - expected future skills Stay tuned for more details\u0026hellip;\n","id":146,"tag":"xr ; spatialweb ; developer-summit ; talk ; event ; speaker","title":"Speaker - Spatial Web - Life beyond Keyboard and Screen","type":"event","url":"/event/gids_2020_blr_spatialweb/"},{"content":"The XR Day Organized XR Days at ThoughtWorks, India in different cities and conducted awareness sessions, ideation workshops and more.\nThe talks are about: How have user interactions evolved with time? Why is it important to think beyond the traditional keyboard and screen? Where is the industry heading? How will it impact us? Detailed Slides Think beyond keyboard and screen xr day 2019 from Kuldeep Singh Event Summary It was well received by the participants. Here is the event summary\nCoimbatore Pune ","id":147,"tag":"xr ; ar ; vr ; mr ; fundamentals ; thoughtworks ; event ; speaker","title":"Speaker - XR Day - ThoughtWorks India","type":"event","url":"/event/thougthworks_xrday/"},{"content":"Everyone in India is talking about Jeff Bezos, the richest man in the world, who visited India recently. People are talking about the investments he is making in India, talking about his prediction that India\u0026rsquo;s \u0026ldquo;21st Century is going to be the Indian century\u0026rdquo;, and discussing the impact on small businesses. We have seen disruptions led by Amazon in multiple sectors; now people fear that whatever Amazon touches becomes Amazon\u0026rsquo;s. This fear may be true, as Amazon always stays ahead of the curve. How could Amazon always get it right? What makes Amazon or Jeff Bezos? What is the success mantra of a company like Amazon?\nThe book \u0026ldquo;Success Secrets of Amazon by Steve Anderson\u0026rdquo; talks about 14 principles that led to Amazon\u0026rsquo;s success. Not all the steps Amazon takes are successful; in fact, Amazon doesn\u0026rsquo;t take a step but takes a journey, which means continuous effort to make the way ahead.\nHere are my top 5 takeaways from the book:\nReturn on Risk (RoR) Evaluate RoR instead of RoI; invest in risks. The process of evaluating return on a risk improves the risk-taking capability, which is a key factor in the success of a company like Amazon.\nNot doing a thing is equally risky as doing it. So it is always better to act and know if it works.\nExperiment more and celebrate the failures Celebrate your failures and come out of them; make them successful failures. Failure is the bigger teacher of life; learn from it. Plan for failures, and make the mindset shift. Amazon failed at first with ideas like Amazon Auctions (an eBay competitor), zShops and the Fire Phone, but all these failures turned into businesses of billions of $ now - Amazon shopping, Alexa and more.\nBet small but on the big idea, and follow closely and progressively. Free shipping, AWS, and Kindle are examples of the same.
They started small but with a big aim in mind.\nDon\u0026rsquo;t be afraid of failures, but focus on making them successful failures. Have a dedicated investment in experimentation, and make it part of the culture.\nSpeed up decision making by decentralizing the authority, especially for decisions which can be changed or reversed. It is worth trying them instead of delaying them for multi-layer approvals, and if they don\u0026rsquo;t work, you have still learned from them. Trust your guts and keep trying.\nMove from customer-focused to customer-obsessed business Amazon is not a customer-focused business; it is actually a customer-obsessed business. Understand your customer well, and stay ahead of what they need. Use technology to solve traditional barriers/ways of working, and improve efficiency and effectiveness. Reach beyond the expected.\nThink long-term and own the future To stay ahead of the curve, one needs to think long-term and build the business for future stakeholders. Align your short-term goals with the long-term plan. People will connect with your long-term goal if you promote ownership with them. Promote ownership as a culture, sharing the long-term benefits with people; make them decision owners, and let them be open and collaborative; after all, they are the future business owners.\nSet the bar high Save your beliefs and organization culture, no matter what; keep the high standards and don\u0026rsquo;t dilute the culture in the name of scale. Organic growth is not always viable, thus we need to go and include people from outside, so hire the best or develop the best. Measure what makes sense; financial data is not the only measurement of success.\nConclusion Here I talked about my top 5 takeaways from Amazon\u0026rsquo;s success stories. Read the book and Bezos's letters to understand more about the success mantra of Amazon.\nIt was originally published on My LinkedIn Profile\n","id":148,"tag":"general ; selfhelp ; success ; book-review ; takeaways ; motivational","title":"Success Mantra","type":"post","url":"/post/success_mantra_amazon/"},{"content":"Travel and transportation have faced and continue to face disruptions. Many of the traditional ways are no longer the choice. Earlier our parents used to warn us against taking rides with strangers, but now Uber/Ola/Lyft/Didi, etc. are the preferred ways. Owning a car may be an absurd idea now. We are now dreaming of autonomous vehicles, flying cars, Hyperloop, and space travel in the near future. Do read how user interactions have evolved with time in my earlier post.\nFaster travel vs transportation without travel Disruption from faster travel is not the only thing; what if we don’t need to travel at all, and instead a virtual model of us gets ported to the destination and serves the purpose of travel? Possibilities of virtual-model transportation are not too far, Ref: Microsoft Holoportation, Ref: Teleportation Conferences, so the travel industries will not just get disrupted by faster, convenient and eco-friendly modes of travel but also by the No-Travel or Virtual-Travel mode. It will be just like what (electronic/mobile)-shopping did to window-shopping. However, window-shopping still co-exists, and a lot of investment is going into the user experience and solutions that make a smoother transition across e/m/w-shopping. In this post, we will discuss some ARVR use-cases for the travel and transportation industry which may exist in the coming future as disruptors or enablers.
But before that, let’s take a look at the tools available in this space.\nTools for Extending the Reality: Heads Up Display (HUD) It extends information in front of drivers/workers as per the context, so that they can make quick decisions and stay aware of the situation. Other vehicle sensors (GPS, Camera, Speed, IMU) help in building the context environment. The HUD has been in the industry for long, and due to this presence it has the potential to augment more information. HUD — Photo Credit: Amazon.in\nHead-Mounted Displays and Smart Glasses Smart glasses and head-mounted displays can help workers while their hands are busy performing the job. Smart glasses such as Google Glass and those by Vuzix, Epson, etc. are good for displaying multimedia information right in front of the user’s eye; however, head-mounted devices like Microsoft HoloLens, Magic Leap and Lenovo ThinkReality A6 are more capable devices: they detect and interact with the environment and provide an immersive experience to the users. Normal-looking glasses are being built, and soon we will have smart glasses in the normal form factor. Photo Credit: https://www.lenovo.com/ww/en/solutions/thinkreality\nVR Headsets A virtual reality headset creates a virtual environment and provides an immersive experience to the end-users in that environment. It could be a training environment, a product demonstration or a simulation of a real environment.\nAR Capable Smartphone A smartphone will still play a key role in the industry due to its omnipresence; AR applications on phones are the cheapest and easiest way to adopt extended reality use cases. Applications built on WebXR are also well demonstrable on the phone.\nLook at more tools, such as connected wearables and holographic projections, in the attached slides.\n Think beyond keyboard and screen - dev fest 2019 Use-cases using Extended Reality Tech Entertainment — Make Journey not Destination Currently, most of the travel businesses are focusing on reaching a destination faster and more efficiently, but looking at the future disruptions, the focus will shift towards making the journey interesting.\nA lot of data is collected and used while a vehicle is moving from pickup to drop locations; this data can be used in making the journey interesting with\nARVR technology. Sharable cloud AR anchors may be tagged with GPS and augmented in the environment when the user watches using AR-enabled devices. Based on positioning anchors, multiple users may also collaborate and participate in social AR networks, where user-specified AR anchors may be managed. So businesses around AR anchor storage would have great potential in the future; they can be monetized with ads etc. PC: https://www.vrnerds.de/google-i-o-2018-cloud-anchors-und-ar-maps/\nFM radio or video streaming of a favorite web series will not be the only options.\nFuture vehicles may be seen as embedded VR studios, where travel from your home to the airport may be experienced as a theme-park ride, a journey by helicopter, a journey on Mars or any journey you may think of. One such concept is Holoride PC: Holoride\nLocate and Map We are talking about self-driving cars and autonomous vehicles, which use techniques like Simultaneous Localization and Mapping (SLAM) to build a virtual map and locate objects in the environment.
A moving vehicle also generates and consumes a lot of data, which can be augmented on the environment just like AR Maps from Google, a virtual positioning system on GPS.\nAR Maps can be used for various AR localization use-cases — exactly locating the passenger (driver app), planning/auto-adjusting routes, locating my car in AR (passenger app), etc. In ride-sharing platforms such as Uber, it is not always easy to find your car in a crowded space; it would be great to have a feature where we can see our car highlighted in an augmented way, and the driver sees the passenger highlighted (maybe bigger than usual). PC: https://www.businessinsider.in/tech\nThis can be extended when a passenger is given an option to experience the car, where (s)he can see the 3D model and information of the car \u0026amp; the driver, and even try the car by getting inside it in ARVR before it comes to pick up.\nMore and more AR services for maps and location can be built when we integrate them with other data, e.g. finding the nearest charging stations, service stations, fuel stations, etc. and visualizing them on a Visual Positioning System.\nAugmented Safety Notifications Using ARVR tech, we may locate/augment information for signboards, directions and wayfinders, and also instruct users, drivers, and passengers about safety on their AR-enabled devices such as in-car HUDs or smart glasses. Drivers can see red lights and traffic situations in the HUDs.\nWith advanced Computer Vision and AI/ML techniques, the user may be instructed about potential emergency situations ahead based on the speed, temperature, weather, location, traffic, battery status, etc.\nSee what Nissan unveiled — I2V (Invisible to Visible), uncovering the blind spots and making the drive safer and smoother.\n Nissan unveils Invisible-to-Visible technology concept at CES YOKOHAMA, Japan - At the upcoming CES trade show, Nissan will unveil its future vision for a vehicle that helps drivers… global.nissannews.com\n Service and Maintenance There are some great uses of ARVR in service and maintenance assistance, especially when the worker’s hands are occupied and he/she needs assistance. The travel and transportation industry may also use the technology for the below use cases.\nA stepwise task workflow could be implemented from task assignment to task completion, where a worker gets a task to perform in the headset with detailed steps in the form of text, audio, video, animations or augmented anchors/markers on real-world objects. It not only guides the worker right in the working context but may also record the worker’s progress. Task completion can be plugged with proof of completion in the form of a checklist and a snapshot. It improves both the efficiency and the accuracy of the process. PC: https://sdtimes.com/softwaredev/the-reality-of-augmented-reality/\nIf they still need assistance, they may get remote assistance from the experts and share what they see with the expert (via front camera feed). Experts can assist the worker with voice, and can also draw/annotate on the worker's environment.
A worker may browse detailed annotated augmented manuals right in front of their eyes by scanning the objects/codes or by talking to the devices.\n Hyundai makes owner\u0026rsquo;s manuals more interesting with augmented reality Augmented reality showrooms are one thing, but Hyundai using that tech to make learning about your new car more… www.engadget.com\n Augmented manuals may provide internal details of the object without getting inside, and may also guide about possible defects and anomalies just by looking at the objects.\nProduct demonstrations and setup may also utilize the technology at scale. Watch Hyundai\u0026rsquo;s AR sales tool demonstrating the features.\nSimulation-based training - Driver training to handle different driving situations in a simulated VR environment may improve safety and efficiency.\nVR Control Centers Control centers gather a lot of data from the fleet they operate; they need to visualize the data in multiple forms to get different aspects of the situation. There are multiple people with different roles interested in different types of data. Multi-View Virtual Desktops — A VR data visualizer could be a great help to create virtual views based on the dynamic need at the control center, without the need for extra hardware. It would help in taking decisions faster and getting more clarity about the situation.\nPrivate Desktop — A virtual visualizer also addresses privacy concerns, as the data is visible only to the user who is wearing the headset.\nConclusion This post covers the use-cases of Extended Reality in the travel and transportation industry. XR (AR-VR-MR) will be one of the key disruptors as well as enablers for the future businesses of the industry.\nThis article is originally published on XR Practices Publication\n","id":149,"tag":"xr ; ar ; vr ; mr ; travel ; transportation ; usecases ; enterprise ; technology","title":"Augmenting the Travel Reality","type":"post","url":"/post/augmenting_the_travel_reality/"},{"content":"This article is in continuation of Static Code Analysis for Unity3D — Part 1, where we talked about setting up a local sonar server, the sonar scanner and running the analysis for a Unity3D project.\nIn this article, we will discuss setting up Static Code Analysis with SonarQube in the IDE — Rider. We are using Rider as the “External Script Editor” in Unity. Configure Rider here in Unity\u0026gt; Preferences \u0026gt; External Tools \u0026gt; External Script Editor.\nInstall SonarQube Plugin For Rider Go to JetBrains Rider \u0026gt; Preferences \u0026gt; Plugins \u0026gt; Marketplace \u0026gt; Search SonarQube You will find the “SonarQube Community Plugin” \u0026gt; Click Install. It will ask you to restart the IDE to complete the installation. On a successful restart you should see it installed. Configure SonarQube Plugin We need to connect to the sonar server to perform local analysis. Here are the steps to follow.\n Go to JetBrains Rider \u0026gt; Preferences \u0026gt; SonarQube — Click Add and provide details of the local sonar server. Refer to the previous post for more details. Click on the + icon for SonarQube resources\n Click Download Resources and select the Sonar project that we have added — UnityFirst For local analysis, we need to provide a local script that will be called when ‘Inspect Code’ is performed in the IDE. So create a script as follows, covering all 3 steps that we discussed in the last post.\n Add Local analysis script — give it a name, the path of the script and an output file.
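For reference, a minimal local analysis script (macOS, assuming the install paths, project key and solution from Part 1; adjust them for your setup) could chain the same three steps from the last post:
mono SONAR_SCANNER_INSTALLATION/sonarscannermsbuild/SonarScanner.MSBuild.exe begin /k:\u0026#34;UnityFirst\u0026#34; /d:sonar.host.url=\u0026#34;http://localhost:9000\u0026#34;
msbuild \u0026lt;path to solution.sln\u0026gt; /t:Rebuild
mono SONAR_SCANNER_INSTALLATION/sonarscannermsbuild/SonarScanner.MSBuild.exe end
The output file configured above is where the plugin picks up the analysis results to display them in the IDE.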
We are now ready to analyze the code.\nStatic Code Analysis in Rider Open the Unity project in Rider. We have ButtonController.cs created in the last post; let’s analyze that.\n Go to JetBrains Rider \u0026gt; Code \u0026gt; Run Inspection By Name (Option+Command+Shift+I) \u0026gt; Search for Sonar\n Select “SonarQube (new Issues)” \u0026gt; Select Inspection Scope as Whole Project It will run the code analysis, just like we did from the command line in the last post. To analyze the issues inline in the files, go to JetBrains Rider \u0026gt; Code \u0026gt; Run Inspection By Name (Option+Command+Shift+I) \u0026gt; Select “SonarQube” \u0026gt; Select Inspection Scope as Whole Project Issues are available in the Sonar inspection windows and inline. We can further configure rules on sonar to include and exclude files and much more. Each analysis also gets pushed to the local sonar server Conclusion In this post, we have learned how to use SonarQube on a Unity project in the IDE.\nThis article was originally published on XR Practices Publication\n","id":150,"tag":"xr ; unity ; sonarqube ; static code analysis ; ide ; tutorial ; practice ; technology","title":"Static Code Analysis for Unity3D — Part 2","type":"post","url":"/post/static_code_analysis_unity_2/"},{"content":"Static code analysis is a standard practice in software development. There are code scanner tools which scan the code to find vulnerabilities, and some nice tools for visualizing and managing code quality. One of the most used tools is SonarQube, which supports 25+ languages and flexible configuration of rules.\nThere are not enough resources talking about static code analysis for Unity3D. This post covers the steps to configure SonarQube and use it for scanning Unity projects.\nSonarQube Server Setup SonarQube requires a server setup where it manages code quality analysis, rule configuration and extensions. Follow the steps below to install and configure Sonar for local use. Make sure you have Java 8+ installed on your PC.\n Download SonarQube — https://www.sonarqube.org/downloads/ - Download Community Edition Unpack the zip sonarqube-8.0.zip as Directory SONAR_INSTALLATION/sonarqube OS-specific installations are available in the bin directory For Unix-based OS, provide execute permission: chmod +x SONAR_INSTALLATION/sonarqube/bin/\u0026lt;os-specific-folder\u0026gt; Start the Sonar Server — eg. SONAR_INSTALLATION/sonarqube/bin/macosx-universal-64/sonar.sh console The Sonar Server is ready to be used at http://localhost:9000 with credentials admin/admin Set up your first project on SonarQube — Click Create (+) on the top right It will ask you for a token which may be used to securely run the analysis on the sonar server. For now, leave it at this step; we will use the user credentials admin/admin for simplicity. The project is created with default rule sets and quality gates. Remember the project key. Sonar Scanner Setup The Sonar scanner is needed to statically analyze the code against the rules on the sonar server and then push the reports to the server.
Follow the steps below to set up the Sonar Scanner Ref : https://docs.sonarqube.org/latest/analysis/scan/sonarscanner-for-msbuild/\n Download Sonar Scanner — https://github.com/SonarSource/sonar-scanner-msbuild/releases/download/4.7.1.2311/sonar-scanner-msbuild-4.7.1.2311-net46.zip\n Unpack the zip as Directory SONAR_SCANNER_INSTALLATION/sonarscannermsbuild\n For UNIX-based OS give execute permissions —\nchmod +x SONAR_SCANNER_INSTALLATION/sonarscannermsbuild/sonar-scanner-\u0026lt;version\u0026gt;/bin/* The Sonar setup is ready; let\u0026rsquo;s analyze a Unity project.\n Analyze Unity Project Create a Unity project. Below is a simple Unity project with a button which toggles its color on every click. Let’s statically analyze this project. Follow the steps below:\n Go to the project root — start pre-processing with the Sonar Scanner — on Windows we can directly run SonarScanner.MSBuild.exe begin /k:\u0026quot;project-key\u0026quot; (it comes with the Sonar Scanner), but on Mac we need to run it with mono as follows.\nmono /Applications/sonarscannermsbuild/SonarScanner.MSBuild.exe begin /k:”UnityFirst” /d:sonar.host.url=”http://localhost:9000\u0026#34; Rebuild Project — MSBuild.exe \u0026lt;path to solution.sln\u0026gt; /t:Rebuild\nOn mac : Post-processing — push the report to the Sonar Server Windows : SonarScanner.MSBuild.exe end Mac: SONAR_SCANNER_INSTALLATION/sonarscannermsbuild/SonarScanner.MSBuild.exe end Analyze the code on the Sonar Server — http://localhost:9000/dashboard?id=UnityFirst Dashboard Analyse the issues\n Conclusion In this post, we have learned how to set up the Sonar Server and Sonar Scanner and use them for Unity projects, including usage on Mac. The next post talks about setting it up in the IDE to perform inline code analysis\nThis article was originally published on XR Practices Publication\n","id":151,"tag":"xr ; unity ; sonarqube ; static code analysis ; tutorial ; practice ; technology","title":"Static Code Analysis for Unity3D — Part 1","type":"post","url":"/post/static_code_analysis_unity_1/"},{"content":"Presented At : ThoughtWorks\u0026rsquo; AwayDay 2019, where the entire ThoughtWorks India travelled to celebrate togetherness.\nMy colleague Raju Kandaswamy and I presented this talk in the breakout sessions in the category Seismic Shifts. Around 2000 ThoughtWorkers got together and enjoyed a fusion of technology showcases, cultural events and more. Glimpse here\n The talk is about What is SLAM (Simultaneous Localization And Mapping)? The use cases of SLAM? How does it work? ARVR and SLAM? Video here Slides here Slam aware applications from Kuldeep Singh ","id":152,"tag":"xr ; slam ; event ; thoughtworks ; speaker","title":"Speaker - AwayDay 2019 - SLAM Aware Applications","type":"event","url":"/event/thougthworks_awayday_2019_slam_aware_apps/"},{"content":"The inception exercise is very common in software development, where we gather the business requirements and define the delivery plan for the MVP (Minimum Viable Product) on a shared understanding. There is a set of tools and practices available to define the MVP requirements and its delivery plan. Famous tools are Lean Value Tree, Business Model Canvas, Plane Map, Elevator Pitch, Futurespective, Empathy Maps, Value Proposition Canvas, User Journey Map, Gamestorming, Process Diagram, MoSCoW, Communication Plan, Project Plan etc.\nWe have recently concluded an Inception Workshop at ThoughtWorks, by Akshay Dhawle, Smita Bhat and Pramod Shaik. It was a 3-day workshop, full of exposure to different tools, exercises and role-plays.
Thanks to the mentors and participants for making it so smooth and fun. We learned about a lot of different aspects of inception and its situations. The inception may go wrong if we don\u0026rsquo;t care about the intricacies, no matter how equipped you or your team is.\nHere are my key takeaways from the workshop :\n Choose the tool wisely; the tool should simplify the process and help us uncover the hidden aspects. A wrong tool selection may take the exercise completely off track. The inception team must be comfortable with the selected tool and practices; it is better to do a rehearsal using the selected tool or practice. It is advisable that the inception team members work together for at least a week or so to understand each other and build synergies among themselves. Always go prepared with a detailed inception plan and objectives for each day, and do a rehearsal with the inception team before facing the client. Do a retrospective even if you think that things are going well, to understand the team\u0026rsquo;s performance and course-correct in time. Have a backup plan for almost everything, from people/resource unavailability to the selection of the tools. Be flexible, and get ready to adapt to new plans. Define drivers for the different exercises, as well as the overall inception — who owns the facilitation. Define the roles and expectations upfront; however, as stated earlier, always have a backup. If we get stuck with a tool/situation, the driver needs to push the team forward. Inception is a collaborative exercise; the decisions should be taken on a common understanding with the client\u0026rsquo;s inception team, and there should not be any last-minute surprise for the stakeholders. Utilize the client\u0026rsquo;s time optimally, and try to finish in the agreed time. Do homework, and go with your well-thought-out understanding of the problem. Have a domain SME in the team. Have a diverse team for inception. Parallelise the inception activities whenever required. Get the final presentation deck reviewed with the client stakeholders before presenting it to a larger audience. Record meeting notes and build/update the agreed artifacts after the sessions. Don\u0026rsquo;t delay building showcases/artifacts till the end. Divide and conquer. Share the notes with all the involved stakeholders. Have review checkpoints with the client at different stages of the inception. While presenting the outcome to the client, it is recommended to present the sections which you were part of; otherwise, get a knowledge dump from the authors before the presentation. Don\u0026rsquo;t over-commit to the client; be honest and transparent. Clearly call out pre- and post-inception activities. Don\u0026rsquo;t ignore any inputs from the team; keep them in the parking lot and review it often. Timebox the exercises; don\u0026rsquo;t stick to one exercise. If one exercise does not complete or give optimal output in a reasonable time, then have a plan for alternatives. Conclusion Inception is planned to bring stakeholders to the same understanding of the product and delivery road map. The focus should be on the inception goal rather than on the tool selection. The tool is just a medium; use the simplest tool in which the team is comfortable.
Facilitate the inception as a collaborative exercise, where all the participants can comfortably participate and agree.\nThis article was originally published on My LinkedIn Profile\n","id":153,"tag":"thoughtworks ; inception ; workshop ; takeaways ; technology","title":"Inception Workshop | Key Takeaways","type":"post","url":"/post/inception_workshop_key_takaways/"},{"content":"Presented At : Google DevFest 2019, by Google Developers Group, GDG Cloud Delhi.\nThe talk is about: How have user interactions evolved with time? Why is it important to think beyond the traditional keyboard and screen? Where is the industry heading? How will it impact us? Intro here LinkedIn\nWe have built office-explorer #DevFest2019 pic.twitter.com/0yRz7Euo8J\n\u0026mdash; Kuldeep Singh (@thinkuldeep) September 10, 2019 Detailed Slides Think beyond keyboard and screen - dev fest 2019 from Kuldeep Singh Feedback It was well received by the participants.\nWalls have ears Doors have eyes 😂😂 Amazing talk by @thinkuldeep on AR/VR#DevFest19 #DevFest\n\u0026mdash; Lakshya Bansal (@lakshya__bansal) September 29, 2019 ","id":154,"tag":"xr ; fundamentals ; devfest ; google ; general ; talk ; event ; speaker","title":"Speaker - Google Devfest 2019 - Think Beyond Keyboard and Screen","type":"event","url":"/event/google_devfest_2019_think_beyond_keyboard_and_screen/"},{"content":"Technology defines our interactions with the ambiance, and the way we want to interact with the ambiance further defines the technology. The evolution of technology and user interactions tightly depend on each other.\n \u0026ldquo;Walls have ears!\u0026rdquo; is very much true today, and not just that: now the walls can see you, feel your presence and talk to you.\n In this post, we will discuss how user interactions evolved with time and what is next!\nEvolution of Technology and User Interactions Technology evolves to provide better user interactions, and users always expect more from technology. Technology has evolved by thinking about user interactions beyond the boundaries of traditional technologies. The speed of technology evolution is increasing day by day, and so are the user interactions. See the chart below. With all the technological advancements, user interactions have evolved from almost no interactions to immersive interactions. The following are the different stages of this evolution.\nNo Interactions We started our journey with almost no interactions in the Batch Age, where software programs were fed as punch cards into the computer (such as UNIVAC, IBM029), and you got to know the results after a couple of hours or days. It continued till the early 60s.\nAttentive Interactions With a few more enhancements in technology, the display screen and keyboard started to provide a better user experience; in this Command Line Interface (CLI) Age, users' attention was sought whenever an alert/information was to be displayed or input needed to be taken for the next set of processing. Apple and Xerox invested heavily; most of this era\u0026rsquo;s products were not commercially successful, but they created a path for the next technology evolutions.
Microsoft\u0026rsquo;s DOS became a great software base in the CLI age.\nWYSIWYG Interactions The Graphical User Interface (GUI) Age started in the early 80s, when technology was focusing on window-based interfaces and form/editor-based UI, which is called What You See Is What You Get (WYSIWYG) interactions.\nXerox and Apple made a few costly attempts such as the Xerox Star and Apple Lisa. GUI was popularized by Microsoft and Apple in the mid-80s with Windows 1.0 and Macintosh, and the race continued. Software and hardware technologies improved with time, and the GUI age continued across multiple screen sizes.\nSeamless transition After nearly 1.5 decades of technology enhancements in the GUI age, the mobile phone and laptop started taking shape. The Internet Age started, and user interaction started shifting from desktop-based GUI to web or mobile app based GUI, especially after AJAX, which promised desktop-like partial GUI rendering.\nPC : http://www.mynseriesblog.com/category/nseries-site-news/\nA need for seamless transitions of GUI across devices was the increasing focus of this era. The mobile device was getting powerful and became a commodity, so mobile-first GUI became popular. Most of the devices were relying on a keypad and display screen. Nokia and Motorola were the pioneers in cell phones. The QWERTY keypad was also made quite popular by BlackBerry. Laptops and tablets with external keyboards almost replaced the PC market.\nTouch and Feel Interactions The touch-screen interface was popularized by the Apple iPhone in 2007; Samsung followed, and the race of touch and feel continued with many players.\nPC: https://www.pastemagazine.com/tech/gadgets/the-20-best-gadgets-of-the-decade-2000-2009/?p=2#7-iphone-2007-\nThe winner was again \u0026ldquo;Ease of doing\u0026rdquo;: touch-screen interactions surpassed physical-keypad phones, while Nokia and BlackBerry were still relying on the haptic feel of a physical button. The Internet Age continues with better internet, better devices, and better interactions with the help of technology like the Internet of Things and Cloud. The phone became a smartphone in this era.\nThe Internet Age was transitioning into the Interactions Age, with devices getting closer to humans, such that they can wear them and feel them. The smartphone became the enabler of other interactions with wearable and other sensing devices. The smartphone serves as the processing unit, controller and internet gateway for the wearable devices. Lightweight communication protocols (MQTT, BLE, etc.) were invented to communicate with these IoT/wearable devices. We have seen some successful smart glass and smartwatch products.\nIn this Interactions Age, we trust the technology and use it so heavily that all our interactions with devices get recorded, and data is increasing multifold. Data will become the next power. In this age, traveling with strangers is no more a concern; we trust Uber and OLA more than before, as we know the data is being recorded somewhere, and we feel technology is inducing transparency in the system. We rely more on text than on phone calls; social media is getting powerful due to this.\nImmersive Interactions The Interactions Age will continue, and interactions are going to be immersive, where we interact with the virtual elements of the ambiance assuming they are real; we feel immersed in the virtual environment.\n Conversational UI - Conversational interfaces, where we talk/text to virtual bots, and they respond just like a real human.
We don\u0026rsquo;t need to go to a website and find the services we need on a bulky portal or app; rather, we just send text or voice to a bot, and the bot responds with the appropriate result. Devices like Google Home and Amazon Echo are getting popular with just a voice interface. IoT empowers CUI to build a lot of enterprise use cases.\n Virtual Reality - VR interactions happen in a completely computer-generated environment, which is a real-like environment. VR is getting popular in the education and training simulation domain. Travel and real-estate are also utilizing it for sales and marketing, where users can try the services in the virtual environment before buying them. VR therapy is getting popular in the healthcare sector. Google, Samsung, Sony, HTC and Facebook are building heavily on it. Augmented Reality (AR) - Augmented Reality augments information on the real world. The technology can assist the worker by providing needed information while his/her hands are busy; a worker can talk to a head-mounted device/smart glass like Google Glass and get assistance, which is called Assisted Reality. When an AR device generates a computer-generated environment on top of the real world, and lets us interact with these computer-generated elements in the context of the real world, it is called Mixed Reality (MR). Microsoft HoloLens and Magic Leap are some popular MR devices, but these are quite costly. With increasing AR support on mobile phones, AR mobile apps are getting popular; they will outpace normal app development. Future mobile apps will be AR enabled. Google is building VPS (an AI-powered Visual Positioning System) to empower its Google Maps with Augmented Reality. PC : Microsoft HoloLens 2\n Extended Reality - In the Mixed Reality environment, when we interact with the real-world environment, we call it extended reality interactions. For example, look at the switch, and blink your eye to switch it on.\n Brain-Computer Interface (BCI) - Multiple research efforts are going on in the area of brain-computer interfaces, where a computer can directly feed into the brain or vice-versa. Facebook is researching thought-based interfaces. A blind person\u0026rsquo;s mind can be fed with a camera stream to see through the world, and many more. VR and AI will play a great role here.\n Artificial intelligence is enabling immersive interactions to look and feel more real. Hardware is getting foldable and regeneratable (3D printable), and people will wear the devices; hardware is getting so small that it can be taken as a pill and controlled from outside. In short, technology is getting closer to its context and its application. For example, BLE-enabled contact lenses for the eyes, printed stickers for monitoring BP and heart rate, smart shoes to direct your feet to move in the target direction, smart glasses to record what you view, smart gloves to help your work, a smart shirt to make you comfortable and control your body temperature, and the list is endless.\nThe future will be the age of Artificial Interactions, where even the interactions themselves can be virtual. An authorized machine will do virtual interactions on your behalf.\nThoughtWorks has published \u0026ldquo;Evolving Interactions\u0026rdquo; as one of the seismic shifts. Do read it.\n We must evolve our thinking—and our capabilities— beyond the keyboard and the screen. Mixed reality will become the norm\n Conclusion We have discussed how our interactions with machines have evolved, and how technology fills the need of the time.
A lot of experiments and investments have gone into this journey; every other disruptive technology has killed the previous innovation and investments, but it brings a lot of new opportunities. Industries and individuals must keep experimenting and investing in new technologies to stay relevant; otherwise, someday you will be abandoned.\nRef and Picture Credits : Microsoft HoloLens 2, and others.\nThis article was originally published on My LinkedIn Profile and XR Practices Publication\n","id":155,"tag":"xr ; ar ; vr ; mr ; evolution ; transformation ; ux ; user-interactions ; general ; technology","title":"Evolution of Technology and User Interactions","type":"post","url":"/post/evolution_of_technology_and_user_interactions/"},{"content":"IT modernization is one of the key factors of today\u0026rsquo;s digital transformation. Most enterprises are trying to define a digital transformation strategy. IT organizations are trying to redesign themselves to meet the changing business needs, and uplift the technology and people. Non-IT organizations are also trying to set up digital departments to take care of the company's digital assets and modernize them. However, IT modernization does not always result in profit; it may lead to revenue loss if it does not fit the organization design.\nHere is an example statement that you may get from your client:\n We have revamped our old mobile site into a new progressive web app, and we are observing a 6% drop in conversion rate, impacting the revenue.\n Assume that the client is a non-IT organization, and they have a digital department taking care of the digital assets (website, app, mobile site etc.), and also trying to upgrade assets to the latest technologies.\nHere are the general patterns and observations, which may be the major cause of the revenue drop even after the tech modernization.\nIt's a matter of changing the technology IT modernization is generally considered just a matter of changing technology. But it first requires people who accept the change and take it forward. We have observed missing ownership from the stakeholders, and the modernization was looked at as enforced on the teams rather than embraced. Most of the team was new in the digital department, and the existing people were holding a lot of history and knowledge which they did not want to expose, for fear of being devalued.\nSometimes an organization tries to project itself as an agile organization, aiming to change anything at any time, but this becomes an anti-pattern.\nActivity Oriented Teams The teams are structured on activity and not on the outcome; e.g. Product Design, Development, Test, and DevOps are working in silos. It is hard to sync up on the quality outcome. Team structure should be based on outcome and business value. Refer to my earlier post.\nFor example, you noticed that 10-15 AWS instances were getting spawned every 24 hours, and the existing instances were getting killed after heavy memory utilization. It went unnoticed for long because, as per the DevOps SLA, auto-scaling was working fine and the site never went down, and the memory utilization issue was a headache of the development team.
The development team is not interested in monitoring production health, as it comes under the DevOps purview.\nCross-functional knowledge is missing in such teams, and there is a lot of people dependency, which hinders taking a feature from business analysis to deployment.\nInadequate Development Practices It is a pattern that when standard development practices are missing, the feedback cycle to developers is too long. Most of the issues are caught in production. Test automation and the CI/CD pipeline are not up to date, so developers do not trust or maintain the test suites. It becomes a practice to bypass the test suite and deploy directly to pre-production or production. It sometimes results from heavy top-management pressure for quantitative delivery rather than quality delivery.\nYou will see that the client has a history of uncaught broken functionalities over a long period of time, which is also a cause of not trusting the web site, and eventually impacts the conversion rate and revenue.\nProduct and Business Observations We have noticed the following observations from the business and product perspective.\n No business continuity - It is observed that the application is not stable enough, and there is no mechanism to keep the user engaged if the user has lost the product order journey. User experience - There are some major usability issues on the new app, such as: a lot of clicks required to customize the product and order, and redundant information placement. Integration Issues - A history of issues with 3rd-party gateways. It takes too long to identify such issues. Define ways to measure 3rd-party systems' performance and hold them responsible for revenue loss if they do not follow the agreed SLAs. Technology Observations The following are the technology observations, mainly due to technology decisions taken without proper ownership and knowledge. Technology choices are followed for the sake of changing to a new concept. Eg: Server-Side Rendering on the PWA was not well evaluated but applied.\n Memory Leaks - As mentioned in the earlier example, AWS instances are getting killed after running out of memory. The exceptional increase in NodeJS server memory leads to crashes every 2-5 hours. It is mainly due to a) too many global variables declared at the node level, and b) unwanted setTimeouts written to cancel each fetch request. After fixing the leaks, the node instances may run in just 300 MB of memory, where earlier they were taking 20 GB. Older Tech Stack - NodeJS, NextJS and React are at older versions. Request traceability across the app is missing. There was load on the API servers due to unnecessary calls from the UI. Incorrect Progressive Web App strategy Issues in the CloudFront caching strategy Incorrect auto-scaling strategy. Instead of fixing the memory leak issue, an unnecessary auto-scaling configuration was done, which gracefully scales a new instance after every 10000 requests to a node. Missing infra health monitoring alerts Conclusion The success of IT modernization is closely linked to the organization design; it may not lead to success if the changes it requires are not accepted and owned by people. Just changing the technology will not lead to IT modernization. New technology comes with great promises, but it also comes with its own challenges.
We must always choose a new technology with the right tooling to address the challenges.\nThis article is originally published on My LinkedIn Profile\n","id":156,"tag":"general ; cloud ; digital ; modernization ; technology ; transformation ; experience ; troubleshooting ; agile","title":"Can IT modernization lead to revenue loss?","type":"post","url":"/post/can_it_modernization_lead_to_revenue_loss/"},{"content":"I just finished the book \u0026ldquo;Life\u0026rsquo;s Amazing Secrets\u0026rdquo; by Gaur Gopal Das. It covers the topic of balance and purpose in life nicely. A few of the concepts from the book I can very well relate to my current organization\u0026rsquo;s focus on a cultivation and collaboration culture.\nIt\u0026rsquo;s now more than 6 months into my journey with Thoughtworks, an organization which started as a social experiment more than 25 years ago, and believes in balancing 3 pillars in life: sustainability, excellence and purpose. It makes the organization a living entity, and wherever there is life, it has to face changes and challenges; it\u0026rsquo;s on us how we overcome them and excel at the art of changing.\nHere are my takeaways from the book, which help us grow through life, build social capital, and help a people-centric organization grow.\n Live by the principle of gratitude, which allows us to see positivity and believe in a trust-first approach. Start making a gratitude log. Our attitude towards life affects our social image; sensitivity in speaking is key while giving or taking feedback. Life can never be perfect; too much corrective feedback may spoil relationships if one doesn\u0026rsquo;t know the art. Feedback without the correct motive and purpose has no meaning. Think higher purpose, look beyond the situations and practise forgiveness, but we must maintain the social decorum of rules and regulations of the society/community or organization. Uplift relationships by give and take; an exchange of thoughts, values and beliefs Be your own hero; every time you compete with yourself, you will be better than before. Promote a culture of healthy competition. Each one is different and not comparable. Go on a journey of introspection, know yourself and find the purpose of life. Develop good character through belief, actions and conduct. Good character has the ability to change lives. We have 2 ears and 1 mouth, so give them a chance in that proportion; listen before you speak and choose to respond (not react). The book presents a great ideology of the ice-cream, the candle and the oxygen mask, to guide us on a journey from selfish to selfless. A view from family first to serving the nation, and how serving others can be the joy of life. It also talks about the role of spirituality in our life. Overall it is a great book, and I recommend it. This article is originally published on My LinkedIn Profile\n","id":157,"tag":"selfhelp ; book-review ; thoughtworks ; takeaways ; change ; motivational","title":"Embibe purpose and balance in life","type":"post","url":"/post/embibe_purpose_and_balance_in_life/"},{"content":"Recently I got a chance to judge and mentor India\u0026rsquo;s young innovators at Smart India Hackathon 2019, the world\u0026rsquo;s largest hackathon with 11000 shortlisted participants, hosted by 48 centers across India. It was organized by AICTE and MHRD, Govt of India. I was invited to judge the event at Panipat Institute of Engineering \u0026amp; Technology, Panipat.
I am delighted to contribute to the journey of India\u0026rsquo;s transformation, and to see innovative ideas from the new generation. It was an experience which I would like to share in this post. Thanks to my friend Suresh K Jangir for introducing me to AICTE, and to all the judges, PIET students and organisers for making my experience memorable.\nAll set for the next 36 hours The event started on time, and followed a well-defined, packed schedule. We were introduced to our duties and responsibilities for the next 36 hours as judges and mentors by AICTE\u0026rsquo;s Amit Parshetti. There were 4 categories in which participants were trying to solve the problems: Sports and Fitness, Sustainable Environment, Smart Textile and Smart Vehicles. Each of us needed to do 3 mentoring rounds and 4 evaluation rounds in the next 36 hours, day and night. The judges' panel consisted of a dietician, professors, entrepreneurs, and industry experts. It was great interacting with such a diverse team, and learning from each other\u0026rsquo;s experience.\nWe started our initial mentoring and judging rounds, and found some interesting raw ideas, but they were far from realisation. These ideas needed some polishing and mentoring to make them presentable.\nWe gave our initial suggestions to the teams, and tried to make them think about other parameters for making their product/idea successful. Most of the ideas were around software using IoT and AI. I was able to contribute well due to my relevant industry experience. The mentoring and judging rounds continued, and teams worked the whole night to make their ideas a reality.\nThe Josh moment - Student interaction with PM Modi The young innovators were getting encouraged not only by the mentors, judges and coordinators but also by the Govt of India. It was a Josh moment for students when they got a chance to interact with the Prime Minister of India at 10 PM.\nPM Modi interacted at multiple nodal centers, and discussed and appreciated their ideas. It was a great moment to see India transforming, and to see such nice initiatives driven by the Govt. Participants were also addressed by Shri Prakash Javadekar, Union Minister, MHRD.\nBack to college days I stayed in the college guest house, and staying there was like going back to my college days, year 2K to 2K4 at NIT Kurukshetra. I got a chance to interact with Prof.(Dr.) Shakti Kumar, Director, PIET, who had taught my college professor, Prof. Jitender Kumar Chhabra.\nWe shared how Chhabra sir\u0026rsquo;s programming concepts are still deep in my brain. I also met a few NITK alumni among the PIET faculty. My roll number at college was 2K020, and my friend Deepak Arora, 2K021, partner in crime, is from Panipat. We met there after 14 years, as he was luckily in the same city during this.\nSome of the greatest ideas The next day, the remaining mentoring and evaluation rounds continued, and we were at the stage of selecting 8 teams for the final round. We consolidated the scores and points of view on each team, and their progress so far in the hackathon.
Here are some interesting ideas that got selected for the final round:\n Pot Hole Detection System - IOT, GPS Bike starts only if rider wears helmet - IOT Accident Prevention - Auto adjust beam to low when cars crossing each other - IOT GPS tagged factory licences and pollution control - IOT Virtual Physical Training - The app detects the body posture, asks to correct it if not as per the ML model - CV, AI Virtual Fit - Upload your image, and the app will predict your body size; then you can order clothing customized to your size, and the tailors will get the exact designs. - CV, AI Detection of Air-filter choking - IOT, AI Create a social network of people working for good causes; build people\u0026rsquo;s social profiles. Supply chain portals to promote art and craft from rural areas. The shortlisted teams were asked to present and demonstrate for the final round.\nThe Final Round The time had come, and the final round was headed by guest Mr. Vinay Kumar Jain, Principal Technology Architect at Infosys Ltd. All the shortlisted teams presented and demonstrated what they had done in the hackathon. We were to select the top 3 ideas. Results were locked. Hearts were beating high. It was a tough time for the judges as well to pick only 3 ideas, as the students had really worked hard to get their ideas implemented in just 30 hours. Concluding Ceremony We were again at the auditorium where the event had started. PIET students entertained the audience with their great dance performances. Judges and guests were also felicitated.\nMr. Rajesh Agarwal, MD Micromax Informatics Ltd, was the chief guest of the concluding ceremony. His speech was really inspiring; he shared some of the challenges he faced as an entrepreneur. He gave a message to the future entrepreneurs to not stop, but continue.\nEveryone was waiting for the results, and here they come:\n Second runner up - Virtual Physical Training - Rs. 50000 First runner up - Bike starts only if rider wears helmet - Rs. 75000 Winner - Virtual Fit - Rs. 100000 Congratulations to the winners, and best of luck to the rest of the learners!\nThank you!\nThis article is originally published on My LinkedIn Profile\n","id":158,"tag":"sih ; experience ; takeaways ; event ; hackathon ; judge ; mentor","title":"Judge and Mentor - Smart India Hackathon 2019","type":"event","url":"/event/sih_2019_an_experience/"},{"content":"Blockchain or DLT (Distributed Ledger Technology) is getting good traction in the IT world these days. Earlier, this technology, such as Bitcoin and Ethereum, was mostly explored by banks and other finance-related institutions. Now, it is being explored for other use cases for building distributed applications. Blockchain technology comes with a decentralized and immutable data structure that maintains connected blocks of information. Each block is connected using the hash of the previous block, and every new block on the chain is validated (mined) before adding and replicating it. This post is about my learnings and challenges while building an enterprise blockchain platform based on Quorum blockchain technology.\nBlockchain-Based Enterprise Platform We have built a commodity trading platform that matches the trades from different parties and stores the related trade data on smart contracts. Trade operations are private to the counterparties; however, packaging and delivery operations are performed privately with all the blockchain network partners. The delivery operation involves multiple delivery partners and approval processes, which are safely recorded in the blockchain environment.
The platform acts as a trusted single source of truth while, at the same time, keeping data distributed under each client's autonomy.\nThe platform is built on Quorum, an Ethereum-based blockchain technology supporting private transactions on top of Ethereum. A React-based GUI is backed by an API layer of Spring Boot + Web3J. Smart contracts are written in Solidity. Read more here to get started on a similar technology stack and learn about the blockchain.\nA diagram of the reference architecture accompanies the original post. Blockchain Development Learnings Blockchain comes with a lot of promises to safely perform distributed operations on a peer-to-peer network. It keeps data secure and immutable, so that data written on a blockchain can be trusted as the source of truth. However, it comes with its own challenges; we learned a lot while solving them and making the platform production-ready. These learnings are based on the technology we used; however, they can be correlated to any similar DLT solution.\nThe Product, Business, and Technology Alignment Business owners, analysts, and product drivers need to understand the technology and its limitations — this will help in proposing a well-aligned solution supported by the technology.\n Not for the real-time system — Blockchain technology supports eventual consistency, which means that data (a block) will eventually be added and available on the network nodes, but it may not be available in real time, because it is an asynchronous system. Products/applications built using this technology may not suit a real-time system where end-users expect the immediate impact of an operation. We may end up building a lot of overhead to make real-time end-user interfaces by implementing polling, web-sockets, time-outs, and an event-bus on smart-contract events. The ideal end-user interface would have a request pipeline where the user can see the status of all their requests; only once a request succeeds should the user expect its impact. There is no rollback on the blockchain due to its immutability, so atomicity across transactions is not available implicitly. It is a good practice to keep each operation as small as possible and design the end interface accordingly. Blockchain's transaction order is guaranteed, making it more of a sequential system: if a user submits multiple requests, they will go to the chain sequentially. (Private blockchain) Each user/network node holds only the smart contract data for transactions in which that node has participated, so a node/user newly added to the contract will not see the past data on the contract. This may have implications for business exceptions. Backward compatibility is an anti-pattern on the blockchain. It is better to bind business features to contract/release versions, with new features available only in the new contract; otherwise, implementing backward compatibility takes a huge effort. Architecture and Technology Learnings Think multi-tenancy from the start — A blockchain system is mostly about operations/transactions between multiple parties and multiple organizations. Most enterprise platforms have multiple layers of applications, such as an end-user client layer (web, mobile, etc.), API layers, authentication and authorization, indexes, and integrations with other systems.
It is wise to think of multi-tenancy across the layers from the start of the project and design the product accordingly, instead of introducing it at a later stage. Security does not come on its own — Blockchain-based systems generally introduce another layer of integration between the API and data layers. We still need to protect all the critical points at multiple network nodes owned by different participants. So, defining the security process and practices is even more important and complex for blockchain-based solutions than for a classic web application. Make sure these practices are followed on every participant's node. A hybrid solution — We might need to keep a data index/cache (SQL or NoSQL) to serve reads from multiple users, as reading every time from a smart contract may not meet the performance criteria. The index/cache-building module may subscribe to the smart contract transaction events and build a data index/cache that serves as a read copy of the blockchain data. Each participant's node will have its own index and the end-user interface of that participant organization. Maintaining the index definitely adds complexity, but until blockchain technology becomes as fast as reading a cache, we have this option to go the hybrid way (blockchain + read indexes). We can replay smart-contract transaction events as and when required; this helps in rebuilding the indexes/cache in case we lose the index (a sketch of such an event listener follows below).
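To make the index-rebuilding listener concrete, here is a minimal sketch, assuming web3j 3.x as the Java client; the node URL, contract address, and indexing step are illustrative placeholders, not the platform's actual code.\nimport org.web3j.protocol.Web3j;\nimport org.web3j.protocol.core.DefaultBlockParameterName;\nimport org.web3j.protocol.core.methods.request.EthFilter;\nimport org.web3j.protocol.http.HttpService;\n\npublic class TradeIndexBuilder {\n public static void main(String[] args) {\n // connect to the local Quorum/Ethereum node (URL is illustrative)\n Web3j web3j = Web3j.build(new HttpService(\u0026#34;http://localhost:22000\u0026#34;));\n // replaying from EARLIEST is what lets us rebuild a lost index/cache\n EthFilter filter = new EthFilter(\n DefaultBlockParameterName.EARLIEST,\n DefaultBlockParameterName.LATEST,\n \u0026#34;0x...trade-contract-address...\u0026#34;);\n web3j.ethLogFlowable(filter).subscribe(\n log -\u0026gt; System.out.println(\u0026#34;indexing tx \u0026#34; + log.getTransactionHash()), // update the SQL/NoSQL read copy here\n error -\u0026gt; System.err.println(\u0026#34;listener failed: \u0026#34; + error.getMessage()));\n }\n}\nEach participant's node would run such a listener against its own contracts and write the events into its local read store.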
Carefully add business validation in the smart contract — We may add business data validation in the smart contract itself, but it has a transaction cost (performance cost), more complex validation may lead to out-of-gas issues, and getting a detailed exception from the contract is not fully supported. If we are maintaining indexes, then we may validate before making the contract transactions by reading from the indexes, but this is not a foolproof solution either, as there might be race conditions. The ideal way is to perform business validation beforehand and, if a race condition occurs, just override/ignore the operation; choose the approach wisely based on the business impact. Fear of Losing Transactions — What if we cannot process a transaction due to an error? Depending on the case, we either need to wait for the problem to be fixed and halt all further events, or just log the error and proceed. As mentioned in the previous section, choose to override/ignore wisely; it may have a significant business impact. Smart Contract Limitations — Solidity has limitations on the number of parameters in a method, stack depth, string handling, contract size, and gas limits. So, design contract methods accordingly. Tracing — Think of application tracing from web/mobile \u0026lt;-\u0026gt; API server \u0026lt;-\u0026gt; blockchain \u0026lt;-\u0026gt; indexes from the start. Each request should be traceable. Quorum doesn't support any tracing/message-header kind of mechanism to send a correlation identifier; however, we may link an application trace id to the transaction hash. Configurability/Master Data Management — Every enterprise system needs some master data that is used across the organizations, and since participants' nodes are decentralized, this data needs to be synchronized across nodes. Devise a mechanism to replicate this data across the nodes. Any inconsistency may result in the failure of transactions. Smart Contract Backward Compatibility — This is the most complex part of the contract. We need to write a wrapper contract over the older contract and delegate calls to the old one. Note that we cannot access the private data of the existing contract, so we need to implement all the private data in the wrapper contract and conditionally delegate calls to the old contract. Managing event listeners for different versions is also complex; we may need to keep multiple versions of the event listeners in the code base to support multiple versions of the contracts. We also need to make sure all the participants are on the same version of the contracts. It is better to bind business features to the contract version; in that case, any new transaction operation is simply not available on the old contract. Scaling the API/Client Layer — Since the blockchain processes transactions in a sequential manner, scaling the API/client layer is complex; we need to implement some locking mechanism to avoid the same transaction being performed from multiple instances of the API layer. Deployment is another challenge here. A feature toggle on the contract side is also quite complex. Testing Learnings Solidity unit tests take a lot of gas as the functionality grows, so it is better to use JS API-based tests for contract unit tests. Define the contract migration testing approach and its automation up front. Automate environment preparation, as testing the functionality needs multiple tenants' nodes to interact. API integration tests need to poll the other participants' APIs to see the expected impact; this increases API test development time as well as execution time. Define an approach for test automation around index/cache rebuilding. Version-based feature toggles in test artifacts are complex to maintain. Automate meta-data management across the nodes. Test the CI/CD scripts; as the scripts grow, they get more complex with time, and the impact of any issue is critical. Continuous Integration and Deployment Define the blockchain node deployment strategy upfront (private/public cloud); you may not be able to change it once it is deployed without losing data. Securely store secrets on the cloud. Have a process for access management reviews. Analyze the impact of secret rotation on the blockchain node setup and automate the process. Define a backup and restore strategy for the blockchain. Make sure all the nodes are on the same version of the blockchain and software artifacts. High availability is a challenge when a new deployment has to happen, as we need to stop the service completely before deploying the new version to avoid corrupting indexes/blockchain due to the intermediate state of operations (multi-transaction operations). Project Management Blockchain is still a growing technology. It is better to have a dedicated sub-team that takes care of blockchain research-related work and defines best practices for the project and industry. A dedicated infra team is needed to manage and improve the CI/CD environment. Plan for security testing resources and audits, a legal point of view on GDPR and storing data on a blockchain, coordination with each participant's infra team, and a support SLA with each participant.
This article was originally published on Dzone\n","id":159,"tag":"blockchain ; java ; experience ; learnings ; micro-services ; architecture ; spring boot ; technology","title":"Lessons Learned From Blockchain-Based Development","type":"post","url":"/post/learnings-from-blockchain-based-development/"},{"content":"The book \u0026ldquo;Agile IT Organization Design - For digital transformation and continuous delivery\u0026rdquo; by Sriram Narayan explains software development as a design process. The following statement from the book means a lot:\n \u0026ldquo;Software development is a design process, and the production environment is the factory where production takes place. IT OPS work to keep the factory running.\u0026rdquo;\n It questions the way we traditionally think of operations support, maintenance, production bugs, rework, their costs and budgets, and where the focus lies in the software development lifecycle.\nThinking of software development as a design process helps organisations focus on today's need for agile development. Here are some thought points, taking anything before production deployment as software product design, with the \u0026ldquo;software product\u0026rdquo; being produced by the production environment (the factory):\n Dev + OPS - A product design which doesn't fit the factory's process cannot be produced by the factory; in other words, a factory which doesn't have sufficient resources for a given design can't produce the product. The product designers and factory operators need to work as one closely collaborating team to improve the design and run production. Continuous Delivery - There is no point in thinking about the factory only after the product design is done; we may not be able to build the factory optimally. Think of the production factory while designing the product, and have continuous feedback in place through continuous integration and delivery. Value driven and outcome oriented teams - A factory can produce the product on a shop floor/production line with checks, balances and integration at each step. The same applies to software development: value-driven projects and outcome-oriented teams are more helpful in making a product successful than plan-driven projects and activity-oriented teams. The book covers this very well. Shorter cycle time - A software product design has no value until it is produced and reaches the people. The sooner we produce, the sooner we get the value - so the cycle time for product design must be as short as possible. Luckily, in software development we have tools to simulate the factory environment/test labs and can try to produce the models well before the actual production. Measure the product quality more than the design quality - It is more important to measure product quality than product design quality (the software before the prod environment). So use metrics which can really measure product quality (software running in the production environment), such as velocity in terms of \u0026ldquo;Value\u0026rdquo;. Burn-up/burn-down, bug counts, defect density and code review quality are all product design metrics; it may not matter how good these numbers are if the software is not producing value in the production environment. Conclusion The book covers agile organization design with great breadth and depth on structural, cultural, operational, political and physical aspects.
We can relate many of these aspects while thinking of software development as a design process.\nThis article was originally published on My LinkedIn Profile\n","id":160,"tag":"software ; book-review ; process ; design ; organization ; takeaways ; agile ; technology","title":"Software Development as a Design Process","type":"post","url":"/post/software-development-as-design-process/"},{"content":"Most web applications are hosted behind a load balancer or web server such as Nginx/HTTPD, which intercepts all the requests and directs dynamic content requests to the application server, such as Tomcat. Correlating requests traversing from the front-end server to the backend servers is a common requirement. In this post, we will discuss tracing requests in the simplest way in an Nginx and Spring Boot-based application, without using an external heavyweight library like Sleuth.\nAssign an Identifier to Each Request Coming Into Nginx Nginx keeps a request identifier for each HTTP request in the variable $request_id, which is a 32-character hexadecimal string. $request_id can be passed further downstream via request headers. The following configuration passes $request_id as the X-Request-ID HTTP request header to the application server.\nserver {\n listen 80;\n location / {\n proxy_pass http://apiserver;\n proxy_set_header X-Request-ID $request_id;\n }\n} Log the Request Identifier in Front-end Access Logs Include the $request_id in the log format of the Nginx configuration file as follows.\nlog_format req_id_log \u0026#39;$remote_addr - $remote_user [$time_local] $request_id \u0026#34;$request\u0026#34; \u0026#39;\n\u0026#39;$status $body_bytes_sent \u0026#34;$http_referer\u0026#34; \u0026#34;$http_user_agent\u0026#34; \u0026#39;\n\u0026#39;\u0026#34;$http_x_forwarded_for\u0026#34;\u0026#39;;\naccess_log /dev/stdout req_id_log; It will print the access logs in the following format:\n172.13.0.1 - - [28/Sep/2018:05:28:26 +0000] 7f75b81cec7ae4a1d70411fefe5a6ace \u0026#34;GET /v0/status HTTP/1.1\u0026#34; 200 184 \u0026#34;http://localhost:80/\u0026#34; \u0026#34;Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36\u0026#34; \u0026#34;-\u0026#34; Intercept the HTTP Request on the Application Server Every HTTP request coming into the application server will now have the header X-Request-ID, which can be intercepted either in an interceptor or in a servlet filter. From there, it can be logged along with every log line we print, as follows.\nDefine a Request Filter Using MDC Define the following request filter, which reads the request header and puts it in the MDC (read about MDC here). It basically keeps the values in thread-local storage.\n...\n@Component\npublic class RegLogger implements Filter {\n @Override\n public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain) throws IOException, ServletException {\n try {\n // put the upstream request id into the logging context for this thread\n MDC.put(\u0026#34;X-Request-ID\u0026#34;, ((HttpServletRequest) servletRequest).getHeader(\u0026#34;X-Request-ID\u0026#34;));\n filterChain.doFilter(servletRequest, servletResponse);\n } finally {\n MDC.clear();\n }\n }\n @Override\n public void init(FilterConfig filterConfig) throws ServletException {}\n @Override\n public void destroy() {}\n} Configure the Logger to Print the Request id I have used logback, which can read MDC variables via the %X{variable_name} pattern. Update the logback pattern as follows:\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt;\n\u0026lt;configuration\u0026gt;\n \u0026lt;appender name=\u0026#34;STDOUT\u0026#34; class=\u0026#34;ch.qos.logback.core.ConsoleAppender\u0026#34;\u0026gt;\n \u0026lt;encoder class=\u0026#34;ch.qos.logback.classic.encoder.PatternLayoutEncoder\u0026#34;\u0026gt;\n \u0026lt;Pattern\u0026gt;%d{HH:mm:ss.SSS} [%thread] %X{X-Request-ID} %-5level %logger{36} - %msg%n\u0026lt;/Pattern\u0026gt;\n \u0026lt;/encoder\u0026gt;\n \u0026lt;/appender\u0026gt;\n \u0026lt;root level=\u0026#34;info\u0026#34;\u0026gt;\n \u0026lt;appender-ref ref=\u0026#34;STDOUT\u0026#34; /\u0026gt;\n \u0026lt;/root\u0026gt;\n\u0026lt;/configuration\u0026gt; It will print the following logs:\n17:28:26.011 [http-nio-8090-exec-7] 7f75b81cec7ae4a1d70411fefe5a6ace INFO c.d.e.controllers.StatusController - received request This way, you can see the request id in all the logs from the origin to the end. We can configure ELK to aggregate the logs and trace them in a nice user interface to help with troubleshooting.
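If the application server itself calls further downstream services, the same id can be propagated on outgoing requests too. Here is a minimal sketch, assuming those calls go through Spring's RestTemplate; the class name is illustrative.\nimport java.io.IOException;\nimport org.slf4j.MDC;\nimport org.springframework.http.HttpRequest;\nimport org.springframework.http.client.ClientHttpRequestExecution;\nimport org.springframework.http.client.ClientHttpRequestInterceptor;\nimport org.springframework.http.client.ClientHttpResponse;\n\npublic class RequestIdPropagator implements ClientHttpRequestInterceptor {\n @Override\n public ClientHttpResponse intercept(HttpRequest request, byte[] body, ClientHttpRequestExecution execution) throws IOException {\n // copy the id stored by the servlet filter onto the outgoing call\n String requestId = MDC.get(\u0026#34;X-Request-ID\u0026#34;);\n if (requestId != null) {\n request.getHeaders().add(\u0026#34;X-Request-ID\u0026#34;, requestId);\n }\n return execution.execute(request, body);\n }\n}\nRegistering it with restTemplate.getInterceptors().add(new RequestIdPropagator()) should make every downstream hop log the same id.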
Get the Generated Request id Back in the Response Nginx can also add the request id to the response header with the following configuration:\nserver {\n listen 80;\n add_header X-Request-ID $request_id; # add to response header\n .... The client can then read the generated id from the response header. Conclusion We have explained a simple way to assign a unique identifier to each request and then trace the request end-to-end. You can get more details here.\nThis article was originally published on Dzone\n","id":161,"tag":"request-tracing ; troubleshooting ; java ; web-server ; micro-services ; spring boot ; technology","title":"Request Tracing Using Nginx and Spring Boot","type":"post","url":"/post/request-tracing/"},{"content":"Agile software development methodology is the key selling point for most IT organisations; however, many organisations have gone by the literal meaning of the word \u0026ldquo;Agile\u0026rdquo;. They follow agile practices without really being agile. If we go by the Manifesto for Agile Software Development, it focuses more on the human factor (people, interaction, collaboration), which produces working software as per the need of the time, than on processes, tools and strict planning. As described in the book Agile IT Organization Design, organisation culture plays a key role in becoming a truly agile organisation. Agile culture is more about collaboration and cultivation than a controlling and competence culture. This post describes how test-first pair development can boost organisation agility and culture. This article also captures my takeaways from the book.\nThe Test First Pair Development: Test-first development is when a pair of experts think about how a function will be used before developing the function. The book \u0026lsquo;Extreme programming explained: embrace change\u0026rsquo; [3] talks a lot about TDD (Test Driven Development), pair programming, tools, principles and practices. The term test-first pair development is generalised from TDD and Pair Programming; however, it is not restricted to developers but is also applicable to other software development roles (BA, QA, TL, TA, etc.) and across roles. Following are the key concepts, with a small code sketch after the lists:\nTest First Development First-Fail-Test : Think of a very basic test for a business functionality that you want to implement. Implement/define the test first. Just-Pass-Test : Write only the code/artifact necessary to pass the test. Don't write unnecessary code to satisfy your assumptions and intuition. Next-Fail-Test : Keep writing tests until the next failing test, and then perform step 2. Keep doing this until you reach a just-sufficient outcome. Refactor/beautify the code/artifact and make sure all tests are passing; do this in between steps 2 and 3. Get the outcome reviewed, collect feedback, and follow the above steps again to act on the feedback. Pair Development: Pair sign up - Two people sign up for one outcome. Pair rotation - The pair should be rotated at an interval. Collaboration - The pair must work together: one person reviews while the other does the task, and they exchange positions when needed. The development environment and seating should be such that both can work together. Pair with diversity - Make a pair of diverse personalities in terms of role, experience and skills, so that everyone learns from and respects each other. E.g., pair with QA to write automated acceptance tests, or pair with an infra expert to fix DevOps issues. Combining Together: The pair follows the test-first development practice. For example:\nOne person thinks up and writes a test, and the other person implements the functionality needed to just pass the test. The other person thinks up and writes the next test, and the first one implements the functionality to make it pass. Pair with QA/Tester/PO during development to review the outcome before releasing it to QA.
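As a minimal illustration of this loop, here is a hypothetical example, assuming JUnit 5; the discount domain and class names are invented for the sketch.\nimport static org.junit.jupiter.api.Assertions.assertEquals;\nimport org.junit.jupiter.api.Test;\n\n// one person in the pair writes the failing test first\nclass DiscountCalculatorTest {\n @Test\n void noDiscountBelowThreshold() { // First-Fail-Test\n assertEquals(0, new DiscountCalculator().discountFor(999));\n }\n @Test\n void fivePercentAtOrAboveThreshold() { // Next-Fail-Test\n assertEquals(50, new DiscountCalculator().discountFor(1000));\n }\n}\n\n// the other person writes just enough code to make the tests pass (Just-Pass-Test)\nclass DiscountCalculator {\n int discountFor(int amount) {\n return amount \u0026gt;= 1000 ? amount / 20 : 0;\n }\n}\nThe pair would then refactor with all tests green before picking up the next failing test.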
Benefits of Test First Pair Development: Following are 5 major boosts an organisation gets from using this approach.\n1. Fail Early We get feedback very early in the development life cycle. The artifact we produce is reviewed at the very first stage of development in two ways: first, it is tested by the test artifacts, and second, it is reviewed by a peer as and when it is written. So it produces better quality software. As described in the test pyramid [4], we build an organisation practice that strengthens the base of the testing pyramid.\nThe feedback is not just about the product; we also get feedback on people's alignment towards agile.\n2. People and Interaction Test-first pair development needs great interaction within the pair and gives equal opportunity to each person. The pair gets rotated after some time, which helps in building team bonding. People start believing in the word \u0026ldquo;Team\u0026rdquo; and understand that if one expects support from others, then he/she needs to support others. In such a team, people build trust and respect for one another. It promotes a collaboration culture in the organisation, which is the key for any truly agile organisation.\n3. Outcome Oriented Teams Test-first pair development works well with outcome-oriented teams, where people can cross-pair (pair across roles) to achieve the desired outcome efficiently. When people cross-pair, they understand each other's pain and work better for each other's success, eventually leading to better business outcomes. The test-first methodology promotes building software that fulfils the immediate business need and avoids building unnecessary source code.\n4. Change with Confidence and Satisfaction Traditionally, changing any existing code makes for a feared delivery; we never know what will break, and sometimes we get the feedback from the end user. But here, people are more confident in making changes to existing code which is well covered by tests.
The failing tests give us immediate feedback: if all existing tests are passing, along with the passing tests for the changed functionality, then we are good to go. Another pair of eyes is always there to watch you. With this approach, people are more satisfied with what they are doing, as they interact more, get more support from the team, and are more focused.\n5. Overall cost effectiveness People feel that pair development is double the effort of traditional development, as two people are assigned to a task which can be done by one person. This is not true in most cases, for the following reasons:\n Quick on-boarding - Paired development doesn't need a huge on-boarding cycle; a new joiner can start contributing from day one and will learn on the job while working with the pair. It is the pair's responsibility to make the newcomer comfortable; it is to their own advantage. No lengthy review cycle - We don't need a special review cycle for the code. Almost 10% of implementation effort is assumed for reviews, which is saved. Less defect rework - Since we get feedback very early, we can fix issues as and when they are found. Rework is proportional to defect age (defect detected date - defect induced date). Around 20% of overall effort is assumed for defect-fix rework; we may save a good amount here. Less testing effort - The test-first approach promotes more automated tests and tries to maintain the test pyramid, thus less effort for testing. Effectiveness - There is less wastage at various levels; the team is focused on delivering the right business requirement at the right time in iterations, and inefficiencies can be removed very early based on the feedback at various levels. Conclusion We have discussed the test-first pair development methodology, which is about marrying TDD and Pair Programming. We described the key concepts of test-first and pair programming, and then the key benefits to the organisation.\nReferences :\n[1] http://agilemanifesto.org/\n[2] https://info.thoughtworks.com/PS-15-Q1-Agile-IT-Organization-Design-Sriram-Narayan_Staging.html\n[3] http://ptgmedia.pearsoncmg.com/images/9780321278654/samplepages/9780321278654.pdf\n[4] https://martinfowler.com/articles/practical-test-pyramid.html\nThis article was originally published on My LinkedIn Profile\n","id":162,"tag":"tdd ; takeaways ; book-review ; xp ; practice ; agile ; pair-programming ; thoughtworks","title":"Boost your organisation agility by test-first-pair development","type":"post","url":"/post/test-first-pair-development/"},{"content":"Java 11's release candidate is already here, and the industry is still roaming around Java 8. Every six months, we will see a new release. It is good that Java is evolving at a fast pace to catch up with its challengers, but at the same time, it is also scary to keep up with that speed; even the Java ecosystem (build tools, IDEs, etc.) is not catching up that fast. It feels like we are losing track. If I can't catch up with my favorite language, then I will probably choose another one, as it is equally good to adapt to a new one. Below, we will discuss some of the useful features from Java 8, 9 and 10 that you need to know before jumping into Java 11.\nBefore Java 8? Too Late! Anyone before Java 8? Unfortunately, you will need to consider yourself out of the scope of this discussion — you are too late.
If you want to learn what's new after Java 7, then Java is just like any new language for you!\nJava 8: A Tradition Shift Java 8 was released four years ago. Everything that was new in Java 8 has become quite old now. The good thing is that it will still be supported for some time in parallel to the future versions. However, Oracle is already planning to make its support a paid one, as it is the most used and preferred version to date. Java 8 was a tradition shift, which made Java useful for today's and future applications. If you need to talk to a developer today, you can't just keep talking about OOP concepts — this is the age of JavaScript, Scala, and Kotlin, and you must know the language of expressions, streams, and functional interfaces. Java 8 came with these functional features, which kept Java in the mainstream. These functional features will stay valuable amongst its functional rivals, Scala and JavaScript.\nA Quick Recap Lambda Expression: (parameters) -\u0026gt; {body} Lambda expressions opened the gates for functional programming lovers to keep using Java. A lambda expression expects zero or more parameters, which can be accessed in the expression body and returned with the evaluated result.\nRead my earlier article on Introduction to Lambda Expressions\nComparator\u0026lt;Integer\u0026gt; comparator = (a, b) -\u0026gt; a-b;\nSystem.out.println(comparator.compare(3, 4)); // -1 Functional Interface: an Interface With Only One Method A lambda expression is, itself, treated as a functional interface instance and can be assigned to a functional interface, as shown above. Java 8 has also provided new functional constructs, as shown below:\nBiFunction\u0026lt;Integer, Integer, Integer\u0026gt; comparator = (a, b) -\u0026gt; a-b;\nSystem.out.println(comparator.apply(3, 4)); // -1 Refer to the package java.util.function for more functional constructs: Function, Supplier, Consumer, Predicate, etc. One can also define a functional interface using @FunctionalInterface.\nInterfaces may also have one or more default implementations for methods and may still remain functional interfaces. This helps avoid unnecessary abstract base classes for default implementations.\nStatic and instance methods can be accessed with the :: operator, and constructors may be accessed with ::new; they can be passed as a functional parameter, e.g. System.out::println.\nStreams: Much More Than Iterations Streams are a sequence of objects and operations. A lot of default methods have been added to the interfaces to support the forEach, filter, map, and reduce constructs of streams. Java libraries which were providing collections now support streams, e.g. BufferedReader.lines(). All the collections can be easily converted to streams. Parallel stream operations are also supported; they distribute the operations across multiple CPUs internally.\nRead my earlier article on Introduction to Stream APIs\nIntermediate Operations: the Lazy Operations Intermediate operations are performed lazily; nothing happens until a terminating operation is called.\n map (mapping): Each element is converted one-to-one into another form. filter (predicate): filters elements for which the given predicate is true. peek(), limit(), and sorted() are the other intermediate operations. Terminating Operations: the Resulting Operations forEach (consumer): iterates over each element and consumes it. reduce (initialValue, accumulator): starts with the initialValue, iterates over each element, and keeps updating a value that is eventually returned. collect (collector): a lazily evaluated result needs to be collected using collectors, such as those in java.util.stream.Collectors, including toList(), joining(), summarizingX(), averagingX(), groupingBy(), and partitioningBy(). Optional: Get Rid of the Null Programming Null-based programming is considered bad, but there was hardly any option to avoid it earlier. Instead of testing for null, we can now test for isPresent() on the optional object. Read about it — there are multiple constructs, and stream operations as well, which return optionals.
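Putting the recap together, here is a small, self-contained sketch of a stream pipeline with intermediate operations, a collector, and an Optional; the data is invented for illustration.\nimport java.util.Arrays;\nimport java.util.List;\nimport java.util.Optional;\nimport java.util.stream.Collectors;\n\npublic class StreamRecap {\n public static void main(String[] args) {\n List\u0026lt;String\u0026gt; languages = Arrays.asList(\u0026#34;Java\u0026#34;, \u0026#34;Scala\u0026#34;, \u0026#34;Kotlin\u0026#34;, \u0026#34;JavaScript\u0026#34;);\n\n // intermediate operations (lazy); nothing runs until the terminating collect()\n List\u0026lt;String\u0026gt; jLanguages = languages.stream()\n .filter(name -\u0026gt; name.startsWith(\u0026#34;J\u0026#34;)) // predicate\n .map(String::toUpperCase) // method reference\n .collect(Collectors.toList());\n System.out.println(jLanguages); // [JAVA, JAVASCRIPT]\n\n // Optional instead of a null check\n Optional\u0026lt;String\u0026gt; longName = languages.stream()\n .filter(name -\u0026gt; name.length() \u0026gt; 6)\n .findFirst();\n System.out.println(longName.orElse(\u0026#34;none\u0026#34;)); // JavaScript\n }\n}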
JVM Changes: PermGen Retired PermGen has been removed completely and replaced by Metaspace. Metaspace is no longer part of the heap memory, but of the native memory allocated to the process. JVM tuning needs different considerations now, as monitoring is required not just for the heap, but also for the native memory.\nSome combinations of GCs have been deprecated. The GC is chosen automatically based on the environment configuration.\nThere were other changes in NIO, DateTime, Security, compact JDK profiles, and tools like jDeps, jjs, the JavaScript Engine, etc.\nHashMap Improvements Java 8 has improved HashMap collision handling; have a look at the HashMap Performance benchmarking\nJava 9: Continue the Tradition Java 9 has been around for more than a year now. Its key feature, the module system, is still not well adopted. In my opinion, it will take more time to really adopt such features in the mainstream. It challenges developers in the way they design classes: they now need to think more in terms of application modules than just groups of classes. Anyway, it is a challenge similar to what a traditional developer faces in microservice-based development. Java 9 continued adding functional programming features to keep Java alive and also improved JVM internals.\nJava Platform Module System: Small Is Big The best-known feature of Java 9 is the Java Platform Module System (JPMS). It is a great step towards real encapsulation: breaking a bigger module into small, clear modules consisting of closely related code and data. It is similar to an OSGi bundle, where each bundle defines the dependencies it consumes and exposes the things on which other modules depend.\nIt introduces an assemble phase between compile time and runtime that can build a custom runtime image of the JDK and JRE. Now, the JDK itself consists of modules.\n ~ java --list-modules java.activation@9.0.2 java.base@9.0.2 java.compiler@9.0.2 java.corba@9.0.2 ... These modules are called system modules. A jar loaded without module information is loaded into an unnamed module. We can define our own application module by providing the following information in the file module-info.java:\n requires — dependencies on other modules exports — export public APIs/interfaces of the packages in the module opens — open a package for reflection access uses — declare a service that the module consumes.
To learn more, here is a quick start guide.\nHere are the quick steps in the IntelliJ IDE:\n Create a module in IntelliJ: Go to File \u0026gt; New \u0026gt; Module - \u0026ldquo;first.module\u0026rdquo;\n Create a Java class in /first.module/src\npackage com.test.modules.print;\n\npublic class Printer {\n public static void print(String input) {\n System.out.println(input);\n }\n} Add module-info.java : /first.module/src \u0026gt; New \u0026gt; package\nmodule first.module {\n exports com.test.modules.print; // exports the public APIs of the package\n} Similarly, you need to create another module main.module and Main.java:\nmodule main.module {\n requires first.module;\n}\npackage com.test.modules.main;\nimport com.test.modules.print.Printer;\n\npublic class Main {\n public static void main(String[] args) {\n Printer.print(\u0026#34;Hello World\u0026#34;);\n }\n} IntelliJ automatically compiles it and keeps a record of the dependencies and --module-source-path\n To run Main.java, it needs --module-path or -m:\njava -p /Workspaces/RnD/out/production/main.module:/Workspaces/RnD/out/production/first.module -m main.module/com.test.modules.main.Main Hello World Process finished with exit code 0 So, this way, we can define the modules. Java 9 comes with many additional features; some of the important ones are listed below.\nCatching up With the Rivals Reactive Programming — Java 9 has introduced reactive-streams, which support reactive, async communication between publishers and consumers, with the standard interfaces added in the Flow class.
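As a small taste of the Flow API, here is a sketch using SubmissionPublisher, the JDK's built-in Flow.Publisher implementation; the items and the crude waiting strategy are simplified for illustration.\nimport java.util.List;\nimport java.util.concurrent.Flow;\nimport java.util.concurrent.SubmissionPublisher;\n\npublic class FlowDemo {\n public static void main(String[] args) throws InterruptedException {\n SubmissionPublisher\u0026lt;String\u0026gt; publisher = new SubmissionPublisher\u0026lt;\u0026gt;();\n publisher.subscribe(new Flow.Subscriber\u0026lt;String\u0026gt;() {\n private Flow.Subscription subscription;\n public void onSubscribe(Flow.Subscription s) { subscription = s; s.request(1); } // back-pressure: pull one item at a time\n public void onNext(String item) { System.out.println(\u0026#34;received: \u0026#34; + item); subscription.request(1); }\n public void onError(Throwable t) { t.printStackTrace(); }\n public void onComplete() { System.out.println(\u0026#34;done\u0026#34;); }\n });\n List.of(\u0026#34;a\u0026#34;, \u0026#34;b\u0026#34;, \u0026#34;c\u0026#34;).forEach(publisher::submit);\n publisher.close();\n Thread.sleep(500); // crude wait for the async subscriber to finish\n }\n}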
 JShell – the Java Shell - With JShell, Java can now be used just like any other scripting language.\n Stream and Collections enhancement: Java 9 added a few APIs related to \u0026ldquo;ordered\u0026rdquo; and \u0026ldquo;optional\u0026rdquo; stream operations. The of() operation was added to make creating collections easier, just like in JavaScript.\n Self-Tuning JVM G1 was made the default GC, and there have been improvements in the GC's self-tuning features. CMS has been deprecated.\nAccess to Stack The StackWalker class was added for lazy access to stack frames; we can traverse and filter them.\nMulti-Release JAR Files: MRJAR A single jar may now contain classes targeting multiple Java versions. To be honest, I am not sure how useful this feature might be.\nJava 10: Getting Closer to the Functional Languages Java 10 comes with the old JavaScript favorite, var. You can not only declare variables without their types but also construct collections without type declarations. The following are valid in Java:\nvar test = \u0026#34;9\u0026#34;;\nvar set = Set.of(5, \u0026#34;X\u0026#34;, 6.5, new Object()); The code is getting less verbose, and the magic of scripting languages is being added to Java. It will definitely bring the negatives of these features to Java too, but it has given a lot of power to the developer.\nMore Powerful JVM Parallelism was introduced for full GC in G1 to improve the overall performance.\nThe heap can be allocated on an alternative memory device attached to the system. This helps prioritize Java processes on the system: a low-priority process may use slower memory compared to the important ones.\nJava 10 also improved thread handling with thread-local handshakes, and experimental Ahead-Of-Time compilation was added. Bytecode generation enhancement for loops was another interesting feature of Java 10.\nEnhanced Language Java 10 also improved the Optional and unmodifiable collections APIs.\nConclusion We have seen the journey from Java 8 to Java 10 and the influence of other functional and scripting languages on Java. Java is a strong object-oriented programming language, and at the same time, it now supports a lot of functional constructs. Java will not only bring top features from other languages, but it will also keep improving its internals. It is evolving at a great speed, so stay tuned — before it phases you out! Because Java 11 and 12 are on the way!\nThis article was originally published on DZone\n","id":163,"tag":"upgrade ; java ; java 11 ; java 8 ; java 9 ; java 10 ; technology","title":"A Quick Catch up Before Java 11","type":"post","url":"/post/before_java11/"},{"content":"Change is inevitable and constant; it is a part of our life. Sooner or later, we have to accept change. 13 years ago, I joined a small IT organisation of around 100 people as a developer. I have grown with the organisation; a 40-times growth journey. I really thank the organisation for giving me the opportunity and transforming me into what I am today. Now, the time has come to accept a change and start a new journey. I am writing this post to share my point of view about when to accept the change.\nWhen to change the organisation? The intention behind this post is not to answer this question, but to address what you get and what you lose while staying long in the same organisation. I have also taken inputs from many senior folks who joined us or left us after a long stay. You may have a different opinion. Please ignore this post in case it hurts your sentiments in any way. When to change is really an individual's choice. If you are aware of the pros and cons of staying long, then you may work on the cons and stay relevant even longer, or not change at all.\nWhat do you get? When you join a small software organisation, stay long (say more than 10 years), and grow with the organisation's pace, you get the following, which keeps you attached to the same organisation:\n You are tried and tested - You become a tried and tested resource of the organisation; many stakeholders may like to take you into their group. You don't need to prove yourself every time to get into a nice opportunity in the organisation. You know the beats - Every organisation grows in its own style and nurtures its own culture. You know the beats of the organisation, you know how the X thing is done in a certain way, and you know the history. At times you become a consultant for many things in the organisation. People believe in your inputs; you become an encyclopaedia of org information. Since you have participated in almost everything, you see yourself in every part of the organisation, be it hiring, training, process or the system. Breadth and Depth - You may get the chance to work in different departments, to set up groups and to lead multiple initiatives. This gives you a chance to gain breadth and depth, and rich experience to lead the organisation. Attractive role and appreciations - You may be working in a really attractive role with comparatively fewer years of experience. You may be recognised and appreciated well. Emotional connect - After this many years, your relationship with the organisation is no longer just professional; it becomes an emotional one. The organisation becomes a family, and you will have lots of friends.
 Comfort zone - You are in your comfort zone and can work very confidently within it. Stability - The industry may see you as a stable resource who doesn't change too often. What do you lose? Staying long will accumulate the following, and you may keep building on top of these unknowingly. You may work to overcome them and stay prepared to go even longer in the journey.\n Lack of diversity - Since you have worked in the same organisation for such a long period, you start thinking that certain things can happen only in a certain way. These things could be dealing with processes, people, resources and even technology solutions. You know well which things worked and which didn't for you in that organisation. Thinking beyond your area of easy imagination does not come naturally. It may start blocking your path and at times may become frustrating. Resistance to change - After staying so long, you become resistant to changes in the processes, people and the system. It becomes hard to accept a change if it is not according to you. Out of comfort zone - It becomes difficult to work beyond your comfort zone. You may start running away from situations which take you out of the comfort zone. Your risk-taking capability may also be reduced, as you have worked in a really safe and secure zone for long. You may not have learned to deal with failures, and have not been stretched and exposed for a long time. We learn well when we get exposed from time to time. You are taken for granted - You may be taken for granted by your peers of similar experience and by management; they know all your pluses and minuses, as you have grown in front of them. They may use you the way they want and wherever they want. It will be hard to change the perception of the people around you. You may also be a target for the newcomers, as they don't really know the history and achievements you might have. Energy and satisfaction - After so many years, you may not have the energy to fight against the changes which may kill the culture you are addicted to. Remember that change is necessary to scale the organisation, to sustain it and to go beyond the barriers, but you might not be aligned with the changes. You may not be satisfied with the situations and with what you get. In such a situation, you start creating a negative environment, which is not good for either you or your organisation. Confidence in the industry - As the years progress, jobs become limited at your level. The market outside may not be ready to accept you the way you are; you need to adapt to the needs outside. You come to know your value and level only when you try outside. If you keep trying outside from time to time, you gain a lot of confidence, and that can help in your current job as well. Conclusion I have shared the pros and cons of a long stay in the same IT organisation; it may help you identify the losses you are accumulating year by year after a certain point. You may plan to overcome the losses and stay useful to the same organisation you are in, or take the next step.\nThis article was originally published on My LinkedIn Profile\n","id":164,"tag":"general ; change ; nagarro ; thoughtworks ; motivational","title":"Accept the change!","type":"post","url":"/post/accept_the_change/"},{"content":"Microservices-based development is happening all around the industry; more than 70% of organisations are trying development of microservice-based software.
Microservices simplify the integration of businesses, processes, technology, and people by breaking down the big-bang monolith problem into a smaller set of problems that can be handled independently. However, they also come with the problem of managing the relations between these smaller sets. We used to manage fewer independent units, so there was less operation and planning effort; now we need different processes, tools, training, methodologies, and teams to ease microservices development.\nOur Microservices-Based Project We have been developing a highly complex project on microservice architecture, where we import gigs of observation data every day and build statistical models to predict future demand. End users may interact to influence the statistical model and prediction methods, and may analyze the demand by simulating the impact. There are around 50+ bounded contexts with 100+ independent deployment units communicating over REST and messaging, and 200+ process instances are needed to run the whole system. We started this project from scratch with almost no practical experience of microservices, and we faced lots of issues in project planning, training, testing, quality management, deployment, and operations.\nLearnings I am sharing the top five lessons from my experience that helped us overcome those problems.\n1. Align Development Methodology and Project Planning Agile development methodology is considered best for microservice development, but only if aligned well. Monolithic development has one deliverable and one process pipeline, but here we have multiple deliverables, so unless we align the process pipeline for each deliverable, we won't be able to achieve the desired effectiveness of microservice development.\nWe also faced project planning issues, as we could not plan the product and user stories in a way that produces independent products to which we could apply the process pipeline. For seven sprints, we could not demonstrate business value to the end user, as our product workflow was ready only after that. We used to have very big user stories, which sometimes went beyond multiple sprints and impacted a number of microservices.\nConsider the following aspects of project planning:\n Run parallel sprint pipelines for Requirement Definition, Architecture, Development, DevOps, and Infrastructure. Have a Scrum of Scrums for common concerns and integration points. Keep a few initial sprints for Architecture and DevOps, and start the Development sprints only after the first stable version of the architecture and DevOps is set up. Architectural PoCs and decision tasks should be planned a couple of sprints before the actual development sprint. Define metrics for each sprint to measure project quality quantitatively. Clearly call out architectural changes in the backlog and prioritize them well; consider their adaptation efforts based on the current size of the project and the impact on microservices. Have an infrastructure resource (expert, software, hardware, or tool) plan. Configuration management. Include agile training in the project induction. Include cross-sprint artifact dependencies in the Definition of Ready (DoR) and Definition of Done. Train the product owner and project planner to plan the scrums for requirement definition, architecture, etc. such that they fulfill the DoR. Have smaller user stories, making sure the stories selected in a sprint are really of a unit size that impacts very few deployment units.
If a new microservice is getting added in a particular sprint, then consider the effort for CI/CD, Infrastructure, and DevOps. 2. Define an Infrastructure Management Strategy In a monolithic world, infrastructure management is not that critical at the start of a project, so infra-related tasks may get delayed until stable deliveries start coming. But in microservices development, the deployment units are small, so they start coming early, and the number of deployment units is also high, so a strong infrastructure management strategy is needed.\nWe delayed defining the infrastructure management strategy and faced a lot of issues in getting to know the appropriate capacity of the infrastructure and getting it on time. We had not tracked the deployment/usage of infra components well, which caused a delay in adapting the infra, and we ended up having less knowledge of the infrastructure. We had to put a lot of effort into streamlining the infra components in the middle of the project, and that had a lot of side effects on the functional scope being implemented.\nInfrastructure here includes cross-cutting components, supporting tools, and the hardware/software needed for running the system. Things like service registry, discovery, API management, configurations, tracing, log management, monitoring, and service health checks may need separate tools. Consider at least the following in infrastructure management:\n Capacity planning – Do capacity planning from the start of the project, and then review/adjust it periodically. Get the required infrastructure (software/hardware/tools) ahead of time and test it well before the team adopts it. Define a hardware/software/service onboarding plan which covers details of the tools in the different physical environments, like development testing, QA testing, performance testing, staging, UAT, Prod, etc. Consider multiple extended development testing/integration environments, as multiple developers need to test their artifacts, and their development machines may not be capable of hosting the required services. Onboard an infrastructure management expert to accelerate the project setup. Define a deployment strategy and plan its implementation in the early stages of the project. Don't go for an intermediate deployment methodology: if you want to go for Docker and Kubernetes-based deployment, then do it from the start of the project — don't wait and delay its implementation. Define access management and resource provisioning policies. Have automated, proactive monitoring of your infrastructure. Track infrastructure development in parallel to the project scope. 3. Define Microservices-Based Architecture and Its Evolutions Microservices can be developed and deployed independently, but in the end, it is hard to maintain standards and practices across the services throughout development. Defining a base architecture that the microservices must follow, and then letting the architecture evolve, may help here.\nWe had defined a very basic architecture with a core platform covering logging, boot, and a few common aspects. However, we left a lot of things, such as messaging, database, caching, folder structures, and compression/decompression, to come in later evolutions, and this resulted in the platform being changed heavily in parallel to the functional scope in the microservices. We had not given enough time to the core platform before jumping into the functional scope sprints.\nConsider the following in the base architecture, and implement it well before the functional scope implementation.
Don't rely too much on the statement “Learn from the system and then improvise.” Define the architecture in advance, stay ahead of the situation, and gain knowledge as soon as possible.\n Define a core platform covering cross-cutting concerns and abstractions. The core platform may cover logging, tracing, boot, compression/decompression, encryption/decryption, common aspects, interceptors, request filters, configurations, exceptions, etc. Abstractions for messaging, caching, and the database may also be included in the platform (a small sketch of one such shared aspect follows this list).\n Microservice structure – Define a folder and code structure with naming conventions. Don't delay it; late introduction will cost a lot.\n Build a mechanism for CI/CD – Define a CD strategy, even for the local QA environment, to avoid facing issues directly in the UAT/pre-UAT environment.\n Define an architecture change strategy – how architecture changes will be delivered and how they will be adopted.\n A version strategy for source code, APIs, builds, configurations, and documents.\n Keep validating the design against NFRs.\n Define a test architecture to cover the testing strategy.\n Document the module architecture with clearly defined bounded contexts and data isolation.\n
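As an illustration of what a core-platform cross-cutting piece can look like, here is a minimal sketch of a shared timing/logging aspect, assuming Spring AOP is on the classpath; the pointcut package and class name are illustrative.\nimport org.aspectj.lang.ProceedingJoinPoint;\nimport org.aspectj.lang.annotation.Around;\nimport org.aspectj.lang.annotation.Aspect;\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\nimport org.springframework.stereotype.Component;\n\n// shipped once with the core platform, reused by every microservice\n@Aspect\n@Component\npublic class ServiceTimingAspect {\n private static final Logger log = LoggerFactory.getLogger(ServiceTimingAspect.class);\n\n @Around(\u0026#34;execution(* com.example..service..*(..))\u0026#34;) // package is an assumption for the sketch\n public Object time(ProceedingJoinPoint pjp) throws Throwable {\n long start = System.currentTimeMillis();\n try {\n return pjp.proceed();\n } finally {\n log.info(\u0026#34;{} took {} ms\u0026#34;, pjp.getSignature().toShortString(), System.currentTimeMillis() - start);\n }\n }\n}\nKeeping such concerns in one shared artifact is what lets individual teams stay independent without re-implementing logging or tracing differently.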
 4. Team Management The microservice world needs a different mindset than the monolithic one. Each microservice may be considered independent, so the developers of different microservices are independent too. This brings a different kind of challenge: we want our developers to manage code consistency across units, follow the same coding standards, and build on top of the core platform, and at the same time, we want them not to trust other microservices' code, as if it were developed by another company's developers.\nConsider the following in your team management:\n Assign the responsibility of “Configuration Management” to a few team members who maintain the configuration and dependency information. They are more of an information aggregator, but can be considered the source of truth when it comes to configuration. Define a “Contract Management” team consisting of developers/architects who are responsible for defining the interactions between microservices. Assign module owners and teams based on bounded contexts. They are responsible for everything related to their assigned module, and for informing the “Configuration Management” team of public concerns. Team seating may be arranged module-wise; developers should talk to each other only via the contract, otherwise they are completely separate teams. If any change is needed in the contract, then it should come via “Contract Management.” Build the DevOps team from the development team; one may rotate people so everybody gets the knowledge of Ops. Encourage multi-skilling in the team, a self-motivated team, and continuous training programs. 5. Keep Sharing the Knowledge Microservices are evolving day by day, and a lot of new tools and concepts are being introduced. Teams need to be up to date; thanks to the microservice architecture, you may change the technology stack of a microservice if needed. Since teams are independent, we need to keep sharing the learning and knowledge across teams.\nWe faced issues where the same/similar issues were replicated by different teams, and they tried to fix them in different ways. Teams also faced issues in understanding bounded contexts, data isolation, etc.\nConsider the following:\n Educate teams on domain-driven design, bounded context, data isolation, integration patterns, event design, continuous deployment, etc.\n Create a learning database where each team may submit entries in the sprint retrospective.\n Train teams to follow unit testing, mocks, and integration testing. Most of the time, the definition of a “unit” is misunderstood by developers, and “integration testing” is given the lowest priority. It must be followed; if taken correctly, it should be the simplest thing to adhere to.\n Share knowledge of performance engineering — for example:\n Don't over-loop Use caches efficiently Use RabbitMQ messaging as a flow, not as data storage Concurrent consumers and publishers Database partitioning and clustering Do not repeat yourself Conclusion Microservices are being adopted at a good pace, and things are getting more mature with time. I have shared a few of the hard lessons that we experienced in our microservice-based project. I hope this helps your project avoid these mistakes.\nIt was originally published at DZone and also shared on LinkedIn\n","id":165,"tag":"micro-services ; learnings ; experience ; java ; architecture ; spring boot ; technology","title":"5 Hard Lessons From Microservices Development","type":"post","url":"/post/lessons_from_microservices_development/"},{"content":"This article is for the Java developer who wants to learn Apache Spark but doesn't know much about Linux, Python, Scala, R, and Hadoop. Around 50% of developers use the Microsoft Windows environment for development, and they don't need to change their development environment to learn Spark. This is the first article of the series \u0026ldquo;Apache Spark on Windows\u0026rdquo;, which covers a step-by-step guide to starting an Apache Spark application in a Windows environment, with the challenges faced and their resolutions.\nA Spark Application A Spark application can be a Windows-shell script, or it can be a custom program written in Java, Scala, Python, or R. You need the Windows executables installed on your system to run these applications. Scala statements can be entered directly in the CLI \u0026ldquo;spark-shell\u0026rdquo;; however, bundled programs need the CLI \u0026ldquo;spark-submit\u0026rdquo;. These CLIs come with the Windows executables.\nDownload and Install Spark Download Spark from https://spark.apache.org/downloads.html and choose \u0026ldquo;Pre-built for Apache Hadoop 2.7 and later\u0026rdquo; Unpack the downloaded artifact - spark-2.3.0-bin-hadoop2.7.tgz - into a directory. Clearing the Startup Hurdles You may follow Spark's quick start guide to start your first program. However, it is not that straightforward, and you will face various issues, as listed below along with their resolutions.\nPlease note that your user must have administrative permissions, or you need to run the command tool as administrator.\nIssue 1: Failed to Locate winutils Binary Even if you don't use Hadoop, Windows needs Hadoop to initialize the \u0026ldquo;hive\u0026rdquo; context. You get the following error if Hadoop is not installed.\nC:\\Installations\\spark-2.3.0-bin-hadoop2.7\u0026gt;.\\bin\\spark-shell 2018-05-28 12:32:44 ERROR Shell:397 - Failed to locate the winutils binary in the hadoop binary path java.io.IOException: Could not locate executable null\\bin\\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:379) at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:394) at org.apache.hadoop.util.Shell.<clinit>(Shell.java:387) at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80) at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611) at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273) at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261) at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:791) at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:761) at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:634) at org.apache.spark.util.Utils$anonfun$getCurrentUserName$1.apply(Utils.scala:2464) at org.apache.spark.util.Utils$anonfun$getCurrentUserName$1.apply(Utils.scala:2464) at scala.Option.getOrElse(Option.scala:121) This can be fixed by adding a dummy Hadoop installation. Spark expects winutils.exe in the Hadoop installation "<Hadoop Installation Directory>/bin/winutils.exe" (note the "bin" folder).\n Download Hadoop 2.7's winutils.exe and place it in a directory C:\Installations\Hadoop\bin\n Now set the environment variable HADOOP_HOME = C:\Installations\Hadoop. Now start spark-shell; you may get a few warnings, which you may ignore for now.\nC:\Installations\spark-2.3.0-bin-hadoop2.7>.\bin\spark-shell 2018-05-28 15:05:08 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable Setting default log level to "WARN". To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). Spark context Web UI available at http://kuldeep.local:4040 Spark context available as 'sc' (master = local[*], app id = local-1527500116728). Spark session available as 'spark'. Welcome to ____ __ / __/__ ___ _____/ /__ _\ \/ _ \/ _ `/ __/ '_/ /___/ .__/\_,_/_/ /_/\_\ version 2.3.0 /_/ Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_151) Type in expressions to have them evaluated. Type :help for more information. scala> Issue 2: File Permission Issue for /tmp/hive Let's run the first program as suggested by Spark's quick start guide. Don't worry about the Scala syntax for now.\nscala> val textFile = spark.read.textFile("README.md") 2018-05-28 15:12:13 WARN General:96 - Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/C:/Installations/spark-2.3.0-bin-hadoop2.7/jars/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/C:/Installations/spark-2.3.0-bin-hadoop2.7/bin/../jars/datanucleus-api-jdo-3.2.6.jar." 2018-05-28 15:12:13 WARN General:96 - Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath.
The URL "file:/C:/Installations/spark-2.3.0-bin-hadoop2.7/bin/../jars/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/C:/Installations/spark-2.3.0-bin-hadoop2.7/jars/datanucleus-rdbms-3.2.9.jar." 2018-05-28 15:12:13 WARN General:96 - Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/C:/Installations/spark-2.3.0-bin-hadoop2.7/bin/../jars/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/C:/Installations/spark-2.3.0-bin-hadoop2.7/jars/datanucleus-core-3.2.10.jar." 2018-05-28 15:12:17 WARN ObjectStore:6666 - Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0 2018-05-28 15:12:18 WARN ObjectStore:568 - Failed to get database default, returning NoSuchObjectException org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rw-rw-rw-; at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106) at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:194) at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114) at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102) You may ignore the plugin warnings for now, but "/tmp/hive on HDFS should be writable" has to be fixed.\nThis can be fixed by changing permissions on the "/tmp/hive" directory (which is C:/tmp/hive) using winutils.exe as follows. You may run basic Linux commands on Windows using winutils.exe.\nc:\Installations\hadoop\bin>winutils.exe ls \tmp\hive drw-rw-rw- 1 TT\kuldeep\Domain Users 0 May 28 2018 \tmp\hive c:\Installations\hadoop\bin>winutils.exe chmod 777 \tmp\hive c:\Installations\hadoop\bin>winutils.exe ls \tmp\hive drwxrwxrwx 1 TT\kuldeep\Domain Users 0 May 28 2018 \tmp\hive c:\Installations\hadoop\bin> Issue 3: Failed to Start Database "metastore_db" If you run the same command "val textFile = spark.read.textFile("README.md")" again, you may get the following exception:\n2018-05-28 15:23:49 ERROR Schema:125 - Failed initialising database. Unable to open a test connection to the given database. JDBC url = jdbc:derby:;databaseName=metastore_db;create=true, username = APP. Terminating connection pool (set lazyInit to true if you expect to start your database after your app). Original Exception: ------ java.sql.SQLException: Failed to start database 'metastore_db' with class loader org.apache.spark.sql.hive.client.IsolatedClientLoader$anon$1@34ac72c3, see the next exception for details.
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source) at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown Source) This can be fixed just by removing the "metastore_db" directory from the Spark installation "C:/Installations/spark-2.3.0-bin-hadoop2.7" and running the command again.\nRun Spark Application on spark-shell Run your first program as suggested by Spark's quick start guide.\nDataSet: 'org.apache.spark.sql.Dataset' is the primary abstraction of Spark. A Dataset maintains a distributed collection of items. In the example below, we will create a Dataset from a file and perform operations on it.\nSparkSession: This is the entry point to Spark programming: 'org.apache.spark.sql.SparkSession'.\nStart the spark-shell.\nC:\Installations\spark-2.3.0-bin-hadoop2.7>.\bin\spark-shell Spark context available as 'sc' (master = local[*], app id = local-1527500116728). Spark session available as 'spark'. Welcome to ____ __ / __/__ ___ _____/ /__ _\ \/ _ \/ _ `/ __/ '_/ /___/ .__/\_,_/_/ /_/\_\ version 2.3.0 /_/ Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_151) Type in expressions to have them evaluated. Type :help for more information. scala> The Spark shell initializes a Spark context 'sc' and a Spark session named 'spark'. We can get the DataFrameReader from the session, which can read a text file as a Dataset, where each line is read as an item of the dataset.
The following Scala commands create a dataset named "textFile" and then run operations on it, such as count(), first(), and filter().\nscala> val textFile = spark.read.textFile("README.md") textFile: org.apache.spark.sql.Dataset[String] = [value: string] scala> textFile.count() res0: Long = 103 scala> textFile.first() res1: String = # Apache Spark scala> val linesWithSpark = textFile.filter(line => line.contains("Spark")) linesWithSpark: org.apache.spark.sql.Dataset[String] = [value: string] scala> textFile.filter(line => line.contains("Spark")).count() res2: Long = 20 Some more operations: map(), reduce(), and collect().\nscala> textFile.map(line => line.split(" ").size).reduce((a, b) => if (a > b) a else b) res3: Int = 22 scala> val wordCounts = textFile.flatMap(line => line.split(" ")).groupByKey(identity).count() wordCounts: org.apache.spark.sql.Dataset[(String, Long)] = [value: string, count(1): bigint] scala> wordCounts.collect() res4: Array[(String, Long)] = Array((online,1), (graphs,1), (["Parallel,1), (["Building,1), (thread,1), (documentation,3), (command,,2), (abbreviated,1), (overview,1), (rich,1), (set,2), (-DskipTests,1), (name,1), (page](http://spark.apache.org/documentation.html).,1), (["Specifying,1), (stream,1), (run:,1), (not,1), (programs,2), (tests,2), (./dev/run-tests,1), (will,1), ([run,1), (particular,2), (option,1), (Alternatively,,1), (by,1), (must,1), (using,5), (you,4), (MLlib,1), (DataFrames,,1), (variable,1), (Note,1), (core,1), (more,1), (protocols,1), (guidance,2), (shell:,2), (can,7), (site,,1), (systems.,1), (Maven,1), ([building,1), (configure,1), (for,12), (README,1), (Interactive,2), (how,3), ([Configuration,1), (Hive,2), (system,1), (provides,1), (Hadoop-supported,1), (pre-built,1... scala> Run Spark Application on spark-submit In the last example, we ran the Spark application as a Scala script on spark-shell; now we will run a Spark application built in Java.
Unlike with spark-shell, we need to create the SparkSession ourselves, and at the end, the SparkSession must be stopped programmatically.\nLook at SparkApp.java below; it reads a text file and then counts the number of lines.\npackage com.korg.spark.test; import org.apache.spark.sql.Dataset; import org.apache.spark.sql.SparkSession; public class SparkApp { public static void main(String[] args) { SparkSession spark = SparkSession.builder().appName("SparkApp").config("spark.master", "local").getOrCreate(); Dataset<String> textFile = spark.read().textFile(args[0]); System.out.println("Number of lines " + textFile.count()); spark.stop(); } } Create the above Java file in a Maven project with the following POM dependencies:\n<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.korg.spark.test</groupId> <artifactId>spark-test</artifactId> <version>0.0.1-SNAPSHOT</version> <name>SparkTest</name> <description>SparkTest</description> <dependencies> <dependency> <groupId>org.apache.spark</groupId> <artifactId>spark-core_2.11</artifactId> <version>2.3.0</version> </dependency> <dependency> <!-- Spark dependency --> <groupId>org.apache.spark</groupId> <artifactId>spark-sql_2.11</artifactId> <version>2.3.0</version> </dependency> </dependencies> </project> Build the Maven project; it will generate the jar artifact "target/spark-test-0.0.1-SNAPSHOT.jar".\nNow submit this Spark application via spark-submit as follows (some logs excluded for clarity):\nC:\Installations\spark-2.3.0-bin-hadoop2.7>.\bin\spark-submit --class "com.korg.spark.test.SparkApp" E:\Workspaces\RnD\spark-test\target\spark-test-0.0.1-SNAPSHOT.jar README.md 2018-05-29 12:59:15 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2018-05-29 12:59:15 INFO SparkContext:54 - Running Spark version 2.3.0 2018-05-29 12:59:15 INFO SparkContext:54 - Submitted application: SparkApp 2018-05-29 12:59:15 INFO SecurityManager:54 - Changing view acls to: kuldeep 2018-05-29 12:59:16 INFO Utils:54 - Successfully started service 'sparkDriver' on port 51471.
2018-05-29 12:59:16 INFO SparkEnv:54 - Registering OutputCommitCoordinator 2018-05-29 12:59:16 INFO log:192 - Logging initialized @7244ms 2018-05-29 12:59:16 INFO Server:346 - jetty-9.3.z-SNAPSHOT 2018-05-29 12:59:16 INFO Server:414 - Started @7323ms 2018-05-29 12:59:17 INFO SparkUI:54 - Bound SparkUI to 0.0.0.0, and started at http://kuldeep.local:4040 2018-05-29 12:59:17 INFO SparkContext:54 - Added JAR file:/E:/Workspaces/RnD/spark-test/target/spark-test-0.0.1-SNAPSHOT.jar at spark://kuldeep.local:51471/jars/spark-test-0.0.1-SNAPSHOT.jar with timestamp 1527578957193 2018-05-29 12:59:17 INFO SharedState:54 - Warehouse path is \u0026#39;file:/C:/Installations/spark-2.3.0-bin-hadoop2.7/spark-warehouse\u0026#39;. 2018-05-29 12:59:18 INFO StateStoreCoordinatorRef:54 - Registered StateStoreCoordinator endpoint 2018-05-29 12:59:21 INFO FileSourceScanExec:54 - Pushed Filters: 2018-05-29 12:59:21 INFO ContextCleaner:54 - Cleaned accumulator 0 2018-05-29 12:59:22 INFO CodeGenerator:54 - Code generated in 264.159592 ms 2018-05-29 12:59:22 INFO BlockManagerInfo:54 - Added broadcast_0_piece0 in memory on kuldeep425.Nagarro.local:51480 (size: 23.3 KB, free: 366.3 MB) 2018-05-29 12:59:22 INFO SparkContext:54 - Created broadcast 0 from count at SparkApp.java:11 2018-05-29 12:59:22 INFO SparkContext:54 - Starting job: count at SparkApp.java:11 2018-05-29 12:59:22 INFO DAGScheduler:54 - Registering RDD 2 (count at SparkApp.java:11) 2018-05-29 12:59:22 INFO DAGScheduler:54 - Got job 0 (count at SparkApp.java:11) with 1 output partitions 2018-05-29 12:59:22 INFO Executor:54 - Fetching spark://kuldeep.local:51471/jars/spark-test-0.0.1-SNAPSHOT.jar with timestamp 1527578957193 2018-05-29 12:59:23 INFO TransportClientFactory:267 - Successfully created connection to kuldeep425.Nagarro.local/10.0.75.1:51471 after 33 ms (0 ms spent in bootstraps) 2018-05-29 12:59:23 INFO Utils:54 - Fetching spark://kuldeep.local:51471/jars/spark-test-0.0.1-SNAPSHOT.jar to C:\\Users\\kuldeep\\AppData\\Local\\Temp\\spark-5a3ddefe-e64c-4730-8800-9442ad72bdd1\\userFiles-0b51b538-6f4d-4ddd-85c1-4595749c09ea\\fetchFileTemp210545020631632357.tmp 2018-05-29 12:59:23 INFO Executor:54 - Adding file:/C:/Users/kuldeep/AppData/Local/Temp/spark-5a3ddefe-e64c-4730-8800-9442ad72bdd1/userFiles-0b51b538-6f4d-4ddd-85c1-4595749c09ea/spark-test-0.0.1-SNAPSHOT.jar to class loader 2018-05-29 12:59:23 INFO FileScanRDD:54 - Reading File path: file:///C:/Installations/spark-2.3.0-bin-hadoop2.7/README.md, range: 0-3809, partition values: [empty row] 2018-05-29 12:59:23 INFO CodeGenerator:54 - Code generated in 10.064959 ms 2018-05-29 12:59:23 INFO Executor:54 - Finished task 0.0 in stage 0.0 (TID 0). 
1643 bytes result sent to driver 2018-05-29 12:59:23 INFO TaskSetManager:54 - Finished task 0.0 in stage 0.0 (TID 0) in 466 ms on localhost (executor driver) (1/1) 2018-05-29 12:59:23 INFO TaskSchedulerImpl:54 - Removed TaskSet 0.0, whose tasks have all completed, from pool 2018-05-29 12:59:23 INFO DAGScheduler:54 - ShuffleMapStage 0 (count at SparkApp.java:11) finished in 0.569 s 2018-05-29 12:59:23 INFO DAGScheduler:54 - ResultStage 1 (count at SparkApp.java:11) finished in 0.127 s 2018-05-29 12:59:23 INFO TaskSchedulerImpl:54 - Removed TaskSet 1.0, whose tasks have all completed, from pool 2018-05-29 12:59:23 INFO DAGScheduler:54 - Job 0 finished: count at SparkApp.java:11, took 0.792077 s Number of lines 103 2018-05-29 12:59:23 INFO AbstractConnector:318 - Stopped Spark@63405b0d{HTTP/1.1,[http/1.1]}{0.0.0.0:4040} 2018-05-29 12:59:23 INFO SparkUI:54 - Stopped Spark web UI at http://kuldeep.local:4040 2018-05-29 12:59:23 INFO MapOutputTrackerMasterEndpoint:54 - MapOutputTrackerMasterEndpoint stopped! 2018-05-29 12:59:23 INFO BlockManager:54 - BlockManager stopped 2018-05-29 12:59:23 INFO BlockManagerMaster:54 - BlockManagerMaster stopped Congratulations! You are done with your first Spark application on a Windows environment.\nThis article was originally published on Dzone\n","id":166,"tag":"spark ; troubleshooting ; windows ; apache ; java ; scala ; technology","title":"Apache Spark on Windows","type":"post","url":"/post/apache_spark_on_windows/"},{"content":"Conducted a workshop on IoT with IOTNCR. I spoke on the IoT technology landscape, and many other speakers gave enlightening talks.\nIntro here Nagarro invites IOT enthusiasts to Ideation to Production:An Entrepreneurship Workshop,Apr 29 #IOTNCR #Nagarroiotncr https://t.co/kvQi1CARjG pic.twitter.com/QbY94UId6c\n— Kuldeep Singh (@kuldeepwilldo) April 27, 2017 The talk was about IoT fundamentals and building blocks: IoT nodes, IoT gateways, IoT platforms, visualization (AR & VR), and integrations. Detailed Slides IOT Landscape from Kuldeep Singh Social connect Nice to meet you all @Nagarro #iotncr #Nagarroiotncr @kanchanray @mfuloria pic.twitter.com/Gg1IXBLfzD\n— Kuldeep Singh (@kuldeepwilldo) April 29, 2017 ","id":167,"tag":"iot ; introduction ; fundamentals ; workshop ; nagarro ; iotncr ; talk ; speaker","title":"Speaker \u0026 Organizer - IOTNCR - Ideation to Production Workshop","type":"event","url":"/event/iotncr-ideation-to-production/"},{"content":" IOT 101 A Primer on Internet of Things from Kuldeep Singh The IoT can be described as “Connecting the Things to the internet”.\nA comprehensive IoT ecosystem consists of many different parts such as electronic circuitry, sensing and acting capability, embedded systems, edge computing, network protocols, communication networks, cloud computing, big data management and analytics, business rules, etcetera. This maze of varied parts can be better classified into 4 broad categories:\nDevices IoT devices are capable of sensing the environment and then acting upon the instructions that they receive.\nThese devices consist of sensors and actuators connected to the machines and the electronic environment. The electronic environment of the device may pre-process data sensed from the sensors and then send it to the IoT platform. This electronic environment can often also post-process the data or instructions received from the IoT platform before passing them to actuators for further action.\nConnectivity The second key part of the IoT system is connectivity.
This involves connecting the devices to the IoT platform to send the data sensed by the devices and receive instructions from the platform.\nThe electronic environment of the device has the capability to connect to the internet directly or via internet gateways. For connecting to the internet directly, there are multiple wired and wireless communication protocols, including some that use low-powered communication networks. Devices that connect via an internet gateway generally communicate over short-range radio-frequency protocols or wired protocols. The internet gateway, in turn, communicates with the IoT platform over long-range radio-frequency protocols.\nOne of the key elements, and challenges, of connectivity is security. An IoT system introduces a lot of data exchange interfaces between IoT devices, gateways, the IoT platform, integrated enterprise solutions, and visualization tools. The connections between these interfaces must safeguard the input and output data in transit.\nPlatform The IoT platform is the brain of an IoT system. It is responsible for efficiently receiving the data ingested from the devices, then analysing that data in real time and storing it for history building and for further processing in the future. It also provides services to remotely monitor, control and manage the devices.\nThe platform routes the data to other integrated enterprise systems based on the business rules available in the system and provides the services to visualize the data on multiple connected tools such as web interfaces, mobile and wearables. Finally, it aggregates the information to the context of the users so that the user gets the right information at the right time.\nBusiness Model The advent of IoT has the potential to redefine business models, which would open new opportunities for new sources of revenue, improved margins and higher customer satisfaction.\nThere are broadly 5 trends in business model innovation: product & service bundling, assured performance, pay as you go, process optimization, and predictive and prescriptive maintenance.\nConclusion IoT systems are complex due to the heterogeneity of technology and business needs. We expect this complexity to continue to increase, driven by the continuous proliferation of more IoT platforms that will seek to provide technology- and business-use-case-specific value propositions.\nIrrespective of the nature of the IoT system, we believe that a holistic IoT system can be explained as a sum of 4 parts: Devices, Connectivity, Platform and Business Model.\nIt was originally published at Nagarro Blogs and LinkedIn \n","id":168,"tag":"guide ; iot ; cloud ; fundamentals ; nagarro ; introduction ; technology","title":"IOT 101: A primer on Internet of Things","type":"post","url":"/post/iot_simplified/"},{"content":"This article describes the fundamentals of lightweight smart glasses such as Google Glass and the Vuzix M100. It also explains development practices, along with the pros and cons of their usage, their limitations, and the future of these glasses in our lives.\nGoogle Glass Enterprise Edition The Google Glass Enterprise Edition is a plain Android-based smart glass that you can wear; it performs operations just like a smartphone and, with the use of a small screen located in front of your right eye, can perform a decent range of tasks. Basically, they are nothing more than a pair of glasses powered by nothing less than Google's cutting-edge technology.
Our team of Google Glass geeks managed to get their hands on two pairs of the Google Glass Enterprise Edition.\nThe Google Glass Enterprise Edition is built for the enterprise industry and is best suited for hands-free use cases (for example, an automobile service assistant).\nUsage The Google Glass is equipped with a trackpad on the right side near the display. There are 4 main gestures to operate the glass.\n Slide forward/backwards: Slide forward or backwards to slide through various cards (forward for right and backwards for left). Tap: Tap to select any item. Swipe down: Swipe down to return to the previous menu. Swipe down with 2 fingers: Swipe down with 2 fingers to return to home and turn off the display. Specs Glass OS EE (based on Android) Intel-based chipset (OS is 32 bit) 2 GB RAM Sensors - Ambient Light Sensor, Inertial Measurement Units, Barometer, Capacitive Head Sensor, Wink (Eye Gesture) Sensor, Blink (Look-at-Screen) Sensor, Hinge Effect Sensor, Assisted GPS + GLONASS, Magnetometer (Digital Compass) 640 x 360 pixel screen Prism projector (equivalent of a 25 in/64 cm screen from 8 ft/2.4 m away) Wi-Fi and Bluetooth connectivity Touchpad/voice commands Screen can be cast over ADB Price tag of US$1,500 Battery - 1.5 hours for heavy use and 3.2 hours for medium usage, and 2.5 hours of charging time Hello World with Google Glass Applications for Google Glass are built using Android Studio. Follow the steps below to build your first program for Google Glass.\nBefore getting into development, please understand the design principles of Google Glass here\n Open Android Studio Go to File > New Project Provide the project and package name, and go Next Check the Glass option from the checklist, uncheck the rest of the options, and go Next Select Immersion Activity or Live Card activity and click Next Edit the activity name and package and click Finish Android Studio will initialize a project with the mentioned activity. It generates the following MainActivity class:\n..... /** * An {@link Activity} showing a tuggable "Hello World!" card. * <p> * The main content view is composed of a one-card {@link CardScrollView} that provides tugging * feedback to the user when swipe gestures are detected. * If your Glassware intends to intercept swipe gestures, you should set the content view directly * and use a {@link com.google.android.glass.touchpad.GestureDetector}. * @see <a href="https://developers.google.com/glass/develop/gdk/touch">GDK Developer Guide</a> */ public class MainActivity extends Activity { /** * {@link CardScrollView} to use as the main content view. */ private CardScrollView mCardScroller; /** * "Hello World!" {@link View} generated by {@link #buildView()}. */ private View mView; @Override protected void onCreate(Bundle bundle) { super.onCreate(bundle); mView = buildView(); mCardScroller = new CardScrollView(this); mCardScroller.setAdapter(new CardScrollAdapter() { @Override public int getCount() { return 1; } @Override public Object getItem(int position) { return mView; } @Override public View getView(int position, View convertView, ViewGroup parent) { return mView; } @Override public int getPosition(Object item) { if (mView.equals(item)) { return 0; } return AdapterView.INVALID_POSITION; } }); // Handle the TAP event.
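// The single-card adapter above always returns the same "Hello World" view;
// the listener below simply plays the DISALLOWED sound, because a bare TAP
// has no action on this card.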
mCardScroller.setOnItemClickListener(new AdapterView.OnItemClickListener() { @Override public void onItemClick(AdapterView<?> parent, View view, int position, long id) { // Plays disallowed sound to indicate that TAP actions are not supported. AudioManager am = (AudioManager) getSystemService(Context.AUDIO_SERVICE); am.playSoundEffect(Sounds.DISALLOWED); }}); setContentView(mCardScroller); } @Override protected void onResume() { super.onResume(); mCardScroller.activate(); } @Override protected void onPause() { mCardScroller.deactivate(); super.onPause(); } /** * Builds a Glass styled "Hello World!" view using the {@link CardBuilder} class. */ private View buildView() { CardBuilder card = new CardBuilder(this, CardBuilder.Layout.TEXT); card.setText(R.string.hello_world); return card.getView(); } } Here it runs. Known issues/constraints Following is the list of known issues known to us:\n The Glass gets hot while doing resource-intensive tasks and becomes unusable. Battery life is not so great. It takes a long time to charge. Vuzix M100 The Vuzix M100 Smart Glass is an Android-based wearable device; it has a monocular display with a wide variety of enterprise applications on it. The Vuzix M100 offers hands-free use of the glass with most of the Android smartphone features.\nUsage The Vuzix M100 has 4 physical buttons which are used to control the input to the device.\n Forward button: The forward button is placed on the top front portion of the glass and slides through various cards/menus in the forward direction. Backward button: The backward button is placed on the top middle portion of the glass and slides through various cards/menus in the backward direction. Select button: The select button is placed on the top back portion of the glass and helps to select any item, or to return to the previous menu if long-pressed. Power button: The power button is placed on the bottom back portion of the glass and helps to turn the device/display on and off. Specs Android OS 4.0.4 OMAP4430 CPU at 1GHz 1 GB RAM Sensors and actuators - 3 DOF gesture engine (L/R, N/F), Proximity, Ambient light, 3-degree of freedom head tracking, 3 axis gyro, 3 axis accelerometer, 3 axis mag/integrated compass 400 x 240, 16:9, WQVGA, full color display Wi-Fi and Bluetooth connectivity Physical buttons and voice command inputs ADB connectivity Price tag of US$999.99 Battery - 1 hour for heavy use and 3 hours for light usage, and 1 hour of charging time. Hello World with Vuzix M100 Install the Vuzix SDK: Open the Android SDK Manager Go to Tools > Manage Add-on Sites… > User Defined Sites tab > New… Enter https://www.vuzix.com/k79g75yXoS/addon.xml and OK Close it and the SDK Manager will refresh with the M100 SDK Add-on in the Android 4.0.3 (API15) and Extras sections. Select the add-on packages you want to install and click Install packages… Accept the license for each package you want to install and click Install. The SDK Manager will now start downloading and installing the packages. After the SDK Manager finishes downloading and installing the new packages, you will need to restart Android Studio before the Vuzix SDK is available to use for your apps. Steps to use the SDK File > New Project > Select Android > Android Application Project and click Next. Set Minimum Required SDK to an API version no later than API 15, as this is the API version that runs on the M100. Set Target SDK: to API 15: Android 4.0.3 (IceCreamSandwich).
Select Vuzix M100 Add-On (Vuzix Corporation) (API 15) from the Compile With: drop-down box. Everything else can be configured as you like. After the project has been created you will have access to all the Vuzix API libraries. Hello World here The following code can be used to create a voice controlled app to create and manipulate circles and squares (graphically).\nMainActivity.java \n// // A simple activity class that allows us to use Vuzix Voice Control // to create and manipulate circles and squares (graphically) // public class MainActivity extends Activity { private ColorDemoSpeechRecognizer mSR; private Canvas mCanvas; private Bitmap mBG; private ArrayList\u0026lt;ShapeInfo\u0026gt; mShapes; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); // set up speech recognizer mSR = new ColorDemoSpeechRecognizer(this); if (mSR == null) { Toast.makeText(this,\u0026#34;Unable to create Speech Recognizer\u0026#34;,Toast.LENGTH_SHORT).show(); return; } loadCustomGrammar(); Toast.makeText(this,\u0026#34;Speech Recognizer CREATED, turning on\u0026#34;,Toast.LENGTH_SHORT).show(); android.util.Log.i(\u0026#34;VUZIX-VoiceDemo\u0026#34;,\u0026#34;Turning on Speech Recognition\u0026#34;); mSR.on(); // Set up drawing canvas mBG = Bitmap.createBitmap(432,200,Bitmap.Config.ARGB_8888); mCanvas = new Canvas(mBG); } @Override public void onDestroy() { if (mSR != null) { mSR.destroy(); } super.onDestroy(); } @Override public void onPause() { android.util.Log.i(\u0026#34;VUZIX-VoiceDemo\u0026#34;,\u0026#34;onPause(), stopping speech recognition\u0026#34;); if (mSR != null) { mSR.off(); } super.onPause(); } @Override public void onResume() { super.onResume(); android.util.Log.i(\u0026#34;VUZIX-VoiceDemo\u0026#34;, \u0026#34;onResume(), restarting speech recognition\u0026#34;); if (mSR != null) { mSR.on(); } } @Override public boolean onCreateOptionsMenu(Menu menu) { getMenuInflater().inflate(R.menu.menu_main, menu); return true; } @Override public boolean onOptionsItemSelected(MenuItem item) { int id = item.getItemId(); //noinspection SimplifiableIfStatement if (id == R.id.action_settings) { return true; } return super.onOptionsItemSelected(item); } private void loadCustomGrammar(){ android.util.Log.i(\u0026#34;VUZIX-VoiceDemo\u0026#34;, \u0026#34;copyGrammar()\u0026#34;); final android.content.res.Resources resources = getResources(); final android.content.res.AssetManager assets = resources.getAssets(); try { final java.io.InputStream fi = assets.open(\u0026#34;drawing_custom.lcf\u0026#34;); byte[] buf = new byte[fi.available()]; while (fi.read(buf) \u0026gt; 0){ } fi.close(); mSR.addGrammar(buf,\u0026#34;Color Demo Grammar\u0026#34;); } catch(java.io.IOException ex) { android.util.Log.e(\u0026#34;VUZIX-VoiceDemo\u0026#34;,\u0026#34;\u0026#34;+ex); } android.util.Log.i(\u0026#34;VUZIX-VoiceDemo\u0026#34;, \u0026#34;Done writing grammar files.\\n\u0026#34;); } } ColorDemoSpeechRecognizer.java\n// our custom extension of the com.vuzix.speech.VoiceControl class. This class // will contain the callback (onRecognition()) which is used to handle spoken // commands detected. 
public class ColorDemoSpeechRecognizer extends VoiceControl { private int mSelectedItem = 0; private final Context mContext; // context kept for UI feedback from the callback public ColorDemoSpeechRecognizer(Context context) { super(context); mContext = context; } protected void onRecognition(String result) { android.util.Log.i("VUZIX-VoiceDemo","onRecognition: "+result); Toast.makeText(mContext,"Speech Command Received: "+result,Toast.LENGTH_SHORT).show(); } } This program shows a toast when it recognizes a speech command.\nKnown issues/constraints Following is the list of known issues:\n The device is not well balanced. The display quality is not good enough. Battery life is very short. Conclusion Smart glasses are a unique piece of technology with a lot of potential for the future.\nThey can be used in various domains where hands-free operation is the key requirement, and they can be used to improve efficiency and effectiveness.\nThough they suffer quite a few issues due to the technology available at this time, these are issues that will be overcome with time.\n","id":169,"tag":"xr ; ar ; google ; enterprise ; android ; vuzix ; smart-glass ; tutorial ; technology","title":"Exploring the Smart Glasses","type":"post","url":"/post/exploring-google-glass/"},{"content":"Organized the Nagarro LSD event on Industry 4.0, demonstrated use cases on the connected services concept, and was a panelist for a discussion on\n Smarter Services - Predictive, Always Connected, Personalized Global experts will debate ways to create smarter services based around predictive maintenance, always connected devices and personalized solutions.\n Whether we call Industry 4.0 an evolution or a revolution, there is little doubt that it will have a disruptive impact on the manufacturing industry. This 4-hour event embraced Industry 4.0 and the nuances of this concept, learned from world-class experts.\nIndustry 4.0 Is Industry 4.0 the next wave of opportunity for IT industry professionals, OR is it just another fad?\n Learn about the concept of Industry 4.0 and its relevance in the connected world. Discuss real-world examples that leverage the latest technologies such as sensors, security, cloud, mobile, and analytics to create connected factories. Debate ways to create smarter services based around predictive maintenance, connected devices, and personalized solutions. Solve an Industry 4.0 use case and test your agility to deliver Industry 4.0 solutions. Explore how Assisted Reality, Augmented Reality and Virtual Reality are poised to play a key role in the Industry 4.0 era. Event Details Embrace Industry 4.0 & learn from world class experts over an exhilarating 4 hr event #smartindustry #industry40 https://t.co/UNsV10FWCi pic.twitter.com/AkhRtoqAvP\n— Nagarro (@Nagarro) January 11, 2017 We are ready!#lsdbynagarro pic.twitter.com/4bYUEx7rUE\n— Kuldeep Singh (@kuldeepwilldo) January 19, 2017 Panel Discussion.
Video Link https://www.facebook.com/nagarroinc/videos/10154260734290963/\nFeedback and Social Collaboration Matrix redefined @ nagarro #lsdbynagarro https://t.co/Y9C5AwoXzQ\n— Kuldeep Singh (@kuldeepwilldo) January 19, 2017 We moved a planet from a table to another #HoloLens #lsdbynagarro pic.twitter.com/dYbAjpI5tl\n— Ajay Yadav (@ajayyadav48) January 19, 2017 ","id":170,"tag":"iot ; industry4.0 ; nagarro ; lsd ; ar ; vr ; cloud ; panel discussion ; speaker","title":"Panelist \u0026 Organizer - Nagarro LSD - Industry 4.0","type":"event","url":"/event/nagarro-lsd-industory-4-0/"},{"content":"Technological evolution causes disruption. It replaces traditional, less optimal, complex, lengthy, and costly technology, people, and processes with economical, innovative, optimal, and simpler alternatives. We have seen disruption by mobile phones, and up next is disruption from IoT, wearables, AR-VR, machine learning, AI, and other innovations. Software development methodology, tools, and people will also face disruption. I expect that the traditional developer will be replaced by multi-skilled or full-stack developers.\nIoT Development Challenges Heterogeneous connectivity IoT doesn't just connect Things but also a variety of Internet-enabled systems, processes, and people. IoT solutions involve integrating hardware with software. IoT development requires knowledge of the entire connected environment.\nHuge ecosystem IoT solutions involve multiple technologies ranging from embedded to front end, wearables, and connected enterprise solutions. IoT solution implementation needs multiple specialized skills at different levels, such as native developers (C, C++, embedded), gateway developers (Arduino, C, C++, Python, Node.js), network developers, cloud developers, web developers (.NET, Java, PHP, Python), mobile app developers (Android, iOS, Windows), IoT platform developers (Azure, AWS, Xively, Predix), AR-VR developers (Smart Glass, HoloLens, Unity), Big Data developers, enterprise solutions developers (ERP, CRM, SAP, Salesforce), business analysts, graphics designers, build engineers, architects, project managers, etc.\nDevelopment cost The cost of building IoT solutions is high due to the involvement of multiple people with multiple skillsets. Looking at the huge ecosystem, it is a difficult and costly affair to coordinate and maintain consistency among multiple people from different backgrounds and experience levels. With the traditional developer mindset, we would need at least 20 people to build an end-to-end IoT PoC, which is a huge cost. Global development is also difficult due to the unavailability of real environments and hardware, and the extreme coordination needs. IoT solutions require stakeholders to be more connected to each other, which results in higher costs as more and more people are needed.\nSolving the Chaos The traditional developer mindset, which binds people to Java or .NET or an otherwise specific technology, will be discouraged in the near future.
People with multiple skills or full-stack knowledge—who work with multiple technologies, tools, and processes—will reduce the cost of IoT development, and thus, they will replace the traditional workers.\nIt was originally published at Dzone and LinkedIn \n","id":171,"tag":"transformation ; iot ; disruption ; multi-skill ; technology","title":"Another IoT Disruption: A Need of Multi-Skilled Devs","type":"post","url":"/post/iot-disruption-multi-skilled-people/"},{"content":"Insects are the most diverse species of animals found on earth. While every species is required to maintain the balance of nature, excessive accumulation of any one type can have devastating effects on humans and the environment.\nInsect management relies on early detection and the accuracy of insect population monitoring techniques. Integrated data gathering about insect populations, together with ecological factors like temperature, humidity, light, etc., makes it possible to execute appropriate pest control at the right time in the right place.\nA well-known technique to perform pest control monitoring is based on the use of insect traps conveniently spread over the specified control area. Human operators typically perform periodic surveys of the traps disseminated through the field. This is a labour-, time- and cost-consuming activity. This paper proposes an affordable, autonomous monitoring system that uses image processing to detect the insect population at regular intervals and sends that data to a web server via an IoT protocol (MQTT in this case). A web dashboard visualises the insect density at different time instances, with a warning system if the density crosses a set threshold value.\nSolution Concept Due to the rapid development of digital technology, there is an opportunity for image processing technology to be used for the automation of insect detection. This paper illustrates a system based on a low-cost, low-power imaging and processing device operated through a wireless sensor network that is able to automatically acquire images of the trapping area and process them to count the insect density. The data is then sent to a web server where it gets stored in a database. The web server hosts a web dashboard through which human operators can remotely monitor the insect traps. The insect monitoring system is developed using open source software on a Linux-based machine. The app runs on a small device like a Raspberry Pi 2 with the help of the Raspbian Jessie OS. The workflow is such that the application running on the Raspberry Pi 2 captures a few seconds' worth of video footage at regular intervals. The frames captured are then processed using the OpenCV image processing library to calculate the insect density during that time frame. Once the data is available, it is then sent to a web server using the IoT protocol MQTT (Message Queuing Telemetry Transport). The web server receives the data and stores it in the database. We have a web dashboard hosted on the web server. It retrieves the data from the database and visualises the insect density at different time instances, with a warning system if the density crosses a set threshold value.\nHere are the key components of the system: OpenCV Image Processing - OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision and image processing.
We use the C/C++ interface to capture images from the webcam and process those images to calculate the insect count.\n Mosquitto MQTT Broker - Mosquitto is an open source message broker that implements the MQTT protocol for carrying messages using the publisher/subscriber method. We send out the JSON messages from our application to the web server.\n Paho MQTT client for Python - The Paho MQTT Python client library implements the MQTT protocol and provides an easy-to-use API which enables our Python script to connect to the MQTT broker and receive messages from the Raspberry Pi application.\n SQLite Database - SQLite provides a lightweight disk-based database that doesn't require a separate server process and allows accessing the database using a nonstandard variant of the SQL query language. We use the SQLite database to store the data received by the Python script.\n Apache web server for the User Web Dashboard - We have used apache2 as our web server to host the web dashboard. The dashboard is written as a Python script which we execute as a CGI program in the Apache web server.\n Implementation Strategy Our webcam will be located inside the trap, fixed at the top so that it faces the bottom of the trap. It will be connected to the Raspberry Pi (located somewhere outside the trap) with a USB cable.\nThe C++ application deployed on the Raspberry Pi accesses the webcam to capture images of the trap and processes them using the OpenCV library to calculate the insect count. When the program is first initialised, it captures initial video frames of the empty insect trap to be used as a reference. After that, it captures the change in the scene at regular intervals. The image frames captured are then processed using OpenCV functions. In our system, we use the background subtraction algorithm to generate foreground masks that clearly show the insects trapped. The following figure shows the image captured by the webcam and the foreground mask generated. The foreground masks generated are then processed to calculate the number of insects trapped. A JSON string is created with the current timestamp and the insect count. This string is then sent to the web server by publishing it using the Mosquitto broker and its corresponding mosquitto_pub client. A Python script executing on the web server receives this JSON message. It uses the Paho MQTT client to subscribe to the same topic as our C++ application. The JSON data received is parsed and stored in the SQLite database on the server. The SQLite database contains only a single table, called insect_density. It contains two columns.\n date_time: It contains datetime data and stores the timestamp received in the JSON string. It also acts as the primary key. density: It contains integer data and stores the number of insects at the time instance stored in the date_time column. Finally, the data stored in the database is consumed by the web dashboard, which visualises the data in a line chart format. The dashboard is written as a Python script that is executed as a CGI program on the Apache server and can be accessed using any browser. The dashboard also shows a warning message if the insect density exceeds a set threshold limit.
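To make the publish step concrete: the PoC sends its JSON string with the C++ mosquitto_pub client and consumes it with the Paho Python client. The sketch below shows the same message flow using the Eclipse Paho Java client instead (illustrative only; the broker address and topic name are hypothetical, and the payload mirrors the date_time/density design described above):

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

// Minimal sketch: publish one insect-density reading as JSON over MQTT.
public class InsectDensityPublisher {
    public static void main(String[] args) throws MqttException {
        // Hypothetical broker URI; the PoC runs Mosquitto on the web server.
        MqttClient client = new MqttClient("tcp://webserver.local:1883", "insect-trap-1");
        client.connect();
        // Same shape as the PoC's JSON string: a timestamp plus the computed count.
        String payload = "{\"date_time\":\"2017-06-01 10:15:00\",\"density\":12}";
        MqttMessage message = new MqttMessage(payload.getBytes());
        message.setQos(1); // at-least-once delivery, so readings are not silently lost
        client.publish("traps/insect_density", message); // hypothetical topic name
        client.disconnect();
    }
}

On the server side, the Paho Python subscriber parses this payload and inserts it into the insect_density table described above.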
Benefits Monitoring of insect infestation relies on manpower; however, automatic monitoring has been advancing in order to minimize human effort and errors. The aim of this paper is to establish a fully computerized system that overcomes traditional and classical methods of monitoring and is environmentally and user friendly. The main advantages of our proposal are:\n A more accurate and reliable system for monitoring insect activity, as it mitigates human-made errors Reduced human effort, hence lower time and cost Image analysis provides a realistic opportunity for the automation of insect pest detection, with a high degree of accuracy, at least when tried under controlled environments Effective use of a pest surveillance and monitoring system, which may result in efficient timing of interventions The user interface and the software are friendly and easy to follow; they are understandable by normal users Higher scalability, being able to deploy in small monitoring areas (greenhouses) as well as in large plantation extensions Trap monitoring data may be available in real time through an Internet connection Constraints and Challenges Currently, the insect monitoring system has the following limitations that can be worked upon in the coming future:\n Presently, the configuration for the server IP address, the buffer message limit and the density threshold value has to be set manually. We plan to create a configuration page for the user to remotely configure the application running on the Raspberry Pi. We had planned to send the image of the trap along with the timestamp and the insect count in the JSON string via MQTT. We ran into problems while serialising the image, and it became difficult to do so within the given time frame. Conclusion The insect monitoring system displays a lot of potential and can be extended to implement advanced features and functionalities. The advanced feature list includes, but is not limited to, the following:\n Higher scalability: The system can be scaled to a large number of devices over large areas. Stream Analytics: We can port the system over to the cloud and apply stream analytics and/or machine learning algorithms to get accurate predictions and patterns of insect activity according to ecological factors. Advanced Image Processing: In the future, other image processing techniques may be used to make the detection and extraction of insects more efficient and accurate. As of now, we are only processing the count of insects without identifying what kind of insect it is. Identification of different types of insects can be done. Show Live Feed: We can incorporate an option to show a live feed from the webcam. The web dashboard can contain an option which will enable the user to select a particular device and show the live feed of the insect trap from the webcam. References http://www.pyimagesearch.com/2016/04/18/install-guide-raspberry-pi-3-raspbian-jessie-opencv-3/ http://opencv-srf.blogspot.in/2011/09/capturing-images-videos.html http://docs.opencv.org/3.1.0/d1/dc5/tutorial_background_subtraction.html http://raspberrywebserver.com/sql-databases/accessing-an-sqlite-database-with-python.html https://pypi.python.org/pypi/paho-mqtt/1.1 https://help.ubuntu.com/lts/serverguide/httpd.html http://raspberrywebserver.com/cgiscripting/rpi-temperature-logger/building-a-web-user-interface-for-the-temperature-monitor.html "Pest Detection and Extraction Using Image Processing Techniques" by Johnny L. Miranda, Bobby D. Gerardo, and Bartolome T. Tanguilig III. International Journal of Computer and Communication Engineering, Vol. 3, No.
3, May 2014 "Counting Insects Using Image Processing Approach" by Mu'ayad AlRawashda, Thabet AlJuba, Mohammad AlGazzawi, Suzan Sultan and Rami Arafeh. Palestine Polytechnic University "Monitoring Of Pest Insect Traps Using Image Sensors & dsPIC" by C. Thulasi Priya, K. Praveen, A. Srividya. International Journal of Engineering Trends and Technology, Volume 4, Issue 9, September 2013 "Monitoring Pest Insect Traps by Means of Low-Power Image Sensor Technologies" by Otoniel López, Miguel Martinez Rach, Hector Migallon, Manuel P. Malumbres, Alberto Bonastre and Juan J. Serrano. Miguel Hernandez University, Spain This concept was supported by my colleague Ricktam K at the IoT Center of Excellence of Nagarro\n","id":172,"tag":"insect monitoring ; IOT ; raspberry pi ; concept ; c++ ; opencv ; python ; MQTT ; nagarro ; technology","title":"IoT based insect monitoring concept","type":"post","url":"/post/iot-insect-monitoring/"},{"content":"The 'Internet of Things' (IoT) presents tremendous opportunity across all industry verticals and business domains. Specifically, the 'Industrial Internet of Things' (IIoT) promises to increase the value proposition by enabling self-optimizing business processes. As discussed in the post "The Driving Forces Behind IoT", there are several challenges to be addressed to realize IoT's full potential, but the following steps will help organizations systematically make their products part of the IoT journey and stay in the IoT race. These steps will help modernize existing products or define a strategy to build futuristic products.\n5 Steps of the IoT race 1. Preparation Analyze the products and services for their strengths and weaknesses. Analyze the sensing and acting capabilities of the products; in other words, see what ambient information the product can sense, how it can be actuated, and how it can listen to instructions. Analyze the integrated, embedded services/components of the product and benchmark the product on usability, self-service capability, optimizations, minimal human interference, etc. Research what competitive products are offering, and the customer experience they provide.\nDefine a clear IoT vision for your organization and address where you want to stand, what your targets are, and whom you want to compete with, and then define an IoT vision for the products and services aligned to your organization's IoT vision.\nPlan the ROI in terms of projected financials, reductions in negative ROI (human training, involvement, etc.), and innovation. Consider the complete ecosystem for the products and services, provided internally or externally by partners, suppliers and dealers. Plan the roadmap for achieving the IoT vision.\n2. Registration Once preparation is over, you have a clear vision and roadmap. Now it's time to register for the IoT race by engaging each of the stakeholders (internal as well as external) in the IoT program. Declare your participation in the IoT race by letting your employees, customers, and partners know about the benefits derived from the new vision. Collect views from the stakeholders and adjust the IoT program so that it also meets their expectations.\n3. Warm-up During warm-up, you should be able to complete the technology exploration and feasibility studies. Define a common framework to generate consistent output and incorporate feedback.
The framework should include processes for building various proofs of concept for the integrated product with the potential sensory architecture, cloud platforms, data storage, communication protocols and networks. The framework should also have processes to plan the development of pilot projects and get feedback on them. If needed, new devices (such as mobile phones, smart glasses, wearables or BYOD) may be introduced here to enhance the customer experience.\n4. Participation Once you are done with the warm-up, you know the potential approaches to be followed, and the devices to be brought into the game. Now you need to plan for the step-by-step development of the product features with a highly agile and flexible methodology, keeping each deployment as small as possible so that you derive the benefits incrementally, incorporating the feedback at the same time. The implementation process also needs to be optimized from time to time by building accelerators, monitors and tools which may help in speeding up the implementation, e.g., building reusable components, frameworks, standards, guidelines, best practices and checklists.\n5. Stay ahead While delivering the product features, also plan for their scaling. Develop SDKs to implement extensions and plugins on the product features. Call for new/existing partners and service integrators to customize and enrich your product. Let the partner ecosystem exploit the product's capability.\nHope these five steps will jump-start your IoT journey and keep you in the race as a front runner. At every step, feedback is really important; every investment needs to be well validated against ROI, then well rehearsed in a pilot implementation and well exploited by partners. In the end, it will be a win-win situation for all the stakeholders.\nIt was originally published at LinkedIn \n","id":173,"tag":"race ; IOT ; disruption ; product ; technology","title":"5 Steps of the IoT race","type":"post","url":"/post/5-steps-of-the-iot-race/"},{"content":"The industry has been going through an evolution. Industry 4.0 is the fourth industrial revolution, where the key will be digital transformation. Industry 4.0 creates what has been called a "smart factory". Within the modular structured smart factories, cyber-physical systems monitor physical processes, create a virtual copy of the physical world and make decentralized decisions. Over the Internet of Things, cyber-physical systems communicate and cooperate with each other and with humans in real time, and via the Internet of Services, both internal and cross-organizational services are offered and used by participants of the value chain. The core of Industry 4.0 is technology. Digital trust and data analytics are the foundation of this. The focus will be on people and culture to drive transformation. All this is achieved through the Industrial Internet of Things (IIoT), which falls under the ecosystem of IoT. But the industry is complex in itself: there is a need to connect different kinds of heterogeneous systems with high availability and scalability, without compromising on the security aspects of the system. Hence there is a need for a specific architecture reference specially designed for IIoT.
This document is all about the architecture reference for IIoT.\nFoundation Principles and Concepts Agility With the ever-growing challenges and opportunities that face companies today, a company's IT division is a competitive differentiator that supports business performance, product development, and operations. A leading benefit companies seek when creating an IIoT solution is the ability to efficiently quantify opportunities. These opportunities are derived from reliable sensor data, remote diagnostics, and remote command and control between users and devices. Companies that can effectively collect these metrics open the door to exploring different business hypotheses based on their IIoT data. For example, manufacturers can build predictive analytics solutions to measure, test, and tune the ideal maintenance cycle for their products over time. The IIoT lifecycle comprises multiple stages that are required to procure, manufacture, onboard, test, deploy, and manage large fleets of physical devices.\nInteroperability The ability of machines, devices, sensors, and people to connect and communicate with each other via the Internet of Things (IoT) or the Internet of People (IoP). In order for a company's IoT strategy to be a competitive advantage, the IT organization relies on having a broad set of tools that promote interoperability throughout the IoT solution and among a heterogeneous mix of devices.\nInformation transparency The ability of information systems to create a virtual copy of the physical world by enriching digital plant models with sensor data. This requires the aggregation of raw sensor data into higher-value context information.\nSecurity Security is an essential quality of an IIoT system, and it is tightly related to specific security features which are often a basic prerequisite for enabling Trust and Privacy qualities in a system. It is the ability of the system to enforce the intended confidentiality, integrity and service access policies, and to detect and recover from failures in these security mechanisms.\nPerformance and Scalability This perspective addresses two quality properties that are closely related: performance and scalability. Both are, compared to traditional information systems, even harder to cope with in a highly distributed scenario such as IIoT. It is the ability of the system to predictably execute within its mandated performance profile and to handle increased processing volumes in the future if required. This matters for any system with complex, unclear, or ambitious performance requirements; systems whose architecture includes elements whose performance is unknown; and systems where future expansion is likely to be significant.\nArchitecture Components Below is the high-level architecture of an IIoT system.\nProtocols (Device Connectivity) One of the issues encountered in the transition to the IIoT is the fact that different edge-of-network devices have historically used different protocols for sending and receiving data. While there are a number of different communication protocols currently in use, such as OPC-UA, the Message Queueing Telemetry Transport (MQTT) transfer protocol is quickly emerging as the standard for IIoT, due to its lightweight overhead, publish/subscribe model, and bidirectional capabilities.\nService Orchestration Service orchestration happens at this level, where the system interacts with different business adaptors. The services are event-based in nature, following the principles of the Reactive Manifesto: the services will be responsive, resilient, elastic and message driven.\nRule Engine The Rules Engine makes it possible to build IIoT applications that gather, process, analyze and act on data generated by connected devices at global scale without having to manage any infrastructure. The Rules Engine evaluates inbound messages published into the IIoT system and transforms and delivers them to another device or a cloud service, based on business rules you define. A rule can apply to data from one or many devices, and it can take one or many actions in parallel.
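As a toy illustration of that evaluate-and-act loop (hypothetical; no specific rule-engine product is implied), a rule can be modeled as a condition over an inbound telemetry message plus an action to trigger when it matches:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Minimal sketch of a rules engine: each rule pairs a condition on an
// inbound telemetry message with an action to run when the condition matches.
public class SimpleRuleEngine {

    static class Rule {
        final String name;
        final Predicate<Map<String, Object>> condition;
        final Consumer<Map<String, Object>> action;
        Rule(String name, Predicate<Map<String, Object>> condition,
             Consumer<Map<String, Object>> action) {
            this.name = name;
            this.condition = condition;
            this.action = action;
        }
    }

    private final List<Rule> rules = new ArrayList<>();

    void addRule(Rule rule) { rules.add(rule); }

    // Evaluate one inbound message against every rule; a real engine would
    // dispatch the matching actions in parallel and at scale.
    void onMessage(Map<String, Object> message) {
        for (Rule rule : rules) {
            if (rule.condition.test(message)) {
                rule.action.accept(message);
            }
        }
    }

    public static void main(String[] args) {
        SimpleRuleEngine engine = new SimpleRuleEngine();
        engine.addRule(new Rule("overheat-alert",
                m -> ((Number) m.getOrDefault("temperature", 0)).doubleValue() > 80.0,
                m -> System.out.println("ALERT: " + m.get("deviceId") + " is overheating")));

        Map<String, Object> telemetry = new HashMap<>();
        telemetry.put("deviceId", "press-42");
        telemetry.put("temperature", 93.5);
        engine.onMessage(telemetry); // prints the overheating alert
    }
}

A production rules engine layers persistence, scaling, fan-out to other devices or cloud services, and delivery guarantees on top of this basic loop.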
Rule Engine The Rules Engine makes it possible to build IIoT applications that gather, process, analyze and act on data generated by connected devices at global scale, without having to manage any infrastructure. The Rules Engine evaluates inbound messages published into the IIoT system and transforms and delivers them to another device or a cloud service, based on business rules you define. A rule can apply to data from one or many devices, and it can take one or many actions in parallel, as the sketch below illustrates. 
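The rule abstraction itself is easy to picture in code. The following is a minimal, platform-neutral Java sketch (all names and thresholds are illustrative; real platforms typically express such rules declaratively): a rule pairs a condition on an inbound message with one or more actions.

import java.util.List;
import java.util.Map;
import java.util.function.Consumer;
import java.util.function.Predicate;

public class RuleEngineDemo {
    // A rule pairs a condition over an inbound message with the actions to run.
    static class Rule {
        final Predicate<Map<String, Object>> condition;
        final List<Consumer<Map<String, Object>>> actions;
        Rule(Predicate<Map<String, Object>> condition,
             List<Consumer<Map<String, Object>>> actions) {
            this.condition = condition;
            this.actions = actions;
        }
        void evaluate(Map<String, Object> message) {
            if (condition.test(message)) {
                actions.forEach(action -> action.accept(message)); // may run in parallel
            }
        }
    }

    public static void main(String[] args) {
        // Illustrative rule: react when a device reports an over-temperature reading.
        Rule overheat = new Rule(
                msg -> ((Number) msg.getOrDefault("tempC", 0)).doubleValue() > 80.0,
                List.of(msg -> System.out.println("alerting operators: " + msg),
                        msg -> System.out.println("forwarding to cold storage: " + msg)));
        overheat.evaluate(Map.of("deviceId", "press-17", "tempC", 92.4));
    }
}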
Device Provisioning (Data Identity and State Store) Device provisioning uses the identity store to create identities for new devices in the scope of the system or to remove devices from the system. Devices can also be enabled or disabled. When they are disabled, they have no access to the system, but all access rules, keys, and metadata stay in place.\nSecurity Security is fundamental to the system. The data across every tier goes through proper security mechanisms.\nThis article was also supported by Ram Chauhan\n","id":174,"tag":"architecture ; IOT ; industrial ; Cloud ; smart factory ; Industry 4.0 ; enterprise ; technology","title":"Industrial IoT (IIOT) Reference Architecture","type":"post","url":"/post/iiot-reference-architecture/"},{"content":"The rapid growth of the Internet of Things (IoT) and miniature wearable biosensors has generated new opportunities for personalized eHealth and mHealth services. We present a case study of an intelligent cap that can measure the amount of water in the bottle and monitor the opening and closing of the bottle cap. This paper presents a Proof of Concept implementation for such a connected smart bottle that sends measurements to the IoT Platform.\nExponential growth of the number of physical objects connected to the Internet, also known as the Internet of Things or IoT, is expected to reach 50 billion devices by 2020 [1]. New applications include smart homes, transportation, healthcare, and industrial automation. Smart objects in our environment can provide context awareness that is often missing during monitoring of activities of daily living [2], [3].\nOne of the most important factors for health and wellbeing is proper hydration. In our busy lives, it is hard to remember to drink enough water, and most of the time we forget to do so, whether we are at home, in the office or on the go. Around 75% of Americans are reported to be chronically dehydrated.\nWater makes up 60% of our body, 75% of our brain, and 83% of our blood. An intelligent hydration monitoring and management platform can provide automatic, accurate, and reliable monitoring of fluid consumption and provide configurable advice for optimum personalized hydration management. A smart water bottle can satisfy the needs of a variety of groups, including athletes that want to enhance performance, dieters that want to achieve their weight goals, as well as the elderly in group homes or living alone, who often suffer from dehydration. While most users who seek to improve their wellness have insufficient water intake, some medical applications require limiting water intake. Typical examples include kidney disease and congestive heart failure (CHF) patients that need to comply with recommended water intake protocols and limit the total amount of water taken during the day while still taking water regularly throughout the day. An integrated hydration management platform enables users to receive alerts and reminders of when to drink, set goals for how much to drink, show current and previous consumption levels, etc. The first intelligent water bottle on the market was the Hydra Coach. The bottle features a display that presents hydration information, but no wireless connectivity [4]. The new generation of bottles, such as HydrateMe and Trago, feature Bluetooth wireless interfaces and custom smartphone and smartwatch applications [5].\nA smart water bottle measures the amount of water a person is drinking through sensor technology, usually located in the cap of the bottle. The features of a smart water bottle can also include temperature sensors, freshness timers, drinking reminders, and daily hydration statistics. These additional features can help the drinker know when to fill up their bottle, how often they are drinking and how much more water they should include in their daily goal.\nSolution Concept A smart bottle is integrated through an ultrasonic sensor, a push button, a NodeMCU and a LiPo battery, as presented in Fig. 1. The controller (NodeMCU) communicates with the cloud using its ESP8266 12-E wireless interface. The controller processes data from the smart bottle and sends the processed information to the ThingSpeak cloud.\nProgramming the NodeMCU is the same as programming an Arduino. Now let’s discuss the implementation of the system in steps:\n The ultrasonic sensor and push button are integrated with the NodeMCU digital I/O pins. The ultrasonic sensor sends the distance of the water level from the cap of the bottle. Based on this distance, the controller calculates the percentage of water remaining and sends it to the ThingSpeak cloud using an HTTP POST request (see the sketch after this list). Similarly, the push button detects press events and sends them to ThingSpeak. Working Demo here 
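The firmware itself runs as an Arduino-style sketch on the NodeMCU; purely to illustrate the two calculations involved, here is a hedged Java sketch of the level computation and the ThingSpeak update call. The bottle depth, the sample reading and the write API key are assumptions.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class BottleLevel {
    // Assumed distance from the cap sensor to the bottom of an empty bottle, in cm.
    static final double BOTTLE_DEPTH_CM = 20.0;

    // Distance to the water surface -> percentage of water remaining.
    static double levelPercent(double distanceCm) {
        double pct = (BOTTLE_DEPTH_CM - distanceCm) / BOTTLE_DEPTH_CM * 100.0;
        return Math.max(0, Math.min(100, pct)); // clamp out sensor noise
    }

    // ThingSpeak update call: api_key and field1 sent as a form-encoded POST body.
    static void postToThingSpeak(String apiKey, double percent) throws Exception {
        URL url = new URL("https://api.thingspeak.com/update");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(("api_key=" + apiKey + "&field1=" + percent).getBytes("UTF-8"));
        }
        System.out.println("ThingSpeak response code: " + conn.getResponseCode());
    }

    public static void main(String[] args) throws Exception {
        double measuredDistanceCm = 8.5; // a pretend ultrasonic reading
        postToThingSpeak("YOUR_WRITE_API_KEY", levelPercent(measuredDistanceCm));
    }
}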
Constraints and Challenges Currently, the Smart Bottle system has the following limitations, which can be worked upon in the future:\n The ultrasonic sensor is not waterproof, so the bottle can’t be tilted in the closed condition. Conclusion The Smart Bottle system displays a lot of potential and can be extended to implement advanced features and functionalities. The advanced feature list includes, but is not limited to, the following:\n A waterproof ultrasonic sensor to overcome our constraint; the same can also be implemented using an HX711 weight sensor with a load cell. Instead of using the push button, we can use a Hall-effect sensor to detect the opening of the bottle cap. References [1] A. Al-Fuqaha, M. Guizani, M. Mohammadi, M. Aledhari, and M. Ayyash, “Internet of Things: A Survey on Enabling Technologies, Protocols, and Applications,” IEEE Commun. Surv. Tutor., vol. 17, no. 4, pp. 2347–2376, Fourthquarter 2015. [2] C. Perera, A. Zaslavsky, P. Christen, and D. Georgakopoulos, “Context Aware Computing for The Internet of Things: A Survey,” IEEE Commun. Surv. Tutor., vol. 16, no. 1, pp. 414–454, First 2014. [3] L. Hood, J. C. Lovejoy, and N. D. Price, “Integrating big data and actionable health coaching to optimize wellness,” BMC Med., vol. 13, p. 4, 2015. [4] Emil Jovanov, Vindhya R Nallathimmareddygari, Jonathan E. Pryor, “SmartStuff: A Case Study of a Smart Water Bottle,” 978-1-4577-0220-4/16, 2016. [5] “The future of fitness: Custom Gatorade, smart water bottles, and a patch that analyzes your sweat | Macworld.” http://www.macworld.com/article/3044555/hardware/the-future-of-fitness-custom-gatorade-smart-waterbottles-and-a-patch-that-analyzes-your-sweat.html I worked on this concept with my colleague Ricktam K at the IoT Center of Excellence of Nagarro\n","id":175,"tag":"smart bottle ; IOT ; smart cap ; concept ; enterprise ; Node MCU ; Ultrasonic ; thingspeak ; nagarro ; technology","title":"Smart bottle IOT concept","type":"post","url":"/post/building-smart-bottle-concept/"},{"content":"On a usual day at the office, an employee who wishes to book a meeting room first has to open their Outlook account, check the calendar for availability and then send a booking invitation. If they go directly to a meeting room, they have no way of knowing whether it is booked without checking the calendar.\nSmart Meeting Room is a step forward in automating the availability check and booking procedure of a meeting room.\nThe main idea behind this app is to avoid the hassle of opening Outlook and checking for room availability before booking the room the user is already in. The user just has to give one voice command to check the availability of the room and another to book it. The application recognizes the user’s command and gives a speech output. In order to confirm the identity of the person, a camera is initiated, capturing the user’s image and recognizing it using the face detection API.\nTo fully automate a meeting room, an AC control module and a TV control module have been integrated into the app, which allow the user to control the room’s air conditioning unit and TV unit through simple voice commands.\nSolution Concept The application targets controlling the meeting rooms of the company using a simple natural interface. The solution is divided into four modules. Checking Room Availability - A user can directly go to a meeting room and ask for its availability using simple voice commands. The application will recognize the user’s command and give a speech output. Room Booking - If the room is available, we initiate a camera, capture the user’s image and recognize it using the face detection API in order to confirm the identity of the person. Once the user is recognised, a meeting request is sent on behalf of the user and the room gets booked. Controlling the AC - The air conditioning unit of the room is controlled using simple voice commands. The application sends the user’s commands to a device attached to the AC, which then transmits appropriate infrared signals. Controlling the TV - Controlling the television unit of the room using voice commands works much the same way as the AC. Benefits The Smart Meeting Room project eases the way in which an employee can interact with a meeting room.\n Using the Speech Recognition and Face Recognition APIs, it provides a very rich and natural interaction between the user and the application. It provides a single interface through which a user can control all aspects of a meeting room. Whether it is checking room availability and booking, or controlling the room’s AC and TV, everything can be done using simple voice commands. It mitigates the need for finding and using multiple remote controls for the AC and the TV. A simple voice command does it all. It removes the hassle of manually checking the calendar in Outlook and then creating a booking. The overall command flow is sketched below. 
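The production system described below was built as a Universal Windows app; the following compact Java sketch is only a conceptual illustration of the four-module flow, with the speech, calendar and face services stubbed behind hypothetical interfaces of my own naming.

import java.util.Optional;

interface SpeechService { String listen(); void say(String text); }
interface CalendarService { boolean isRoomFree(String room); void book(String room, String user); }
interface FaceService { Optional<String> identifyUserFromCamera(); }

// Illustrative flow only: listen for a command, check availability,
// confirm identity via the camera, then book on the user's behalf.
public class MeetingRoomAssistant {
    private final SpeechService speech;
    private final CalendarService calendar;
    private final FaceService faces;
    private final String room;

    public MeetingRoomAssistant(SpeechService s, CalendarService c, FaceService f, String room) {
        this.speech = s; this.calendar = c; this.faces = f; this.room = room;
    }

    public void handleCommand() {
        String command = speech.listen();
        if (command.contains("available")) {
            speech.say(calendar.isRoomFree(room) ? "The room is free." : "The room is booked.");
        } else if (command.contains("book")) {
            Optional<String> user = faces.identifyUserFromCamera();
            if (user.isPresent() && calendar.isRoomFree(room)) {
                calendar.book(room, user.get());
                speech.say("Booked for " + user.get());
            } else {
                speech.say("Sorry, I could not book the room.");
            }
        }
    }
}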
Design and Architecture The architecture of the Smart Meeting Room system was designed to provide a stable foundation, both for current functionality and for future requirements.\n Universal Windows Application: The Universal Windows Platform (UWP) provides a common app platform available on every device that runs Windows 10. The universal Windows app written for the Smart Meeting Room runs on a Raspberry Pi 3 installed with the Windows 10 IoT Core operating system. UWP provides a large set of APIs to work with, which makes it easy to integrate richer experiences with your devices, such as natural user interfaces, searching, online storage and cloud-based services.\n User Interface: A user interface created using XAML allows users to interact with the system directly. Media Control: The Media Control component in universal apps enables us to use the webcam attached to our system and show a live camera feed to capture the image. Bluetooth Low Energy (BLE): The power efficiency of Bluetooth with low energy functionality makes it perfect for devices that run for long periods on limited power sources. We use the Bluetooth GATT profile to connect with the Arduino board in order to send the commands for the AC and the TV operations. Microsoft Speech Platform: The Microsoft Speech Platform provides a comprehensive Speech Platform Runtime for managing voice-enabled applications. It provides the ability to recognize spoken words (speech recognition) and to generate synthesized speech (text-to-speech or TTS) to enhance users\u0026rsquo; interaction.\n Microsoft Exchange Web Service: Microsoft’s Exchange Web Service allows us to connect to the Office 365 account, access the public folders and calendars, and retrieve the free/busy status and availability of the meeting rooms.\n Microsoft Face API: Project Oxford’s Face API for Windows universal apps allows programmatic access to the cognitive service subscription. We use the API to detect faces in the captured picture, identify the faces and retrieve their stored information.\n Microsoft Office 365 Web API: The Office 365 Managed APIs provide a managed interface for developing applications that allow programmatic access to public folders and calendars.\n Microsoft Project Oxford Face Detection API: The Microsoft Cognitive Services Face API detects one or more human faces in an image, organizes people into groups according to visual similarity, and identifies previously tagged people in images.\n Arduino Application: The application to control the AC and the TV runs on a single-board microcontroller such as an Arduino Uno.\n Smart Meeting Room Web Application: A separate web application running on the Microsoft Azure Platform is made for users to register themselves.\n Constraints and Challenges The Arduino application is written for a specific model of air conditioner and television; the Smart Meeting Room application will not work with just any AC or TV. We had to manually retrieve the IR commands to be sent, using an IR receiver sensor. Every permutation of the AC operation was generated manually, which was very tedious work. While writing the Arduino application, we constantly had to keep in mind the size of the code. Due to limited memory, the number of Serial.write() calls had to be kept in check, which made logging and debugging a challenge. We had to use the Exchange Web Service directly, writing SOAP requests and parsing SOAP responses manually. 
This was due to the fact that the EWS Managed API generated a popup which cannot be handled on the Raspberry Pi. We implemented the speech interpretation at low confidence. This might create problems in the proper interpretation of commands due to different voice accents. We generated the Bluetooth GATT profile on our own, with its services and characteristics exposed; no particular standard was followed. For the IR emitter sensor to work properly, it had to be positioned in a direct line of sight of both the AC and the TV. This became a challenge depending on how and where the AC and the TV were installed with respect to each other. Conclusion The Smart Meeting Room application is a good experiment; it can be extended to implement more features in the future, such as\n Implementing meeting check-in functionality on a smart display Controlling the room condition of a specific meeting room using a web-based dashboard Controlling the AC and TV of a specific meeting room remotely using a web dashboard Turning the air conditioner on/off by sensing the presence of a person References Microsoft Cognitive Services Face API Microsoft Azure Blob Storage Universal Windows Platform Speech Recognition https://msdn.microsoft.com/en-us/windows/uwp/input-and-devices/speech-interactions https://blogs.msdn.microsoft.com/cdndevs/2015/05/25/universal-windows-platform-speech-recognition-part-2/ Universal Windows Platform Bluetooth GATT https://developer.microsoft.com/en-us/windows/iot/samples/blegatt2 Microsoft Exchange Web Service Office 365 OAuth https://github.com/OfficeDev/O365-WebApp-OAuthController This concept was supported by my colleagues Ricktam K and Harsh Godha at the IoT Center of Excellence of Nagarro\n","id":176,"tag":"smart meeting room ; IOT ; azure ; cloud ; concept ; enterprise ; Arduino ; Raspberry PI ; BLE ; Microsoft ; technology","title":"Building a smart meeting room concept","type":"post","url":"/post/building-smart-meeting-room-concept/"},{"content":"Android Things, an Android-based embedded operating system, is the new “it” thing in the Internet of Things (IoT) space. Developed by Google under the codename “Brillo” (a.k.a. Project Brillo), it takes the usual Android development stack (Android Studio, the official SDK, and Google Play Services) and applies it to the IoT.\nGoogle formally took the veil off Brillo at its I/O 2015 conference. It is aimed at low-power and memory-constrained IoT devices, allowing developers to build a smart device using Android APIs and Google services. Google released the first developer preview version in December 2016, with an aim to roll out updates every 6-8 weeks.\nAndroid Things provides a turnkey solution to start building on development boards like the Intel Edison, Intel Joule, NXP Pico, NXP Argon i.MX6UL, and Raspberry Pi 3. At its core, it supports a System-on-Module (SoM) architecture, where a core computing module can be initially used with development boards and then easily scaled to large production runs with custom designs, while continuing to use the same Board Support Package (BSP) from Google.\nIt provides support for Bluetooth Low Energy and Wi-Fi. Like the OTA updates for Android phones, developers can push Google-provided OS updates and custom app updates using the same OTA infrastructure that Google uses for its own products and services. It takes advantage of regular best-in-class security updates by building on top of the Android OS. 
Support for Weave, Google’s IoT communications platform, is on the roadmap and will come in a later developer preview.\nAndroid Things makes developing connected embedded devices easy by providing the same Android development tools, best-in-class Android framework, and Google APIs that make developers successful on mobile. It extends the core Android framework with additional APIs provided by the Things Support Library.\nDifference between Android and Android Things Development Major differences between core Android development and Android Things development:\n The Android Things platform is streamlined for single-application use. It doesn\u0026rsquo;t include the standard suite of system apps and content providers. The application uploaded by the developer is automatically launched on startup.\n Although Android Things supports graphical user interfaces using the same UI toolkit available to traditional Android applications, it doesn’t require a display. On devices where a graphical display is not present, activities are still a primary component of an Android Things app. All input events are delivered to the foreground activity.\n Android Things expects one application to expose a \u0026ldquo;home activity\u0026rdquo; in its manifest as the main entry point for the system to automatically launch on boot. This activity must contain an intent filter that includes both CATEGORY_DEFAULT and IOT_LAUNCHER.\n Android Things supports a subset of the Google APIs for Android. As a general rule, APIs that require user input or authentication credentials aren\u0026rsquo;t available to apps.\n Requesting permissions at runtime is not supported, because embedded devices aren\u0026rsquo;t guaranteed to have a UI to accept the runtime dialog. All normal and dangerous permissions declared in the app\u0026rsquo;s manifest are granted at install time.\n Since there is no system-wide status bar in Android Things, notifications are not supported.\n Let’s start with Android Things Android Things provides turnkey hardware solutions for certified development boards. As of now, it provides support for the Intel Edison, Intel Joule, Raspberry Pi 3, and NXP Pico i.MX6UL.\nFlash the image Binary system image files are available on the official Android Things page. Download the image file and install it on the development board of your choice from the approved list of developer kits. Prerequisites and install instructions for each of the development boards are provided on the official website. Once the Android Things OS has been installed on the device, it can be accessed like any other Android device using the Android Debug Bridge (adb) tool.\n$ adb devices List of devices attached Edion61aa device AP0C52890C device Boot your device. When the boot process finishes, the Android Things Launcher appears on the display (if your device is attached to one); this means Android Things is running successfully. The Android Things Launcher shows information about the board, including its IP address. After flashing your board, it is strongly recommended to connect it to the internet. 
This allows your device to deliver crash reports and receive updates.\nTo connect your board to Wi-Fi using adb:\n Send an intent to the Wi-Fi service that includes the SSID and passcode of your local network:\n$ adb shell am startservice \\ -n com.google.wifisetup/.WifiSetupService \\ -a WifiSetupService.Connect \\ -e ssid \u0026lt;Network_SSID\u0026gt; \\ -e passphrase \u0026lt;Network_Passcode\u0026gt; Note: You can remove the passphrase argument if your network doesn\u0026rsquo;t require a passcode.\n Verify that the connection was successful through logcat:\n$ adb logcat -d | grep Wifi ... V WifiWatcher: Network state changed to CONNECTED V WifiWatcher: SSID changed: ... I WifiConfigurator: Successfully connected to ... Test that you can access a remote IP address:\n$ adb shell ping 8.8.8.8 Building Your First Android Things App Things apps use the same structure as those designed for phones and tablets. This similarity means you can modify your existing apps to also run on embedded things, or create new apps based on what you already know about building apps for Android. Let’s start building our very first app for Android Things. Use a development board of your choice with the Android Things OS image installed and the device booted up.\nPrerequisites Before building apps for Things, these requirements must be fulfilled:\n Download Android Studio Update the SDK tools to version 24 or higher Update the SDK with Android 7.0 (API 24) or higher Create or update the app project to target Android 7.0 (API level 24) or higher in order to access new APIs for Things Add the library Android Things devices expose APIs through support libraries that are not part of the Android SDK.\n Add the dependency artifact to the app-level build.gradle file:\ndependencies { ... provided \u0026#39;com.google.android.things:androidthings:0.2-devpreview\u0026#39; } Add the Things shared library entry to the app\u0026rsquo;s manifest file:\n\u0026lt;application ...\u0026gt; \u0026lt;uses-library android:name=\u0026#34;com.google.android.things\u0026#34;/\u0026gt; ... \u0026lt;/application\u0026gt; Declare a home activity An application intending to run on an embedded device must declare an activity in its manifest as the main entry point after the device boots. Apply an intent filter containing the following attributes:\n\u0026lt;application android:label=\u0026#34;@string/app_name\u0026#34;\u0026gt; \u0026lt;uses-library android:name=\u0026#34;com.google.android.things\u0026#34;/\u0026gt; \u0026lt;activity android:name=\u0026#34;.MainActivity\u0026#34;\u0026gt; \u0026lt;!-- Launch activity as default from Android Studio --\u0026gt; \u0026lt;intent-filter\u0026gt; \u0026lt;action android:name=\u0026#34;android.intent.action.MAIN\u0026#34;/\u0026gt; \u0026lt;category android:name=\u0026#34;android.intent.category.LAUNCHER\u0026#34;/\u0026gt; \u0026lt;/intent-filter\u0026gt; \u0026lt;!-- Launch activity automatically on boot --\u0026gt; \u0026lt;intent-filter\u0026gt; \u0026lt;action android:name=\u0026#34;android.intent.action.MAIN\u0026#34;/\u0026gt; \u0026lt;category android:name=\u0026#34;android.intent.category.IOT_LAUNCHER\u0026#34;/\u0026gt; \u0026lt;category android:name=\u0026#34;android.intent.category.DEFAULT\u0026#34;/\u0026gt; \u0026lt;/intent-filter\u0026gt; \u0026lt;/activity\u0026gt; \u0026lt;/application\u0026gt; Connect the Hardware We will develop a push button application. 
A button sensor will be connected to the development board, which will generate an event every time it is pushed.\n Connect the GND pin of the button sensor to the GND of the device Connect the VCC of the button to the 3.3V pin of the device Connect the SIG pin of the button to any GPIO input pin. Interact with Peripherals The Android Things Peripheral I/O APIs provide a way to discover and communicate with General Purpose Input Output (GPIO) ports.\n The system service responsible for managing peripheral connections is PeripheralManagerService Use this service to list the available ports for all known peripheral types. public class HomeActivity extends Activity { private static final String TAG = \u0026#34;HomeActivity\u0026#34;; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); PeripheralManagerService service = new PeripheralManagerService(); Log.d(TAG, \u0026#34;Available GPIO: \u0026#34; + service.getGpioList()); } } To receive events when a button connected to GPIO is pressed:\n Use PeripheralManagerService to open a connection with the GPIO port wired to the button. Configure the port as input with DIRECTION_IN. Configure which state transitions will generate callback events with setEdgeTriggerType(). Register a GpioCallback to receive edge trigger events. Return true within onGpioEdge() to continue receiving future edge trigger events. When the application no longer needs the GPIO connection, close the Gpio resource. public class HomeActivity extends Activity { private static final String TAG = HomeActivity.class.getSimpleName(); private static final String GPIO_PIN_NAME = \u0026#34;BCM23\u0026#34;; private Gpio mButtonGpio; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); PeripheralManagerService service = new PeripheralManagerService(); Log.d(TAG, \u0026#34;Available GPIO: \u0026#34; + service.getGpioList()); try { // Step 1. Create GPIO connection. mButtonGpio = service.openGpio(GPIO_PIN_NAME); // Step 2. Configure as an input. mButtonGpio.setDirection(Gpio.DIRECTION_IN); // Step 3. Enable edge trigger events. mButtonGpio.setEdgeTriggerType(Gpio.EDGE_FALLING); // Step 4. Register an event callback. mButtonGpio.registerGpioCallback(mCallback); } catch (IOException e) { Log.e(TAG, \u0026#34;Error on PeripheralIO API\u0026#34;, e); } } // Step 4. The event callback. private GpioCallback mCallback = new GpioCallback() { @Override public boolean onGpioEdge(Gpio gpio) { Log.i(TAG, \u0026#34;GPIO changed, button pressed\u0026#34;); // Step 5. Return true to keep the callback active. return true; } }; @Override protected void onDestroy() { super.onDestroy(); // Step 6. Close the resource. if (mButtonGpio != null) { mButtonGpio.unregisterGpioCallback(mCallback); try { mButtonGpio.close(); } catch (IOException e) { Log.e(TAG, \u0026#34;Error on PeripheralIO API\u0026#34;, e); } } } } Deploy the app It’s time to deploy the app to the device. Connect to the device using adb. Deploy the app to the device by clicking the Run button in Android Studio and selecting the device. Press the button on the button sensor. The result can be seen in the logcat tab in Android Studio. This app can be extended to include an LED. Connect the LED to an output GPIO pin. 
Whenever the button press event is received, call the PeripheralManagerService to toggle the state of the LED.\npublic class BlinkActivity extends Activity { private static final String TAG = \u0026#34;BlinkActivity\u0026#34;; private static final int INTERVAL_BETWEEN_BLINKS_MS = 1000; private static final String GPIO_PIN_NAME = ...; private Handler mHandler = new Handler(); private Gpio mLedGpio; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); // Step 1. Create GPIO connection. PeripheralManagerService service = new PeripheralManagerService(); try { mLedGpio = service.openGpio(GPIO_PIN_NAME); // Step 2. Configure as an output. mLedGpio.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW); // Step 3. Start blinking using a handler. mHandler.post(mBlinkRunnable); } catch (IOException e) { Log.e(TAG, \u0026#34;Error on PeripheralIO API\u0026#34;, e); } } @Override protected void onDestroy() { super.onDestroy(); // Step 5. Remove handler events on close. mHandler.removeCallbacks(mBlinkRunnable); // Step 6. Close the resource. if (mLedGpio != null) { try { mLedGpio.close(); } catch (IOException e) { Log.e(TAG, \u0026#34;Error on PeripheralIO API\u0026#34;, e); } } } private Runnable mBlinkRunnable = new Runnable() { @Override public void run() { // Exit if the GPIO is already closed. if (mLedGpio == null) { return; } try { // Step 4. Toggle the LED state and schedule another event after the delay. mLedGpio.setValue(!mLedGpio.getValue()); mHandler.postDelayed(mBlinkRunnable, INTERVAL_BETWEEN_BLINKS_MS); } catch (IOException e) { Log.e(TAG, \u0026#34;Error on PeripheralIO API\u0026#34;, e); } } }; } Conclusion This completes the hello world and a first try of the Raspberry Pi with Android Things. We can utilize standard Android concepts and still enjoy the IoT.\n One thing to remember is that, at the time of writing this article, Android Things is in the third iteration of its developer preview. This means that, because it\u0026rsquo;s an early release for testing and feedback, some features are currently unavailable or may be buggy as the platform is tested and built out. Currently, support for simple analog sensors is not included in the Android Things general-purpose input/output (GPIO) classes; though there is a technical reasoning for this, SPI and I2C can still be used. As this platform is still new, there are not many drivers for sensors or other hardware, so developers using the platform will need to either create their own drivers or make do with what is currently available or open-sourced by other developers in the Android Things community. 
References [1] https://developer.android.com/things/index.html [2] https://en.wikipedia.org/wiki/Android_Things [3] http://www.androidauthority.com/project-brillo-google-io-612159/ [4] http://www.theverge.com/2015/5/28/8677119/google-project-brillo-iot-google-io-2015 [5] https://arstechnica.com/gadgets/2016/12/google-brillo-rebrands-as-android-things-googles-internet-of-things-os/ [6] https://blog.mindorks.com/google-released-the-developer-preview-of-android-things-iot-75cb49b9ce24#.6nxlbpte7 [7] https://developer.android.com/things/sdk/pio/gpio.html [8] https://developer.android.com/things/sdk/drivers/index.html [9] https://www.novoda.com/blog/writing-your-first-android-things-driver-p1/ I explored this while working at the IoT Center of Excellence of Nagarro\n","id":177,"tag":"androidthings ; IOT ; android ; os ; tutorial ; nagarro ; raspberry pi ; technology","title":"Exploring AndroidThings IoT","type":"post","url":"/post/exploring-android-things/"},{"content":" Your house will remain clean for a longer duration if you strictly follow certain rules, such as putting things in the right place after use, clearing any mess just after making it, repairing/upgrading things on time, daily dusting, vacuum cleaning, etc. We may avoid a massive cleanup if we apply a series of small efforts daily.\n The same holds while writing code: your code will remain maintainable for a longer duration if you strictly follow certain rules, such as following the coding standards and guidelines, cleaning the code just after writing it, fixing/refactoring on time, and periodic design review and refactoring. We may avoid a complete rewrite if we apply a series of small efforts daily.\n This article was originally published on My LinkedIn Profile\n","id":178,"tag":"clean-code ; practice ; maintainability ; technology","title":"Writing maintainable code is like house cleaning","type":"post","url":"/post/clean-code/"},{"content":"Everybody in the software industry is talking about IoT in terms of billions of devices, their growth in coming years, changing business models, and adoption of IoT in the market. Connecting things has become so prevalent nowadays that every software company wants to expand in IoT by building IoT-centric products and services.\nThere are many industrial applications which connect to sensors and actuators, and can be controlled remotely. They have been around for a long time.\nWhy Is There so Much Hype About IoT? Yes, connected things were already there, but with time, the technology used in connecting the things has been enhanced. IoT is about more than just connecting ordinary things and making them accessible remotely. The \u0026ldquo;Connected Things\u0026rdquo; become the \u0026ldquo;Internet of Things\u0026rdquo; when they are connected to people and their processes. Connected things produce huge amounts of data which can be monitored and visualized remotely, but in the IoT era, this data is analyzed to generate useful insights, build patterns, predict situations, prescribe solutions, and instruct ordinary things to make them self-optimizing.\nThe IoT is a package of:\n The things with the capability to sense and act. Communication protocols to connect the things. A platform to collect and store the data from the things, then analyze the data and instruct the connected things to perform optimally. A network to connect all the things with the platform. Interactive user interface to visualize the data and monitor/control/track the things. 
Supporting Forces The \u0026ldquo;Connected Things\u0026rdquo; of the past were converted into the \u0026ldquo;Internet of Things\u0026rdquo; due to the following supporting forces:\n Reliance on software: Software usage is increasing for making hardware better. Software plays a critical role in a lot of mission-critical applications. Many hardware vendors are building embedded software pieces which can be upgraded/enriched later, without really changing the hardware. Technology commoditization: Technology is reaching the hands of common people. We are now living with technology; our basic devices such as phones, watches, TVs, washing machines, shoes, etc. are getting smarter every day. Technology is becoming more affordable. Hardware is getting cheaper due to mass production and process enhancement. Connection everywhere: Network speed and coverage are increasing. The world is getting closer every day by connecting to the Internet. The Internet is getting cheaper and more accessible. Communication overhead is also getting reduced by using better protocols, thus data access speed is also increasing. More data and more science: Necessity is the mother of invention. The focus on data analytics and science is increasing as more data is getting generated. Data storage capacity is also increasing with time. Rapid development: Companies are investing in frameworks and platforms to build, deploy, and produce software faster. Cloud services are getting cheaper and more easily accessible. Increase revenue by reducing cost: Enterprises are targeting increased revenue and reduced cost through predictive maintenance and the use of self-optimizing machines. Stay in the race: Based on the predictions of IoT growth by 2020 or so, everybody wants to stay in the race. Obstacles to IoT Growth Lack of standards: Current IoT development relies mostly on consensus-based standards. Sets of companies come together and define their own standards, and this leads to interoperability issues. Security tradeoff: IoT devices are smaller and less capable, and may not perform well when high security standards are required. Investment: A lot of companies are talking about IoT, but very few are really investing. Most companies play it safe by waiting and watching. Huge ecosystem: IoT’s technical ecosystem is huge. It involves everything, including hardware, network, protocol, software, data science, data storage, mobility, IoT platform, cloud infrastructure, maintenance and integration services. It is really difficult for companies to build such a huge ecosystem alone. Companies need to partner with specialists and share their customers and revenue. Integration with existing infrastructure: Integrating the IoT tech stack with traditional communication infrastructure is challenging, for example connecting over HTTPS, SSL, SOAP, RDBMS, or SFTP to CRM, ERP, or ECM portals. Conclusion IoT is not new; it has been there for a long time. The \u0026ldquo;Connected Things/M2M\u0026quot; of the past is today’s IoT, with a new set of communication protocols, improved hardware, better data collection, and analytics capability with nice visualization to monitor and interact with IoT devices. 
There are a lot of forces to keep IoT growing in the future, but there are some obstacles too, which need to be overcome by the IoT solution providers.\nIt was originally published at Dzone and LinkedIn \n","id":179,"tag":"guide ; iot ; fundamentals ; connected things ; forces ; technology","title":"The Driving Forces Behind IoT","type":"post","url":"/post/the-driving-forces-iot/"},{"content":"As we understand, an \u0026ldquo;IoT Platform\u0026rdquo; is an essential building block of the IoT Ecosystem. A platform essentially decouples the business application from the low-level details of the technology stack and required services. Thus, it makes more sense to go for an off-the-shelf platform that provides all the relevant features and required flexibility, instead of developing the whole IoT stack from scratch. Selection of an IoT Platform is key to developing a scalable and robust IoT solution. The choice of IoT Platform is also important in reducing the solution’s time to market-readiness.\nChallenges Since IoT is not a single technology but a complex mix of varying technologies, its relevance to business verticals is not uniform. Choosing a platform that suits specific needs, or one that suits most generic IoT needs, is a cumbersome task. A simple web search on Google for IoT Platforms results in more than 200 IoT Platforms, ranging from market leaders like Microsoft Azure and Amazon Web Services to open-source platforms like SiteWhere \u0026amp; Kaa and established ones like PTC Thingworx and Bosch IoT Suite. These platforms vary not only in the features and services they provide, but also in the terminology they use for similar features.\nTo tackle these challenges of selecting an IoT Platform, we have devised a strategic approach that compares and analyzes multiple platforms by mapping them against a Common Framework.\nThe Common Framework Below image shows an IoT reference architecture and depicts a Common Framework for IoT Platform. The IoT reference architecture shows three major components:\n IoT Nodes: Devices composed of sensors and / or actuators on one part, and a communication module on another. Data sensed by these devices is either utilized in computation at the edge gateways or exchanged with the IoT Platform for further analysis. IoT Platform: Facilitates communication with devices, data pipelining, device management, analytics and application development. Business and external systems: Such systems leverage IoT in business functions like SAP ERP etc. and visualization systems like Augmented Reality, Virtual Reality, Mobile, Web Browsers etc. The Common Framework defines the features of an IoT Platform under the following categories, each of which is further divided into factors:\n Connectivity: Connectivity enables easy and reliable integration of devices with the platform by providing standard protocols. Connectivity can be evaluated on the following factors:\n Data Ingestion - How reliable and scalable is the consumption of device data in the platform? Integration SDKs - How good is the support for multiple languages, and how does the platform give more control to application developers? Key Protocols - Does the platform support key protocols and ensure decoupling of the business applications from the underlying communication? Device Management: Provides an easy and reliable way for operators to manage or monitor a collection of devices. The following factors impact Device Management:\n Data Identity \u0026amp; Registry - The platform should have a mechanism to maintain a device registry to store the device identity and security credentials. 
Device State Storage - Easy asynchronous update of a device’s latest state \u0026amp; capabilities in a device twin or shadow. Device Security - Authenticate connected devices using security credentials. Remote Software Upgrade - Remote update to a collection of IoT Nodes or an individual one. Analytics \u0026amp; Visualization: Analysis of device data to gain insights for business profitability, visualized in a user-friendly way. The following analytics features are expected in the platform:\n Batch Analytics - An efficient way of processing high volumes of historical data. Stream Processing - Analyze events and data on the fly. Predictive/Preventive Maintenance - Deep machine learning, which leads to predictive or preventive analysis. Visualization - Displaying results of analysis (historical or real-time) on multiple visualization means. Data Routing: Ease of defining data routing rules to execute complex use cases that consume device and sensor data. Rule engine capabilities are factored as:\n GUI Based Rules - Ease for non-technical users to create or modify rules. Nested Conditions - Support for complex use cases. Routing Control - Control over the routing of data to other services. Rule Repository - Version and maintain multiple rules. Storage: Valuable data generated by the devices shall be securely persisted and backed up regularly to prevent loss from any disaster. The following are important decision factors for storage and backup:\n Data Store Types - Multiple types of data stores provide flexibility in designing the application. Backup \u0026amp; Recovery - The platform should provide means to back up the data and perform recovery actions if needed. Disaster Recovery - Disaster recovery requires that data redundancy is maintained at multiple sites. Security: Security is the biggest concern in any IoT solution and needs to be taken care of at all layers. The following highlights the key security considerations in a platform:\n IT Compliance - Deploying services in a hybrid cloud behind the organization firewall \u0026amp; within the organization’s IT compliance. Identity \u0026amp; Access Mgmt - Ensure only the right individuals access the right resources at the right times for the right reasons. Key Management - Encrypt keys and small secrets like passwords using keys stored in hardware security modules. MARS: Maintainability, Availability, Reliability and Scalability; these are important aspects for cost optimization and to prevent business losses from downtime. Factors:\n Administration - Service and application logging and monitoring. Elasticity - On-demand scale-up and scale-down. Enterprise Messaging - Prevent leakage or queue overflows when data flows through services. SLA - Prevent downtime losses. Enterprise Integration, API Gateway, Application Development: The platform’s integration capabilities are critical to leverage IoT in existing business systems. Support for application development and the concept of an API Gateway allow easy integration with external visualization systems like Google Glass, HoloLens, Mobile \u0026amp; Web.\n Conclusion The Common Framework makes it easier to evaluate multiple IoT Platforms. A simple process of evaluation starts with breaking down an IoT Platform’s features and then mapping these features to the Common Framework (categories and factors within categories). After doing a similar mapping for all platforms, an apples-to-apples comparison of features across different platforms is possible. One can remove or add categories or factors as per their relevance to the evaluation requirements; a small sketch of the resulting scoring step follows. 
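As an illustration of the scoring step only (the category names, weights and scores below are assumptions, not recommendations), the per-platform evaluation can be as simple as a weighted sum over the framework categories:

import java.util.Map;

public class PlatformScorer {
    // Assumed weights per framework category; they sum to 1.0 for a 0-5 scale.
    static final Map<String, Double> WEIGHTS = Map.of(
            "Connectivity", 0.25, "Device Management", 0.20,
            "Analytics", 0.20, "Security", 0.20, "MARS", 0.15);

    // Each platform's per-category scores (0-5) come from the feature mapping.
    static double score(Map<String, Double> categoryScores) {
        return WEIGHTS.entrySet().stream()
                .mapToDouble(e -> e.getValue() * categoryScores.getOrDefault(e.getKey(), 0.0))
                .sum();
    }

    public static void main(String[] args) {
        Map<String, Double> platformA = Map.of(
                "Connectivity", 4.0, "Device Management", 3.5,
                "Analytics", 4.5, "Security", 4.0, "MARS", 3.0);
        System.out.printf("Platform A weighted score: %.2f / 5%n", score(platformA));
    }
}

Repeating the same computation for every candidate platform yields directly comparable numbers, while the category weights encode what matters most for the specific solution.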
Read our next white paper on IoT Platform evaluation in detail to understand how to score the features and complete the evaluation exercise to find the best fit for your IoT Platform needs.\nContributors: This paper was also supported by Umang Garg and Manish Asija\n","id":180,"tag":"framework ; IOT ; platform ; comparison ; analysis ; cloud ; security ; technology","title":"Framework for choosing an IoT Platform","type":"post","url":"/post/choosing_iot_platform/"},{"content":"SIGFOX provides a communication solution dedicated to the Internet of Things.\nDedicated to the IoT means:\n Simplicity: No configuration, no pairing, no signaling Autonomy: Very low energy consumption, allowing years of autonomy on battery without maintenance Small messages: No large assets or multimedia, only small messages It is an LPWA (Low-Power Wide-Area) network, currently operating in 20 countries in Europe, the Americas and Asia/Pacific. Communications over the SIGFOX network are bi-directional: uplink from the device \u0026amp; downlink to the device.\nCharacteristics of the SigFox Network SigFox sets up antennas on towers (like a cell phone company), and receives data transmissions from devices like parking sensors or water meters. These transmissions use frequencies that are unlicensed. SigFox deploys Low-Power Wide-Area Networks (LPWAN) that work in concert with hardware that manufacturers can integrate into their products. Any device with integrated SigFox hardware can connect to the internet – in regions where a SigFox network has been deployed – without any external hardware, like a Wi-Fi or ZigBee router. SigFox wireless systems send very small amounts of data (12 bytes) very slowly, at just 100 bits per second, so they can reach farther distances with fewer base stations. Up to 140 messages per object per day. Low battery consumption, as sensors can \u0026ldquo;go to sleep\u0026rdquo; when not transmitting data, consuming energy only when they need to. The focus on low-power, low-bandwidth communications also makes the SigFox network relatively easy to deploy. While adding a new SigFox node, reconfiguration of existing nodes is not required. Data transportation becomes very long range (distances up to 40 km in open field) and communication with buried, underground equipment becomes possible, all this being achieved with high reliability and minimal power consumption. Furthermore, the narrow-throughput transmission combined with sophisticated signal processing provides effective protection against interference. This also ensures that the integrity of the transmitted data is respected. This technology is a good fit for any application that needs to send small, infrequent bursts of data. Frequencies The SIGFOX network relies on Ultra-Narrow Band (UNB) modulation, operating in unlicensed sub-GHz frequency bands. This protocol offers great resistance to jamming \u0026amp; standard interferers, as well as great capacity per receiving base station. SIGFOX complies with local regulations, adjusting central frequency, power output and duty cycles. SIGFOX Requirements A SIGFOX-ready module or transceiver (SIGFOX solution), e.g., the SI446x. A valid subscription; development kits \u0026amp; evaluation boards come with an included one-year subscription. Be in a covered area (in the SIGFOX network). SIGFOX Network SIGFOX’s network is designed around a hierarchical structure:\n UNB modems communicate with base stations, or cells, covering large areas of several hundred square kilometers. 
UNB (Ultra Narrow Band) technology uses free frequency radio bands (no license needed) to transmit data over a very narrow spectrum to and from connected objects. Base stations route messages to servers. Servers check data integrity and route the messages to your information system. Messaging Sending a message through SIGFOX –\n There is no signaling, nor negotiation between a device and a receiving station. The device decides when to send its message, picking a pseudo-random frequency. It\u0026rsquo;s up to the network to detect the incoming messages, as well as validating \u0026amp; deduplicating them. The message is then available in the SIGFOX cloud, and forwarded to any 3rd-party cloud platform chosen by the user. Security Data security provided by SIGFOX –\n Every message is signed, with information proper to the device (incl. a unique private key) and to the message itself. This prevents spoofing, alteration or replay of genuine messages. Encryption \u0026amp; scrambling of the data are supported, with the choice of the most appropriate solution left to the consumer. Hardware solutions: SIGFOX makes its IP available at no cost to silicon and module vendors. This strategy has encouraged leading silicon and module vendors to integrate SIGFOX in their existing assets, giving solution enablers a wide choice of very competitively priced components. SIGFOX enables bi-directional as well as mono-directional communications and can co-exist with other communication protocols, both short and long range. High energy consumption has been a major obstacle with regard to unleashing the potential of the Internet of Things, which is why one of the key elements of the SIGFOX strategy is to continue to push the boundaries of energy consumption lower and lower. SIGFOX Web Application The web application provided by the SIGFOX system allows users to access the application and perform their desired tasks for devices. A number of REST endpoints are exposed by the server.\nThis research was supported by my colleagues at the IoT Center of Excellence and Nagarro and SIGFOX\n","id":181,"tag":"sigfox ; IOT ; platform ; introduction ; guide ; nagarro ; technology","title":"SIGFOX - An Introduction","type":"post","url":"/post/sigfox-an-introduction/"},{"content":"We recently migrated source code from Hibernate ORM to a JDBC (Spring JDBC template) based implementation. Performance improved 10 times.\nUse case (Oracle 11g, JBoss 6, JDK 6, Hibernate, Spring 3): A tree structure in the database is populated from a deep file system (directory structure) having around 75,000 nodes. Each node (directory) contains text files, which get parsed based on business rules and then populate the database (a BRANCH representing a node, tables referring to the branch, tree_nodes). The database tables were well mapped to Hibernate JPA entities, and on saving a Branch object, its relevant entities were saved automatically. The whole operation was performed in a recursive manner by traversing the directory tree. Each table’s primary key is auto-generated from a separate sequence.\nAs per development-level performance testing, it was estimated that the initial tree load would take 30 hours. 
This was not acceptable, as UAT could not be started without this migration.\nWe tried batching and flushing in Hibernate, and saw improvements, but Spring JDBC is 10 times faster than the Hibernate-based approaches.\nWhile migrating to the Spring JDBC based implementation, the biggest issue was to resolve the generated IDs referred to in other queries: each ID is generated by a sequence, and the Spring JDBC template’s ‘batchUpdate’ has no provision to fetch generated IDs. Hibernate was doing this automatically by updating the ID field of the entity object.\n We fetched bulk IDs from the sequence in a single query (select customer_id_seq.nextval from (select level from dual connect by LEVEL \u0026lt;=?size)). We set the IDs on the entity objects while iterating in the BatchPreparedStatementSetter.setValues method. This way the entity objects get populated in the same way as is done in Hibernate, without any special iteration/processing. Once the entity objects are populated, all the dependent batches can be fired, so that entity.getId returns the correct value; a minimal sketch of this pattern follows. 
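Here is a minimal sketch of that pattern, assuming a hypothetical Branch entity and an illustrative branch table; the sequence query is the one quoted above.

import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;
import org.springframework.jdbc.core.BatchPreparedStatementSetter;
import org.springframework.jdbc.core.JdbcTemplate;

class Branch {
    private Long id;
    private final String name;
    Branch(String name) { this.name = name; }
    Long getId() { return id; }
    void setId(Long id) { this.id = id; }
    String getName() { return name; }
}

class BranchBatchInserter {
    private final JdbcTemplate jdbcTemplate;
    BranchBatchInserter(JdbcTemplate jdbcTemplate) { this.jdbcTemplate = jdbcTemplate; }

    void insertAll(final List<Branch> branches) {
        // 1. Fetch a block of sequence values in a single round trip.
        final List<Long> ids = jdbcTemplate.queryForList(
            "select customer_id_seq.nextval from (select level from dual connect by level <= ?)",
            Long.class, branches.size());

        // 2. Assign ids while setting batch values, mimicking what Hibernate did for us.
        //    Illustrative table/column names below.
        jdbcTemplate.batchUpdate("insert into branch (id, name) values (?, ?)",
            new BatchPreparedStatementSetter() {
                public void setValues(PreparedStatement ps, int i) throws SQLException {
                    Branch branch = branches.get(i);
                    branch.setId(ids.get(i)); // entity now carries its generated id
                    ps.setLong(1, branch.getId());
                    ps.setString(2, branch.getName());
                }
                public int getBatchSize() { return branches.size(); }
            });
        // 3. Dependent batches can now use branch.getId() safely.
    }
}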
This way we were able to migrate the Hibernate based code-base to Spring JDBC with minimal changes in the source code.\nThis article was originally published on My LinkedIn Profile\n","id":182,"tag":"orm ; java ; hibernate ; spring ; technology","title":"Choose ORM carefully!","type":"post","url":"/post/choose-orm-carefully/"},{"content":"JavaScript has been regarded as a functional programming language by most developers. This book is targeted at all JavaScript developers, to shift their thinking from functional to object-oriented programming.\nThe course requires basic knowledge of JavaScript syntax. If you already know some JavaScript, you will find a lot of eye-openers as you go along and learn what more JavaScript can do for you.\nThis course starts with the basics of the JavaScript programming language and object-oriented programming (OOP) concepts, and how to achieve the same using JavaScript. Additional JavaScript concepts like AMD, JSON, and popular frameworks and libraries are also included in the document.\nThis course covers the JavaScript OOP concepts in comparison with the well-known object-oriented programming language Java. You will have comparative code in Java and JavaScript, and the use of the famous IDE Eclipse.\nThe usage of JavaScript has changed a lot since its invention. Earlier it was used only as the language for client-side scripting in the web browser, but now it is also being used for server-side scripting, desktop applications, mobile apps, gaming, graphics, charting/reporting, etc. The following diagram depicts its usages: Programming in JavaScript Let’s recall the JavaScript language. This section covers the very basics of JavaScript, but it may be an eye-opener for some developers, so try the examples given below and learn the flexibility of the JavaScript programming language. I hope you know how to run the examples given below; just add these in a \u0026lt;script\u0026gt; tag on an HTML page.\nGeneral Statements Following are different types of statements in the JavaScript language.\n// Single Line Comment /** * * Multi-Line * Comment */ //Declare var variable1; //Declare and define var variable1 = 10; //Assign variable1 = \u0026#34;New value\u0026#34;; //Check type of a variable alert (typeof variable1); // string - dynamic type language. console.log (\u0026#34;variable1=\u0026#34; + variable1); //log to console. I hope you know the ‘console’; if not, press F12 in Chrome or IE and see the ‘Console’ tab. Operators Equality Checking ==\tequal to ===\tequal value and equal type !=\tnot equal !==\tnot equal value or not equal type \u0026gt;\tgreater than \u0026lt;\tless than \u0026gt;=\tgreater than or equal to \u0026lt;=\tless than or equal to Guard/Default Operator \u0026amp;\u0026amp; - and || - or The logical operators are and, or, and not. The \u0026amp;\u0026amp; and || operators accept two operands and provide their logical result. \u0026amp;\u0026amp; and || are short-circuit operators. If the result is guaranteed after the evaluation of the first operand, they skip the second operand.\nTechnically, the exact return value of these two operators is equal to the final operand that is evaluated. This is why the \u0026amp;\u0026amp; operator and the || operator are also known as the guard operator and the default operator, respectively.\nArray and Map See the beauty of JavaScript in the flexible use of arrays.\n//Array var arrayVar = []; arrayVar [0] = \u0026#34;Text\u0026#34;; arrayVar [1] = 10; //Run time memory allocation. arrayVar [arrayVar.length] = \u0026#34;Next\u0026#34;; console.log (arrayVar.length); // 3 console.log (arrayVar [0]); // Text //Map Usage of same Array. arrayVar [\u0026#34;Name\u0026#34;] = \u0026#34;Kuldeep\u0026#34;; arrayVar [\u0026#34;Group\u0026#34;] = \u0026#34;JSAG\u0026#34;; console.log (arrayVar [\u0026#34;Name\u0026#34;]); // Kuldeep //You can also access a String key with the DOT operator. console.log (arrayVar.Name); // Kuldeep //You can also access an index the map way. console.log (arrayVar [\u0026#34;0\u0026#34;]); // Text console.log (arrayVar.length); //3 – Please note the length of the array increases only for numeric indexes arrayVar [\u0026#34;3\u0026#34;] = \u0026#34;JSAG\u0026#34;; console.log (arrayVar.length) //4 – Numeric values in a String are also treated as numeric indexes. 
Auto cast arrayVar [500] = \u0026#34;JSAG\u0026#34;; console.log (arrayVar.length) //501 Object Literals Object literals can be defined using JSON-like syntax.\nvar person1 = { name: \u0026#34;My Name\u0026#34;, age: 30, cars: [1, 3], addresses: [{ \tstreet1: \u0026#34;\u0026#34;, \tpin: \u0026#34;\u0026#34; }, { \tstreet1: \u0026#34;\u0026#34;, \tpin : \u0026#34;\u0026#34; } ] }; console.log (person1.name); console.log (person1.addresses [0].street1); Control Statements If-else-if chain Same as in any other language.\nvar value; var message = \u0026#34;\u0026#34;; if (value) { message = \u0026#34;Some valid value\u0026#34;; if (value == \u0026#34;O\u0026#34;) { \tmessage = \u0026#34;Old Value\u0026#34;; } else if (value == \u0026#34;N\u0026#34;) { \tmessage = \u0026#34;New Value\u0026#34;; } else { \tmessage = \u0026#34;Other Value\u0026#34;; } } else if (typeof (value) == \u0026#34;undefined\u0026#34;) { message = \u0026#34;Undefined Value\u0026#34;; } else { message = value; } console.log (\u0026#34;Message : \u0026#34; + message); Switch A switch accepts both numeric and string cases in the same block.\nvar value = \u0026#34;O\u0026#34;; var message = \u0026#34;\u0026#34;; switch (value) { case 1: message = \u0026#34;One\u0026#34;; break; case \u0026#34;O\u0026#34;: message = \u0026#34;Old Value\u0026#34;; break; case \u0026#34;N\u0026#34;: message = \u0026#34;New Value\u0026#34;; break; case 2: message = \u0026#34;Two\u0026#34;; break; default: message = \u0026#34;Unknown\u0026#34;; } console.log (\u0026#34;Message : \u0026#34; + message); Loop The while loop and the continue and break statements are the same as in Java.\n//The while loop while (value \u0026gt; 5) { value --; if (value == 100) { \tconsole.log (\u0026#34;Break Condition\u0026#34;); \tbreak; } else if (value \u0026gt; 10) { \tconsole.log (\u0026#34;Value more than 10\u0026#34;); \tcontinue; } console.log (\u0026#34;Value : \u0026#34; + value); } The for loop //For loop on array for (var index = 0; index \u0026lt; arrayVar.length; index++) { console.log (\u0026#34;For Index - \u0026#34;, index, arrayVar [index]); // Note that it will only print the entries with numeric indexes (because of the ++ operation). } //For each loop (similar to Java) for (var key in arrayVar) { console.log (\u0026#34;For Key - \u0026#34;, key, arrayVar [key]); // Note that it will print all the values of the array/map w.r.t. keys. } The do-while loop // Do while loop. do { console.log (\u0026#34;Do while\u0026#34;); if (value) { \tvalue = false; } } while (value) Function Similar to other programming languages, a function is a set of statements to perform some functionality.\nDefine The functions are defined as follows:\n// Declarative way /** * Perform the given action * * @param {String} actionName - Name of the action * @param {Object} payload - payload for the action to be performed. * @return {boolean} result of action */ function performAction (actionName, payload) { console.log (\u0026#34;perform action:\u0026#34;, actionName, \u0026#34;payload:\u0026#34;, payload); return true; } Since JS is a dynamic language, we don’t need to specify the types of the parameters or the return value.\nInvoke We can invoke a function as follows:\nvar returnValue = performAction (\u0026#34;delete\u0026#34;, \u0026#34;userData\u0026#34;); console.log (returnValue); The ‘arguments’ variable One can access/pass arguments in a function even if they are not defined in the original function. 
The ‘arguments’ variable is an array-like object; you can perform most of the operations of an array on it.\n/** * Perform an action * * @return {boolean} result of action */ function performAction () { console.log (arguments.length); console.log (\u0026#34;perform action:\u0026#34;, arguments [0], \u0026#34;payload:\u0026#34;, arguments [1]); return true; } var returnValue = performAction (\u0026#34;delete\u0026#34;, \u0026#34;userData\u0026#34;); console.log (returnValue); Function as Object We can assign a function definition to a variable, and access the function through that variable.\n/** * Perform an action * * @return {boolean} result of action */ var myFunction = function performAction () { console.log (arguments.length); console.log (\u0026#34;perform action:\u0026#34;, arguments [0], \u0026#34;payload:\u0026#34;, arguments [1]); return true; }; console.log (typeof myFunction); //function var returnValue = myFunction (\u0026#34;delete\u0026#34;, \u0026#34;userData\u0026#34;); Function inside Function In JavaScript, you can define functions inside a function.\n/** * Perform an action * * @return {boolean} result of action */ var myFunction = function performAction () { console.log (arguments.length); function doInternalAction (actionName, payload) { \t\tconsole.log (\u0026#34;perform action:\u0026#34;, actionName, \u0026#34;payload:\u0026#34;, payload); \treturn true; } return doInternalAction (arguments [0], arguments [1]); }; Exception Handling JavaScript supports exception handling similar to other languages.\nvar obj; try { console.log (\u0026#34;Value = \u0026#34; + obj.check); } catch (e) { console.log (e.toString ()); console.log (\u0026#34;Error occurred of type : \u0026#34; + e.name); console.log (\u0026#34;Error Details: \u0026#34; + e.message); } TypeError: Cannot read property 'check' of undefined Error occurred of type: TypeError Error Details: Cannot read property 'check' of undefined The Error object has a name and a message. You can also throw your own errors with a name and a message.\nfunction check (obj) { if (! obj.hasOwnProperty (\u0026#34;check\u0026#34;)) { \tthrow {name: \u0026#34;MyCheck\u0026#34;, message: \u0026#34;Check property does not exist\u0026#34;}; // Also try... throw new Error (\u0026#34;message\u0026#34;); } } var obj = {} try { check (obj); } catch (e) { console.log (e.toString ()); console.log (\u0026#34;Error occurred of type: \u0026#34; + e.name); console.log (\u0026#34;Error Details: \u0026#34; + e.message); } [object Object] Error occurred of type: MyCheck Error Details: Check property does not exist Context this The context of a function is accessed via the “this” keyword. It is a reference to the object through which the function is invoked. 
If we call a function directly, “this” refers to the window object.\nfunction log (message) { console.log (this, message); } log (\u0026#34;hi\u0026#34;); var Logger = {} Logger.info = function info (message) { console.log (this, message); } Logger.info (\u0026#34;hi\u0026#34;); Window {top: Window, window: Window, location: Location, external: Object, chrome: Object…} \u0026quot;hi\u0026quot; Object {info: function} \u0026quot;hi\u0026quot; Invoking functions with context You can change the context of a function while invoking it, using the “call” and “apply” syntax.\n[function object].call ([context], arg1, arg2, arg3 …); [function object].apply ([context], [argument array]); function info (message) { console.log (this, \u0026#34;INFO:\u0026#34; + this.name + \u0026#34;:\u0026#34; + this.message + \u0026#34; input: \u0026#34; + message); } var iObj = { name: \u0026#34;My Name\u0026#34;, message: \u0026#34;This is a message\u0026#34;, }; info.call (iObj, \u0026#34;hi\u0026#34;); info.apply (iObj, [\u0026#34;hi\u0026#34;]); Object {name: \u0026quot;My Name\u0026quot;, message: \u0026quot;This is a message\u0026quot;} \u0026quot;INFO: My Name: This is a message input: hi\u0026quot; Object {name: \u0026quot;My Name\u0026quot;, message: \u0026quot;This is a message\u0026quot;} \u0026quot;INFO: My Name: This is a message input: hi\u0026quot; Scope The scope of a variable is limited to the function in which it is defined, and the variable remains alive as long as that function’s scope exists.\nfunction script_js () { var info = function info (message) { \tconsole.log (\u0026#34;INFO:\u0026#34; + message); }; var message = \u0026#34;This is a message\u0026#34;; } info (message); Uncaught ReferenceError: message is not defined If a variable is not defined with the var keyword, it becomes a global variable, which is attached to the window context.\nfunction script_js () { info = function info (message) { \tconsole.log (\u0026#34;INFO:\u0026#34; + message); }; message = \u0026#34;This is a message\u0026#34;; } script_js (); info (message); INFO: This is a message But before accessing such global variables, we need to call the enclosing function once so that they get initialized in the scope. We can limit a variable’s scope to a JS file, or to a block, by defining it inside a function.\nInstead of explicitly calling such a scoping function after its definition, we can invoke the function along with its definition, as follows; this is called an IIFE (Immediately Invoked Function Expression).\n(function script_js () { info = function info (message) { \tconsole.log (\u0026#34;INFO:\u0026#34; + message); }; message = \u0026#34;This is a message\u0026#34;; })(); info (message); An IIFE can be used to limit the scope of variables w.r.t. a file, block, etc., and whatever needs to be exposed outside the scope can be assigned to global variables.\nReflection Reflection is examining the structure of a program and its data. The language of a program is called a programming language (PL), and the language in which the examination is done is called a meta programming language (MPL). PL and MPL can be the same language. For example,\nfunction getCarData () { var myCar = { name: \u0026#34;Toyota\u0026#34;, model: \u0026#34;Corolla\u0026#34;, power_hp: 1800, getColor: function () { // ... 
} }; alert (myCar.hasOwnProperty (\u0026#39;constructor\u0026#39;)); // false alert (myCar.hasOwnProperty (\u0026#39;getColor\u0026#39;)); // true alert (myCar.hasOwnProperty (\u0026#39;megaFooBarNonExisting\u0026#39;)); // false if (myCar.hasOwnProperty (\u0026#39;power_hp\u0026#39;)) { alert (typeof (myCar.power_hp)); // number } } getCarData (); Eval: The eval () method evaluates JavaScript code represented as a string, for example:\nvar x = 10; var y = 20; var a = eval (\u0026#34;x * y\u0026#34;); var b = eval (\u0026#34;2 + 3\u0026#34;); var c = eval (\u0026#34;x + 11\u0026#34;); alert (\u0026#39;a=\u0026#39;+a); alert (\u0026#39;b=\u0026#39;+b); alert (\u0026#39;c=\u0026#39;+c); Timers and Intervals With JavaScript, we can execute code at specified time intervals; these are called timing events. The two key methods used are:\n setInterval() - waits a specified number of milliseconds, executes the given function, and then continues to execute the function once every given time interval.\n window.setInterval (\u0026quot;javascript function\u0026quot;, milliseconds); //Alert \u0026quot;Hello\u0026quot; every 2 seconds setInterval (function () {alert (\u0026quot;Hello\u0026quot;)}, 2000); setTimeout() - executes a function once, after waiting a specified number of milliseconds\n window.setTimeout (\u0026quot;javascript function\u0026quot;, milliseconds); Object Oriented Programming in JavaScript As HTML5 standards mature, JavaScript has become the most popular programming language of the web for client-side environments, and it is rapidly gaining ground on the server side as well. Despite this popularity, a major concern with JavaScript is code quality: as JavaScript code-bases grow larger and more complex, the object-oriented capabilities of the language become increasingly crucial. Although JavaScript’s object-oriented capabilities are not as powerful as those of static server-side languages like Java, its object-oriented model can be utilized to write better code.\nThe Class and Object Construction A class consists of data and operations on that data, along with construction and initialization mechanisms. Let’s discuss how a class can be defined in JavaScript. There is no special keyword in JavaScript to define a class; a class is a combination of the following:\n Constructor - A constructor is defined as a normal function.\n/** * This is the constructor. */ function MyClass () { }; Let’s improve it so that it cannot be called as a plain function.\n/** * This is the constructor. */ function MyClass () { if (! (this instanceof MyClass)){ throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } }; MyClass (); Uncaught Error: Constructor can't be called as function Every object has a prototype associated with it. For the above class, the prototype will be:\nMyClass.prototype { constructor: MyClass () { if (! (this instanceof MyClass)){ throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } }, __proto__: Object {...} } Instance Fields You can define the instance fields by attaching variables to “this” or to the class’s prototype, inside the constructor or after creating the object. The class’s prototype can also be defined using JSON syntax.\n/** * This is the constructor. * @constructor */ function MyClass () { if (! 
(this instanceof MyClass)){ throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } this.instanceField1 = \u0026#34;instanceField1\u0026#34;; this.instanceField2 = \u0026#34;instanceField2\u0026#34;; }; MyClass.prototype.name = \u0026#34;test\u0026#34;; Let’s initialize the fields using the constructor\n/** * This is the constructor. * @param {String} arg1 * @param {int} arg2 * @constructor */ function MyClass (arg1, arg2) { if (! (this instanceof MyClass)){ throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } this.instanceField1 = arg1; this.instanceField2 = arg2; }; Instance Methods - We define instance methods the same way.\n/** * This is the constructor. * @param {String} arg1 * @param {int} arg2 * @constructor */ function MyClass (arg1, arg2) { if (! (this instanceof MyClass)){ throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } this.instanceField1 = arg1; this.instanceField2 = arg2; /** * Print the fields */ this.printFields = function printFields () { console.log (\u0026#34;instanceField1: \u0026#34; + this.instanceField1 + \u0026#34; instanceField2: \u0026#34; + this.instanceField2); }; }; MyClass.prototype.printName = function printName () { console.log (this.name); }; Instance Creation - An instance can be created using the new keyword.\nvar myInstance = new MyClass (\u0026#34;Kuldeep\u0026#34;, 31); console.log (myInstance.instanceField1); console.log (myInstance.instanceField2); myInstance.printName (); myInstance.printFields (); var myInstance1 = new MyClass (\u0026#34;Sanjay\u0026#34;, 30); console.log (myInstance1.instanceField1); console.log (myInstance1.instanceField2); myInstance1.printFields (); myInstance1.printName (); Kuldeep 31 test instanceField1: Kuldeep instanceField2: 31 Sanjay 30 instanceField1: Sanjay instanceField2: 30 test You can add/update/delete instance fields through an instance variable as well; any such change is effective only in that instance.\nmyInstance.instanceField1 = \u0026#34;Kuldeep Singh\u0026#34;; myInstance1.instanceField3 = \u0026#34;Rajasthan\u0026#34;; myInstance1.name = \u0026#34;New test\u0026#34;; MyClass.prototype.name = 100; myInstance.printFields = function () { console.log (\u0026#34;1: \u0026#34; + this.instanceField1 + \u0026#34; 2: \u0026#34; + this.instanceField2 + \u0026#34; 3: \u0026#34; + this.instanceField3); }; myInstance.printFields (); myInstance1.printFields (); myInstance.printFields.call (myInstance1); myInstance1.printName (); 1: Kuldeep Singh 2: 31 3: undefined script.js:227 instanceField1: Sanjay instanceField2: 30 script.js:198 1: Sanjay 2: 30 3: Rajasthan script.js:227 New test Please note that on “new”, the created instance holds a reference to the class’s prototype rather than a private copy; the constructor then initializes the instance’s own members, which shadow same-named prototype members. So reassigning MyClass.prototype to a new object after creation does not impact already created instances, while mutating the existing prototype object remains visible to instances that do not shadow the changed member.\n Static Fields and Methods - The static fields are variables which are associated with a class, not with an instance. In JavaScript the static variables are accessible only through the class reference; unlike Java, they are not accessible through an instance variable. 
There is no static keyword in JavaScript.\nMyClass.staticField1 = 100; MyClass.staticField2 = \u0026#34;Static\u0026#34;; MyClass.printStaticFields = function () { console.log (\u0026#34;staticField1: \u0026#34; + this.staticField1 + \u0026#34; staticField2: \u0026#34; + this.staticField2); }; var myInstance = new MyClass (\u0026#34;Kuldeep\u0026#34;, 31); MyClass.printStaticFields (); MyClass.printOtherFields = function () { console.log (\u0026#34;1: \u0026#34; + this.instanceField1 + \u0026#34; 2: \u0026#34; + this.instanceField2); }; MyClass.printOtherFields (); myInstance.printStaticFields (); staticField1: 100 staticField2: Static 1: undefined 2: undefined Uncaught TypeError: Object #\u0026lt;MyClass\u0026gt; has no method 'printStaticFields' Singleton - If you need only one instance of an object, use a singleton, either by defining the class with static methods or by defining the object as a literal using JSON syntax.\n Encapsulation In the above example we have clubbed the data and the operations together in one class structure, but the data can be accessed directly outside the class as well, which violates the encapsulation concept. In this section we will see how we can protect the data inside a class, or in other words define the access rules for the data and the operations. In JavaScript there are no access modifiers such as public, private, protected, or package. But we have discussed scope; with function scoping we can limit access to the data. Going forward, let’s write all the code inside IIFE scoping functions to control file-level access to the variables.\n Public - The variables and the methods which are associated with “this” or Class.prototype are public by default.\nperson.js\n(function () { /** * This is the constructor. * * @param {String} name * @param {int} age * @constructor */ function Person (name, age) { if (! (this instanceof Person)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } /** * @public Name */ this.name = name; /** * @public Age */ this.age = age; /** * @public * @return String representation */ this.toString = function toString () { return \u0026#34;Person [name:\u0026#34; + this.name + \u0026#34;, age:\u0026#34; + this.age + \u0026#34;]\u0026#34;;\t }; }; /** * @public Full Name of the person. */ Person.prototype.fullName = \u0026#34;Kuldeep Singh\u0026#34;; var person = new Person (\u0026#34;Kuldeep\u0026#34;, 31); console.log (\u0026#34;Person: \u0026#34; + person); console.log (person.name); console.log (person.fullName); })(); Person: Person [name: Kuldeep, age: 31] Kuldeep Kuldeep Singh Private - Anything defined with ‘var’ inside a function remains private to that function. So variables and methods defined with the var keyword in a constructor are private to the constructor (and logically to the class); declarative-style inner methods are also in the private scope.\n(function () { /** * This is the constructor. * * @param {String} name * @param {int} age * @constructor */ function Person (name, age) { if (! 
(this instanceof Person)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } /** * @private Name */ var _name = name; /** * @private Age */ var _age = age; /** * @public String representation * @return */ this.toString = function toString () { return \u0026#34;Person [name:\u0026#34; + _name + \u0026#34;, age:\u0026#34; + _age + \u0026#34;]\u0026#34;; }; }; var person = new Person (\u0026#34;Kuldeep\u0026#34;, 31); console.log (\u0026#34;Person: \u0026#34; + person); console.log (person._name); })(); Person: Person [name: Kuldeep, age: 31] undefined Expose the data using getters and setters wherever required; such access to private members is called privileged access. The private members are not accessible from a prototypical definition (Class.prototype) of the public methods.\n(function () { /** * This is the constructor. * * @param {String} name * @param {int} age * @constructor */ function Person (name, age) { if (! (this instanceof Person)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } /** * @private Name */ var _name = name; /** * @private Age */ var _age = age; /** * @public */ this.getName = function getName () { return _name; }; /** * @public */ this.getAge = function getAge () { return _age; }; /** * @public String representation * @return */ this.toString = function toString () { return \u0026#34;Person [name:\u0026#34; + _name + \u0026#34;, age:\u0026#34; + _age + \u0026#34;]\u0026#34;; }; }; var person = new Person (\u0026#34;Kuldeep Singh\u0026#34;, 31); console.log (person.getAge ()); console.log (person.getName ()); })(); It’s a good practice to define the public API in the prototype of a class and to access private variables using the privileged methods.\n Inheritance In JavaScript there are two ways to implement inheritance:\n Prototypical Inheritance - This type of inheritance is implemented by\n Copying the prototype of the base class to the subclass, Changing the constructor property of the prototype We have created a Person class. Now let’s expose the class through the window context. person.js\n(function () { /** * This is the constructor. * * @param {String} name * @param {int} age * @constructor */ function Person (name, age) { if (! (this instanceof Person)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } var _name = name; var _age = age; this.getName = function getName (){ return _name; }; this.getAge = function getAge () { return _age; }; /** * @public String representation * @return */ this.toStr = function toString () { return \u0026#34;name :\u0026#34; + _name + \u0026#34;, age :\u0026#34; + _age; }; }; window.Person = Person; })(); Let’s extend the Person class and create an Employee class.\nEmployee.prototype = new Person (); Employee.prototype.constructor = Employee; JavaScript does not have a “super” keyword, so we need to call the super constructor explicitly using call or apply. employee.js\n(function () { /** * This is the constructor. * * @param {String} type * @constructor */ function Employee (code, firstName, lastName, age) { if (! 
(this instanceof Employee)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } //Calling super constructor Person.call (this, firstName, age); // Private fields var _code = code; var _firstName = firstName; var _lastName = lastName; // Public getters this.getCode = function getCode () { return _code; }; this.getFirstName = function getFirstName () { return _firstName; }; this.getLastName = function getLastName () { return _lastName; }; this.toStr = function toString () { return \u0026#34;Code:\u0026#34; + _code + \u0026#34;, Name :\u0026#34; + _firstName +\u0026#34; \u0026#34;+ _lastName; }; }; Employee.prototype = new Person (); Employee.prototype.constructor = Employee; var emp = new Employee (425, \u0026#34;Kuldeep\u0026#34;, \u0026#34;Singh\u0026#34;, 31); console.log (\u0026#34;Employee: \u0026#34; + emp.toStr ()); console.log (emp.getAge ()); console.log (emp.getName ()); })(); Employee: Code: 425, Name: Kuldeep Singh 31 Kuldeep All the methods of the Person class are available in the Employee class. You can also check instanceof and isPrototypeOf.\n console.log (emp instanceof Employee); console.log (emp instanceof Person); console.log (Employee.prototype.isPrototypeOf (emp)); console.log (Person.prototype.isPrototypeOf (emp)); The complete prototype chain is maintained; when a member of a child object is accessed, it is searched for among the members of the object itself. If it is not found there, it is searched for in its prototype object. If it is still not found, it is searched for in the members and the prototype of the parent object, and so on all the way up to the root Object’s prototype.\nPrototypical inheritance supports only multi-level inheritance (a single chain of parents). If you make a change in a base class or its prototype after creating an object instance, the change takes effect as per the traversal rule explained above.\nIn the above example, if you add a method to the Person class after creating the emp object, that method will also be available in the emp object, even though it was not there when the emp object was created.\nIn the above code we have also overridden the toStr method and called the super constructor from the constructor.\nImprovement:\nWe have created a new object of Person to create a copy of its prototype. Here we have performed an extra operation: calling the constructor with no arguments. We can avoid this extra operation as follows.\nInstead of new Person(), do Object.create (Person.prototype)\nIt creates a better prototype chain for the created object.\nWith new\n Employee.prototype = new Person (); Employee.prototype.constructor = Employee; With Object.create()\n Employee.prototype = Object.create (Person.prototype); Employee.prototype.constructor = Employee; Mixin Based Inheritance - When we need to inherit the functionality of multiple objects, we use this approach to implement inheritance. This can be achieved with external libraries which copy all the properties of the source objects to the destination objects, such as _.extend or $.extend. Let’s try it with jQuery; include “jquery-x.y.z.min.js” in the HTML page. 
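Conceptually, such an extend utility is little more than a shallow property-copy loop. Before the jQuery version, here is a minimal hand-rolled sketch of the idea (my own illustration, not the actual jQuery or Underscore implementation):

// Copies the own enumerable properties of each source object onto target.
function extend (target) {
  for (var i = 1; i < arguments.length; i++) {
    var source = arguments [i];
    for (var key in source) {
      if (source.hasOwnProperty (key)) {
        target [key] = source [key];
      }
    }
  }
  return target;
}
console.log (extend ({}, {a: 1}, {b: 2})); // Object {a: 1, b: 2}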
Change the Person class to a JSON-style object:\nperson.js\n(function () { Person = { constructor: function Person (name, age) { var _name = name; /** * @private Age */ var _age = age; /** * @public */ this.getName = function getName () { return _name; }; /** * @public */ this.getAge = function getAge () { return _age; }; /** * @public String representation * @return */ this.toStr = function toString () { return \u0026#34;name:\u0026#34; + _name + \u0026#34;, age:\u0026#34; + _age; }; } }; window.Person = Person; })(); Similarly, the Employee class as a JSON-style object:\nemployee.js\n(function () { Employee = { constructor: function Employee (code, firstName, lastName, age) { // Call super Person.constructor.call (this, firstName, age); // Private fields var _code = code; var _firstName = firstName; var _lastName = lastName; // Public getters this.getCode = function getCode () { return _code; }; this.getFirstName = function getFirstName () { return _firstName; }; this.getLastName = function getLastName () { return _lastName; }; this.toStr = function toString () { return \u0026#34;Code :\u0026#34; + _code + \u0026#34;, Name :\u0026#34; + _firstName + \u0026#34; \u0026#34; + _lastName; }; } }; var emp = {}; $.extend (emp, Person, Employee).constructor (425, \u0026#34;Kuldeep\u0026#34;, \u0026#34;Singh\u0026#34;, 31); console.log (emp); var emp2 = {}; $.extend (emp2, Person, Employee).constructor (425, \u0026#34;Test1\u0026#34;, \u0026#34;Test1\u0026#34;, 30); console.log (\u0026#34;Employee : \u0026#34; + emp.toStr ()); console.log (emp.getAge ()); console.log (emp2.getName ()); })(); Employee {constructor: function, getName: function, getAge: function, toStr: function, getCode: function…} Employee: Code: 425, Name: Kuldeep Singh 31 Test1 With this approach you cannot check instanceof or isPrototypeOf:\nconsole.log (emp instanceof Employee); console.log (emp instanceof Person); console.log (Employee.prototype.isPrototypeOf (emp)); console.log (Person.prototype.isPrototypeOf (emp));\nThis is because all the properties are copied into the object and remain its own properties.\nPlease note that if you change the structure (adding/changing/deleting a property) of the original base classes after creating the object, that change will not be propagated to the child object instance. So mixin-based inheritance does not keep the parent-child relationship.\n Abstraction Due to the flexibility of JavaScript, abstraction is not needed as much, since we can pass/use data without defining types. But we can still implement abstract types and interfaces to enforce some common functionality.\n Abstract Type - Abstract types are data types (classes) which cannot be instantiated. Such data types are used to contain common functionality as well as the abstraction of the functionality that a sub-class should implement. For example, the Human class - human.js\n(function () { function Human () { if (! (this instanceof Human)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } throw new Error (\u0026#34;Can’t instantiate Human class\u0026#34;); }; Human.prototype.getClassName = function _getClassName () { return \u0026#34;class: Human\u0026#34;; }; /** * @abstract */ Human.prototype.getBirthDate = function _getBirthDate () { throw new Error (\u0026#34;Not implemented\u0026#34;); }; window.Human = Human; })(); var h1 = new Human (); Uncaught Error: Can’t instantiate Human class Let’s extend this class in Person. 
person.js\n(function () { /** * This is the constructor. * * @param {String} name * @param {int} age * @constructor */ function Person (name, age) { if (! (this instanceof Person)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } var _name = name; var _age = age; this.getName = function getName () { return _name; }; this.getAge = function getAge () { return _age; }; /** * @public String representation * @return */ this.toStr = function toString () { return \u0026#34;name:\u0026#34; + _name + \u0026#34;, age:\u0026#34; + _age; }; }; Person.prototype = Object.create (Human.prototype); Person.prototype.constructor = Person; window.Person = Person; var p1 = new Person (\u0026#34;Kuldeep\u0026#34;, 31); console.log (p1.getName ()); console.log (p1.getClassName ()); console.log (p1.getBirthDate ()); })(); Kuldeep class: Human Uncaught Error: Not implemented So an implementation needs to be provided for the abstract methods.\n Interface - An interface is similar to an abstract class, but it enforces that an implementing class must implement all the methods declared in the interface. There is no direct interface or virtual class in JavaScript, but this can be emulated by checking whether a method is implemented in the class or not. Example: human.js\n(function () { var IFactory = function (name, methods) { this.name = name; // copies array this.methods = methods.slice (0); }; /** * @static * Method to validate if methods of interface are available in \u0026#39;this_obj\u0026#39; * * @param {object} this_obj - object of implementing class. * @param {object} interface - object of interface. */ IFactory.validate = function (this_obj, interface) { for (var i = 0; i \u0026lt; interface.methods.length; i++) { var method = interface.methods[i]; if (! this_obj [method] || typeof this_obj [method] !== \u0026#34;function\u0026#34;) { throw new Error (\u0026#34;Class must implement Method: \u0026#34; + method); } } }; window.IFactory = IFactory; /** * @interface */ var Human = new IFactory(\u0026#34;Human\u0026#34;, [\u0026#34;getClassName\u0026#34;, \u0026#34;getBirthDate\u0026#34;]); window.Human = Human; })(); Let’s have a class which must comply with the Human interface definition. person.js\n(function () { /** * This is the constructor. * * @param {String} name * @param {int} age * @constructor */ function Person (name, age) { if (! (this instanceof Person)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } var _name = name; var _age = age; this.getName = function getName () { return _name; }; this.getAge = function getAge () { return _age; }; /** * @public String representation * @return */ this.toStr = function toString () { return \u0026#34;name:\u0026#34; + _name + \u0026#34;, age:\u0026#34; + _age; }; IFactory.validate (this, Human); }; window.Person = Person; var p1 = new Person (\u0026#34;Kuldeep\u0026#34;, 31); console.log (p1.getName ()); })(); Uncaught Error: Class must implement Method: getClassName Let’s implement the unimplemented methods: person.js\n(function () { /** * This is the constructor. * * @param {String} name * @param {int} age * @constructor */ function Person (name, age, dob) { if (! 
(this instanceof Person)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } var _name = name; var _age = age; var _dob = dob; this.getName = function _getName () { return _name; }; this.getAge = function _getAge () { return _age; }; this.getBirthDate = function _getDob () { return _dob; }; /** * @public String representation * @return */ this.toStr = function toString () { return \u0026#34;name:\u0026#34; + _name + \u0026#34;, age:\u0026#34; + _age; }; IFactory.validate (this, Human); }; Person.prototype.getClassName = function _getClassName () { return \u0026#34;class: Person\u0026#34;; }; window.Person = Person; var p1 = new Person (\u0026#34;Kuldeep\u0026#34;, 31, \u0026#34;05-Aug-1982\u0026#34;); console.log (p1.getName ()); console.log (p1.getBirthDate ()); console.log (p1.getClassName ()); })(); Kuldeep 05-Aug-1982 class: Person Polymorphism Polymorphism is referred to as multiple forms associated with one name. In JavaScript polymorphism is supported by overloading and overriding the data and the functionality.\n Overriding - Overriding is supported in JavaScript, if you define the function/variable with same name again in child class it will override the definition which was inherited from the parent class.\n Function overriding - The example of overriding can be found in Person-Employee example of Inheritance Variable overriding - You can override the variables in the same way as you can override the functions. Calling super.function() - Now let’s see how we can inherit the functionality of the base class while overriding, in other words, calling super.method(). Since there is no “super” keyword in JavaScript, we need to implement it using call/apply. Let’s consider person.js created in the last section. person.js\n(function () { /** * This is the constructor. * * @param {String} name * @param {int} age * @constructor */ function Person (name, age, dob) { if (! (this instanceof Person)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } var _name = name; var _age = age; var _dob = dob; this.getName = function _getName () { return _name; }; this.getAge = function _getAge () { return _age; }; this.getBirthDate = function _getDob () { return _dob; }; /** * @public String representation * @return */ this.toStr = function toString () { return \u0026#34;name:\u0026#34; + _name + \u0026#34;, age:\u0026#34; + _age; }; IFactory.validate (this, Human); }; Person.prototype.getClassName = function _getClassName () { return \u0026#34;class: Person\u0026#34;; }; window.Person = Person; })(); And have employee.js as follows: employee.js\n(function () { /** * This is the constructor. * * @param {String} type * @constructor */ function Employee (code, firstName, lastName, age) { if (! 
(this instanceof Employee)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } //Calling super constructor Person.call (this, firstName, age); // Private fields var _code = code; var _firstName = firstName; var _lastName = lastName; // Public getters this.getCode = function _getCode () { return _code; }; this.getFirstName = function _getFirstName () { return _firstName; }; this.getLastName = function _getLastName () { return _lastName; }; this.toStr = function _toString () { return \u0026#34;Code:\u0026#34; + _code + \u0026#34;, Name :\u0026#34; + _firstName +\u0026#34; \u0026#34;+ _lastName; }; }; Employee.prototype = Object.create (Person.prototype); Employee.prototype.constructor = Employee; Employee.prototype.getClassName = function _getClassName () { //Calling super.getClassName () var baseClassName = Person.prototype.getClassName.call (this); return \u0026#34;class: Employee extending \u0026#34; + baseClassName; }; var emp = new Employee (425, \u0026#34;Kuldeep\u0026#34;, \u0026#34;Singh\u0026#34;, 31); console.log (\u0026#34;Employee: \u0026#34; + emp.toStr ()); console.log (emp.getClassName ()); })(); Employee: Code: 425, Name: Kuldeep Singh class: Employee extending class: Person Privileged Function vs. Public Function - Now let’s try to call the privileged function in the child class. Change the Employee.toStr method as follows\nthis.toStr = function toString () { var baseStr = Person.prototype.toStr.call (this); return \u0026#34;Code:\u0026#34; + _code + \u0026#34;, Name:\u0026#34; + _firstName +\u0026#34; \u0026#34;+ _lastName + \u0026#34; baseStr:\u0026#34; + baseStr; }; And you will get Uncaught TypeError: Cannot call method \u0026lsquo;call\u0026rsquo; of undefined The reason is that you cannot call the parent’s privileged method from the child class, though you can override the privileged method. Please note this difference between the privileged methods (not linked with the prototype) and the public methods (linked with the prototype).\n Overloading\n Operator Overloading - JavaScript has built-in operator overloading for the “+” operator: for numeric data types it acts as addition, but for String data types it acts as a concatenation operator. No custom overloading is supported for operators. Functional Overloading - JavaScript functions are flexible in terms of the number of arguments, the types of arguments, and the return types; only the method name is considered the signature of the method. You can pass any number of arguments, or any type of argument, to a function, whether or not the function definition has declared those parameters (please refer to the section on JavaScript functions). Due to this flexibility, functional overloading is not supported in JavaScript: if you define another function with the same name but different parameters, it will simply overwrite the original function. Modularization Once we have many custom object types, classes, and files, we need to group them and manage their visibility, dependencies, and loading. As of now, JavaScript does not directly support managing the visibility, dependencies, and loading of objects or classes. Generally these things are managed by namespaces/packages or modules in other languages. However, a future version of JavaScript (ECMAScript 6) will come with full-fledged support for modules and namespaces. 
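For reference, the module syntax planned for ECMAScript 6 expresses exports and imports directly in the language, roughly as sketched below (the file names are illustrative, reusing the classes from this course):

// person.js (ES6 module syntax)
export function Person (name, age) {
  this.name = name;
  this.age = age;
}

// employee.js (ES6 module syntax)
import { Person } from './person.js';
export function Employee (code, name, age) {
  Person.call (this, name, age);
  this.code = code;
}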
Till then, let’s see how it can be implemented in JavaScript.\n Package or Namespace - There is no inbuilt package or namespace in JavaScript, but we can implement one using JSON-style objects. namespaces.js\n(function () { // create namespace /** * @namespace com.tk.training.oojs */ var com = { tk: { training: { oojs: {} } } }; /** * @namespace com.tk.training.oojs.interface */ com.tk.training.oojs.interface = {}; /** * @namespace com.tk.training.oojs.manager */ com.tk.training.oojs.manager = {}; //Register namespace with global. window.com = com; })(); Let’s use these namespaces in the classes we have created so far. human.js\n(function () { ... ... /** * @interface */ var Human = new IFactory (\u0026#34;Human\u0026#34;, [\u0026#34;getClassName\u0026#34;, \u0026#34;getBirthDate\u0026#34;]); //register with namespace com.tk.training.oojs.interface.IFactory = IFactory; com.tk.training.oojs.interface.Human = Human; })(); // person.js (function () { function Person (name, age, dob) { ... ... com.tk.training.oojs.interface.IFactory.validate (this, com.tk.training.oojs.interface.Human); }; Person.prototype.getClassName = function _getClassName () { return \u0026#34;class: Person\u0026#34;; }; //Register with namespace. com.tk.training.oojs.manager.Person = Person; })(); employee.js\n(function () { function Employee (code, firstName, lastName, age) { ... //Calling super constructor com.tk.training.oojs.manager.Person.call(this, firstName, age); ... }; Employee.prototype = Object.create (com.tk.training.oojs.manager.Person.prototype); Employee.prototype.constructor = Employee; Employee.prototype.getClassName = function _getClassName () { //Calling super.getClassName () var baseClassName = com.tk.training.oojs.manager.Person.prototype.getClassName.call(this); return \u0026#34;class: Employee extending \u0026#34; + baseClassName; }; com.tk.training.oojs.manager.Employee = Employee; })(); var emp = new com.tk.training.oojs.manager.Employee(425, \u0026#34;Kuldeep\u0026#34;, \u0026#34;Singh\u0026#34;, 31); console.log (emp); console.log (emp.getClassName ()) This approach is quite verbose. 
You can shorten this by creating a local variable for the package, such as\n var manager = com.tk.training.oojs.manager console.log (manager.Person) Modules - A module is a way of dividing JavaScript code into packages; with it, we define what a module exports and what its dependencies are.\nLet’s define a simple module:\n/** * @param {Array} dependencies * @param {Array} parameters */ function module (dependencies, parameters) { var localVariable1 = parameters [0]; var localVariable2 = parameters [1]; /** * @private function */ function localFunction () { return localVariable1 = localVariable1 + localVariable2; } /** * @exports */ return { getVariable1: function _getV1 () { return localVariable1; }, getVariable2: function _getV2 () { return localVariable2; }, add: localFunction, print: function _print () { console.log (\u0026#34;1=\u0026#34;, localVariable1, \u0026#34;2=\u0026#34;, localVariable2); } }; } var moduleInstance = module ([], [5, 6]); console.log (moduleInstance.getVariable1 ()); console.log (moduleInstance.getVariable2 ()); console.log (moduleInstance.add ()); moduleInstance.print (); moduleInstance = module ([], [11, 12]); console.log (moduleInstance.getVariable1 ()); console.log (moduleInstance.getVariable2 ()); console.log (moduleInstance.add ()); moduleInstance.print (); 5 6 11 1= 11, 2= 6 11 12 23 1= 23, 2= 12 A module encapsulates the internal data and exports the functionality; we can create a module for creating the objects of the classes. Every time a module is called, it returns a new instance of the returned JSON object. All the exported properties and APIs are accessed from the returned object. Let’s change the classes created so far to follow the module pattern.\nnamespaces.js\nvar com = (function () { // create namespace /** * @namespace com.tk.training.oojs */ var com = {\ttk: {training: {oojs: {}}}}; /** * @namespace com.tk.training.oojs.interface */ com.tk.training.oojs.interface = {}; /** * @namespace com.tk.training.oojs.manager */ com.tk.training.oojs.manager = {}; return com; })(); ifactory.js\n com.tk.training.oojs.interface.IFactory = (function () { /** * @static * Method to validate if methods of interface are available in \u0026#39;this_obj\u0026#39; * * @param {object} this_obj - object of implementing class. * @param {object} interface - object of interface. */ function validate (this_obj, interface) { for (var i = 0; i \u0026lt; interface.methods.length; i++) { var method = interface.methods[i]; if (! this_obj [method] || typeof this_obj [method] !== \u0026#34;function\u0026#34;) { throw new Error (\u0026#34;Class must implement Method: \u0026#34; + method); } } }; return {validate: validate}; })(); human.js\ncom.tk.training.oojs.interface.Human = (function() { return {name: \u0026#34;Human\u0026#34;, methods: [\u0026#34;getClassName\u0026#34;, \u0026#34;getBirthDate\u0026#34;]}; })(); person.js\ncom.tk.training.oojs.manager.Person = (function (dependencies) { //dependencies var IFactory = dependencies [0]; var Human = dependencies [1]; function Person (name, age, dob) { if (! 
(this instanceof Person)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } var _name = name; var _age = age; var _dob = dob; this.getName = function _getName () { return _name; }; this.getAge = function _getAge () { return _age; }; this.getBirthDate = function _getDob () { return _dob; }; /** * @public String representation * @return */ this.toStr = function toString () { return \u0026#34;name:\u0026#34; + _name + \u0026#34;, age:\u0026#34; + _age; }; IFactory.validate (this, Human); }; Person.prototype.getClassName = function _getClassName () { return \u0026#34;class: Person\u0026#34;; }; //export the person class; return Person; }) ([com.tk.training.oojs.interface.IFactory, com.tk.training.oojs.interface.Human]); employee.js\ncom.tk.training.oojs.manager.Employee = (function (dependencies) { //dependencies var Person = dependencies [0]; /** * This is the constructor. * * @param {String} type * @constructor */ function Employee (code, firstName, lastName, age) { if (! (this instanceof Employee)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } //Calling super constructor Person.call (this, firstName, age); // Private fields var _code = code; var _firstName = firstName; var _lastName = lastName; // Public getters this.getCode = function _getCode () { return _code; }; this.getFirstName = function _getFirstName () { return _firstName; }; this.getLastName = function _getLastName () { return _lastName; }; this.toStr = function toString () { return \u0026#34;Code:\u0026#34; + _code + \u0026#34;, Name:\u0026#34; + _firstName +\u0026#34; \u0026#34;+ _lastName; }; }; Employee.prototype = Object.create (Person.prototype); Employee.prototype.constructor = Employee; Employee.prototype.getClassName = function _getClassName () { //Calling super.getClassName () var baseClassName = Person.prototype.getClassName.call (this); return \u0026#34;class: Employee extending \u0026#34; + baseClassName; }; //export the employee class; return Employee; }) ([com.tk.training.oojs.manager.Person]); var emp = new com.tk.training.oojs.manager.Employee (425, \u0026#34;Kuldeep\u0026#34;, \u0026#34;Singh\u0026#34;, 31); console.log (emp); console.log (emp.getClassName ()); You can also encapsulate the data in a module, so instead of having a lot of classes we may have modules returning the exposed APIs.\nBy the above approach we can clearly state the dependencies in each module. But we still need to define a global variable to hold the dependencies. Also, in an HTML page we need to include javascript files in the specific order such that all the dependencies get resolved correctly.\n\u0026lt;script src=\u0026#34;resources/jquery-1.9.0.min.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script src=\u0026#34;resources/namespaces.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script src=\u0026#34;resources/ifactory.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script src=\u0026#34;resources/human.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script src=\u0026#34;resources/person.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script src=\u0026#34;resources/employee.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; As the number of classes, modules, and files increases, it will be difficult to remember the order of dependencies. So maintaining such a code is still difficult. 
In the next section we will discuss how to resolve this issue.\n Dependencies Management - Dependency management in JavaScript can be done using the AMD (Asynchronous Module Definition) approach, which is based upon the module pattern discussed in the last section. The modules are loaded as and when required, and only once. In general, JavaScript files are loaded in the sequence in which they are included in the HTML page, but using AMD they are loaded asynchronously and in the order in which they are used.\nThere are JavaScript libraries which support the AMD approach, one of which is require.js. We can also group modules into multiple folders, which eliminates the need for namespaces. See the following updated code using the AMD approach. Download require.js from here, include it in the HTML, and remove the other script tags.\n\u0026lt;script data-main=\u0026#34;resources/main.js\u0026#34; src=\u0026#34;resources/require.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; We have refactored the JavaScript code into interface and impl folders. The syntax for defining a module:\n define (\u0026lt;array of dependencies\u0026gt;, function (Dependency1, Dependency2 …) { }); Similar syntax is used for require. Let’s define the modules created so far in AMD syntax.\nhuman.js\ndefine (function () { console.log (\u0026#34;Loaded Human\u0026#34;); return {name: \u0026#34;Human\u0026#34;, methods: [\u0026#34;getClassName\u0026#34;, \u0026#34;getBirthDate\u0026#34;]}; }); ifactory.js\ndefine (function () { console.log (\u0026#34;Loaded IFactory\u0026#34;); /** * @static * Method to validate if methods of interface are available in \u0026#39;this_obj\u0026#39; * * @param {object} this_obj - object of implementing class. * @param {object} interface - object of interface. */ function validate (this_obj, interface) { for (var i = 0; i \u0026lt; interface.methods.length; i++) { var method = interface.methods[i]; if (! this_obj [method] || typeof this_obj [method] !== \u0026#34;function\u0026#34;) { throw new Error (\u0026#34;Class must implement Method: \u0026#34; + method); } } }; return {validate: validate}; }); person.js\ndefine ([\u0026#34;interface/human\u0026#34;, \u0026#34;interface/ifactory\u0026#34;], function (Human, IFactory) { console.log (\u0026#34;Loaded Person with dependencies:\u0026#34;, Human, IFactory); function Person (name, age, dob) { if (! (this instanceof Person)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } var _name = name; var _age = age; var _dob = dob; this.getName = function _getName () { return _name; }; this.getAge = function _getAge () { return _age; }; this.getBirthDate = function _getDob () { return _dob; }; /** * @public String representation * @return */ this.toStr = function toString () { return \u0026#34;name:\u0026#34; + _name + \u0026#34;, age:\u0026#34; + _age; }; IFactory.validate (this, Human); }; Person.prototype.getClassName = function _getClassName () { return \u0026#34;class: Person\u0026#34;; }; //export the person class; return Person; }); employee.js\ndefine ([\u0026#34;impl/person\u0026#34;], function (Person) { console.log (\u0026#34;Loaded Employee\u0026#34;); function Employee (code, firstName, lastName, age) { if (! 
(this instanceof Employee)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } //Calling super constructor Person.call (this, firstName, age); // Private fields var _code = code; var _firstName = firstName; var _lastName = lastName; // Public getters this.getCode = function _getCode () { return _code; }; this.getFirstName = function _getFirstName () { return _firstName; }; this.getLastName = function _getLastName () { return _lastName; }; this.toStr = function toString () { return \u0026#34;Code:\u0026#34; + _code + \u0026#34;, Name:\u0026#34; + _firstName +\u0026#34; \u0026#34;+ _lastName; }; }; Employee.prototype = Object.create (Person.prototype); Employee.prototype.constructor = Employee; Employee.prototype.getClassName = function _getClassName () { //Calling super.getClassName () var baseClassName = Person.prototype.getClassName.call (this); return \u0026#34;class: Employee extending \u0026#34; + baseClassName; }; //export the employee class; return Employee; }); main.js\nrequire ([\u0026#34;impl/employee\u0026#34;], function (Employee) { console.log (\u0026#34;Loaded Main\u0026#34;); var emp = new Employee (425, \u0026#34;Kuldeep\u0026#34;, \u0026#34;Singh\u0026#34;, 31); console.log (emp); console.log (emp.getClassName ()); }); Loaded Human Loaded IFactory Loaded Person with dependencies: Object {name: \u0026quot;Human\u0026quot;, methods: Array [2]} Object {validate: function} Loaded Employee Loaded Main Employee {getName: function, getAge: function, getBirthDate: function, toStr: function, getCode: function…} class: Employee extending class: Person In the captured network timeline of the JS loading sequence, human.js and ifactory.js are loaded almost in parallel. Documentation This section describes JavaScript documentation using JSDoc, which is a markup language used to annotate JavaScript source code files. The syntax and the semantics of JSDoc are similar to those of the Javadoc scheme, which is used for documenting code written in Java. JSDoc differs from Javadoc in that it is specialized to handle the dynamic behavior of JavaScript. The examples here use jsdoc_toolkit-2.4.0.\nGenerate Documentation Download the JSDoc toolkit and unzip the archive.\n Running JsDoc Toolkit requires you to have Java installed on your computer. For more information see http://www.java.com/getjava/.\n Before running the JsDoc Toolkit app you should change your current working directory to the jsdoc-toolkit folder. Then follow the examples given below or as shown on the project wiki.\n Command Line - On a computer running Windows a valid command line to run JsDoc Toolkit might look like this:\njava -jar jsrun.jar app\\run.js -a -t=templates\\jsdoc mycode.js You can use the -d switch to specify an output directory.\n Reports - It generates reports similar to Javadoc.\n Deployment We have discussed modularization and documentation, which result in a lot of files and many lines of documentation. This means a lot of round trips to the server for loading the JavaScript, which is not recommended in production. We should minify, obfuscate, and concatenate the JavaScript files to reduce the round trips and the size of the JavaScript code to be executed in browsers.\nDuring development we should use the AMD approach, and for production we can optimize the scripts with the following steps.\nInstall r.js Optimizer Download node.js and install it. 
On the command prompt, run the following (requirejs is the npm package that provides the r.js optimizer): C:\\\u0026gt;npm install -g requirejs Generate build profile resources/build-profile.js\n({ baseUrl: \u0026#34;.\u0026#34;, name: \u0026#34;main\u0026#34;, out: \u0026#34;main-build.js\u0026#34; }) Build, Package and Optimize Run the following command in the command prompt\nr.js.cmd -o build-profile.js It will generate a file main-build.js as follows, which is a minified, obfuscated and concatenated file.\ndefine(\u0026#34;interface/human\u0026#34;,[],function(){return console.log(\u0026#34;Loaded Human\u0026#34;),{name:\u0026#34;Human\u0026#34;,methods:[\u0026#34;getClassName\u0026#34;,\u0026#34;getBirthDate\u0026#34;]}}),define(\u0026#34;interface/ifactory\u0026#34;,[],function(){function e(e,t){for(var n=0;n\u0026lt;t.methods.length;n++){var r=t.methods[n];if(!e[r]||typeof e[r]!=\u0026#34;function\u0026#34;)throw new Error(\u0026#34;Class must implement Method: \u0026#34;+r)}}return console.log(\u0026#34;Loaded IFactory\u0026#34;),{validate:e}}),define(\u0026#34;impl/person\u0026#34;,[\u0026#34;interface/human\u0026#34;,\u0026#34;interface/ifactory\u0026#34;],function(e,t){function n(r,i,s){if(!(this instanceof n))throw new Error(\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;);var o=r,u=i,a=s;this.getName=function(){return o},this.getAge=function(){return u},this.getBirthDate=function(){return a},this.toStr=function f(){return \u0026#34;name :\u0026#34;+o+\u0026#34;, age :\u0026#34;+u},t.validate(this,e)}return console.log(\u0026#34;Loaded Person with dependencies : \u0026#34;,e,t),n.prototype.getClassName=function(){return \u0026#34;class: Person\u0026#34;},n}),define(\u0026#34;impl/employee\u0026#34;,[\u0026#34;impl/person\u0026#34;],function(e){function t(n,r,i,s){if(!(this instanceof t))throw new Error(\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;);e.call(this,r,s);var o=n,u=r,a=i;this.getCode=function(){return o},this.getFirstName=function(){return u},this.getLastName=function(){return a},this.toStr=function f(){return \u0026#34;Code:\u0026#34;+o+\u0026#34;, Name :\u0026#34;+u+\u0026#34; \u0026#34;+a}}return console.log(\u0026#34;Loaded Employee\u0026#34;),t.prototype=Object.create(e.prototype),t.prototype.constructor=t,t.prototype.getClassName=function(){var n=e.prototype.getClassName.call(this);return \u0026#34;class: Employee extending \u0026#34;+n},t}),require([\u0026#34;impl/employee\u0026#34;],function(e){console.log(\u0026#34;Loaded Main\u0026#34;);var t=new e(425,\u0026#34;Kuldeep\u0026#34;,\u0026#34;Singh\u0026#34;,31);console.log(t),console.log(t.getClassName())}),define(\u0026#34;main\u0026#34;,function(){}); Include Include the optimized script for the production release.\n\u0026lt;script data-main=\u0026#34;resources/main-build.js\u0026#34; src=\u0026#34;resources/require.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; This completes the course. Happy reading!\n","id":183,"tag":"oops ; object oriented ; javascript ; js ; course ; practice ; language ; technology","title":"Object Oriented JavaScript","type":"post","url":"/post/oojs/"},{"content":"I define software effectiveness as doing the objective effectively; in other words, correctly. 
Efficiency can be defined as using resources optimally, where resources could be memory, CPU, time, files, connections, databases, etc.\nFrom my experience, in most (or should I say many) software projects, efficiency/performance is not emphasized much during system design and the earlier phases (requirements and estimation) compared to the emphasis it gets late in the game: in coding and testing, and mostly in maintenance.\nStressing efficiency during the early SDLC phases can eliminate a lot of problems. If we consider efficiency in, say, the coding phase, we can probably develop an optimal system in which 90% of the code uses just 1% of the CPU at peak load, while 10% of the code uses 99% of the CPU. If we worry about performance only after the system is built, we are in the worst situation; we probably cannot reach that 90% optimal-code level.\nComing back to effectiveness, this is usually the most emphasized topic in all the SDLC phases: we always try to make the system understandable, testable, and maintainable.\nEfficiency generally works against the code quality measures adopted to improve effectiveness; more efficient code is usually more difficult to understand, harder to maintain, and sometimes very hard to test.\nSo, based on the project, we should benchmark/strengthen the SDLC process to balance out efficiency and effectiveness in each phase. Keep in mind that there are always some modules where efficiency is a bigger concern than understandability and maintainability. We may need to change our mindset when working with a highly efficient module, accepting that it will require more effort to understand, maintain, and test.\nThis post is not meant to license writing less effective code on the excuse that, because the code is efficient, it may be less understandable, not testable, and not maintainable. Use better design patterns, prepare the approach and discuss it, and design the efficient modules very carefully.\nThis article was originally published on My LinkedIn Profile\n","id":184,"tag":"effectiveness ; software ; efficiency ; general ; technology","title":"Software Effectiveness vs Software Efficiency","type":"post","url":"/post/software-effectiveness-vs-efficiency/"},{"content":"Hash collisions degrade the performance of HashMap significantly. Java 8 has introduced a new strategy to deal with hash collisions, thus improving the performance of HashMaps. Considering this improvement, existing applications that use HashMaps with a large number of elements can expect performance improvements simply by upgrading to Java 8.\nEarlier, when multiple keys ended up in the same bucket, the values along with their keys were placed in a linked list, and on retrieval the linked list had to be traversed to get the entry. In the worst-case scenario, when all keys are mapped to the same bucket, the lookup time of HashMap increases from O(1) to O(n).\nImprovement/Changes in Java 8 Java 8 has come with the following improvements to HashMap objects in the case of high collisions.\n The alternative String hash function added in Java 7 has been removed. Buckets containing a large number of colliding keys will store their entries in a balanced tree instead of a LinkedList after a certain threshold is reached. The above changes ensure performance of O(log(n)) in worst-case scenarios (when the hash function is not distributing keys properly) and O(1) with a proper hashCode(). 
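To make the collision scenario concrete, here is a minimal sketch of the kind of key that triggers it; the class BrokenKey and the sizes are my own illustration, not the original benchmark code:

import java.util.HashMap;
import java.util.Map;

public class BrokenKeyDemo {
    static final class BrokenKey implements Comparable<BrokenKey> {
        final int id;
        BrokenKey(int id) { this.id = id; }

        @Override public boolean equals(Object o) {
            return o instanceof BrokenKey && ((BrokenKey) o).id == id;
        }
        // Broken on purpose: every key lands in the same bucket.
        @Override public int hashCode() { return 42; }
        // Comparable keys let the Java 8 tree-bin strategy order entries in the bucket.
        @Override public int compareTo(BrokenKey other) { return Integer.compare(id, other.id); }
    }

    public static void main(String[] args) {
        Map<BrokenKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 100_000; i++) {
            map.put(new BrokenKey(i), i);
        }
        // Java 7 walks a 100,000-entry linked list here; Java 8 searches a balanced tree.
        System.out.println(map.get(new BrokenKey(99_999))); // 99999
    }
}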
How is the linked list replaced with a binary tree? In Java 8, HashMap replaces the linked list with a balanced binary tree when the number of elements in a bucket reaches a certain threshold. While converting the list to a tree, the hash code is used as a branching variable: if there are two different hash codes in the same bucket, the bigger one goes to the right of the tree and the other one to the left. But when both hash codes are equal, HashMap assumes that the keys are Comparable and compares the keys to determine the direction, so that some order can be maintained. It is a good practice to make the keys of a HashMap comparable.\nThis JDK 8 change applies only to HashMap, LinkedHashMap and ConcurrentHashMap.\nPerformance Benchmarking Based on a simple experiment of creating HashMaps of different sizes and performing put and get operations by key, results were recorded for the following four cases (the benchmark charts are omitted here):\n HashMap.get() operation with proper hashCode() logic HashMap.get() operation with broken hashCode() logic (the same hash code for all keys) HashMap.put() operation with proper hashCode() logic HashMap.put() operation with broken hashCode() logic (the same hash code for all keys) There is a great improvement for maps with a high number of elements; even when there are collisions, the Java 8 HashMap works much faster than the older versions.\n","id":185,"tag":"performance ; java 8 ; java ; whatsnew ; hashmap ; benchmarking ; technology","title":"HashMap Performance Improvement in Java 8","type":"post","url":"/post/java8-hashmap-performance-improvement/"},{"content":"After Java 8, developers can apply functional programming constructs in a pure object-oriented programming language through lambda expressions. Using lambda expressions, sequential and parallel execution can be achieved by passing behavior into methods. In the Java world, lambdas can be thought of as anonymous methods with a more compact syntax. Here, compact means it is not mandatory to specify access modifiers, the return type, or the parameter types while defining the expression.\nThere are various reasons for the addition of lambda expressions to the Java platform, but the most beneficial of them is that we can easily distribute the processing of a collection over multiple threads. Prior to Java 8, if the processing of the elements in a collection had to be done in parallel, the client code was supposed to perform the necessary steps, not the collection. In Java 8, using lambda expressions and the Stream API, we can pass the processing logic for the elements into methods provided by the collections, and now the collection is responsible for the parallel processing of its elements, not the client. Parallel processing also effectively utilizes the multicore CPUs common nowadays.\nLambda Expression Syntax: (parameters) -\u0026gt; expression or (parameters) -\u0026gt; { statements; } Java 8 provides support for lambda expressions only with functional interfaces. FunctionalInterface is also a new feature introduced in Java 8. Any interface with a single abstract method is called a functional interface. Since there is only one abstract method, there is no confusion in applying the lambda expression to that method.\nBenefits of Lambda Expression Fewer Lines of Code - One of the benefits of using lambda expressions is a reduced amount of code; see the example below. 
// Using Anonymous class Runnable r1 = new Runnable() { \t@Override \tpublic void run() { \tSystem.out.println(\u0026#34;Prior to Java 8\u0026#34;); \t} \t}; // Runnable using lambda expression Runnable r2 = () -\u0026gt; { \tSystem.out.println(\u0026#34;From Java 8\u0026#34;); \t}; As noted, in Java a lambda can be used only with functional interfaces. In the above example Runnable is a functional interface, so we can easily apply a lambda expression here. In this case we are not passing any parameter in the lambda expression because the run() method of the functional interface (Runnable) takes no arguments. The syntax of lambda expressions also allows us to omit the curly braces ({}) when the method body is a single statement; with multiple statements, we should use curly braces as done in the above example. Sequential and Parallel Execution Support by passing behavior into methods Prior to Java 8, processing the elements of any collection was done by obtaining an iterator from the collection, iterating over the elements, and processing each element in turn. If the requirement was to process the elements in parallel, it had to be done by the client code. With the introduction of the Stream API in Java 8, functions can be passed to collection methods, and it is now the responsibility of the collection to process its elements either in a sequential or in a parallel manner.\n // Using for loop for iterating a list \tpublic static void printUsingForLoop(){ \tList\u0026lt;String\u0026gt; stringList = new ArrayList\u0026lt;String\u0026gt;(); \tstringList.add(\u0026#34;Hello\u0026#34;); \tfor (String string : stringList) { \tSystem.out.println(\u0026#34;Content of List is: \u0026#34; + string); \t} \t} // Using forEach and passing lambda expression for iteration \tpublic static void printUsingForEach() { \tList\u0026lt;String\u0026gt; stringList = new ArrayList\u0026lt;String\u0026gt;(); \tstringList.add(\u0026#34;Hello\u0026#34;); \tstringList.stream().forEach((string) -\u0026gt; { \tSystem.out.println(\u0026#34;Content of List is: \u0026#34; + string); \t}); \t} Higher Efficiency (Utilizing Multicore CPUs) Using the Stream API and lambda expressions we can achieve higher efficiency (parallel execution) for bulk operations on collections. Lambda expressions also help in achieving internal iteration of collections rather than external iteration, as shown in the above example. Nowadays we have CPUs with multiple cores, and we can take advantage of them by processing collections in parallel using lambdas.\nLambda Expression and Objects In Java, any lambda expression is an object, as it is an instance of a functional interface. We can assign a lambda expression to any variable and pass it around like any other object. The example below shows how a lambda expression is assigned to a variable, and how it is invoked.\n// Functional Interface @FunctionalInterface public interface TaskComparator { \tpublic boolean compareTasks(int a1, int a2); } public class TaskComparatorImpl { \tpublic static void main(String[] args) { \tTaskComparator myTaskComparator = (int a1, int a2) -\u0026gt; {return a1 \u0026gt; a2;}; \tboolean result = myTaskComparator.compareTasks(5, 2); \tSystem.out.println(result); \t} } Where can Lambdas be applied? Lambda expressions can be used anywhere in Java where we have a target type. 
In Java, we have a target type in the following contexts:\n Variable declarations and assignments Return statements Method or constructor arguments Ref : https://www.javaworld.com/article/3452018/get-started-with-lambda-expressions.html\n","id":186,"tag":"lambda-expression ; java 8 ; java ; whatsnew ; introduction ; tutorial ; language ; technology","title":"Introduction to Java Lambda Expression","type":"post","url":"/post/java8-lambda-expression/"},{"content":"Prior to JDK 8, collections could only be managed through iterators, with the use of for, foreach or while loops. That means we instruct the computer to execute the algorithm step by step.\nint sum(List\u0026lt;Integer\u0026gt; list) { Iterator\u0026lt;Integer\u0026gt; intIterator = list.iterator(); int sum = 0; while (intIterator.hasNext()) { int number = intIterator.next(); if (number \u0026gt; 5) { sum += number; } } return sum; } The above approach has the following drawbacks:\n We need to express how the iteration will take place; external iteration is required, with the client program handling the traversal. The program is sequential; there is no way to run it in parallel. The code is verbose. Java 8 - Stream API Java 8 introduces the Stream API to circumvent the aforementioned shortcomings. It allows us to leverage other changes, e.g., lambda expressions, method references, functional interfaces and the internal iteration introduced via the forEach() method.\nThe above logic can be rewritten in a single line:\nint sum(List\u0026lt;Integer\u0026gt; list) { return list.stream().filter(i -\u0026gt; i \u0026gt; 5).mapToInt(i -\u0026gt; i).sum(); } The streamed content is filtered with the filter method, which takes a predicate as its argument and returns a new instance of Stream\u0026lt;T\u0026gt;. The map function is used to transform each element of the stream into a new element; map returns a Stream\u0026lt;R\u0026gt;, and in the above example we used mapToInt to get an IntStream. Finally, IntStream has a sum operator, which calculates the sum of all elements in the stream, giving us the expected result very cleanly. 
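A quick aside before the samples: because a stream pipeline describes what to compute rather than how to iterate, switching the same computation to parallel execution is a one-word change. Below is a minimal sketch of this; the class name and list contents are illustrative, not part of the original example:\nimport java.util.Arrays; import java.util.List; public class ParallelSumDemo { \tpublic static void main(String[] args) { \tList\u0026lt;Integer\u0026gt; list = Arrays.asList(3, 6, 9, 4, 8); \t// parallelStream() splits the work across the common fork-join pool \tint sum = list.parallelStream().filter(i -\u0026gt; i \u0026gt; 5).mapToInt(i -\u0026gt; i).sum(); \tSystem.out.println(sum); // prints 23 (6 + 9 + 8) \t} } The pipeline stages (filter, mapToInt, sum) stay exactly the same; only the stream source changes. 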
Sample Examples: import java.util.Arrays; import java.util.IntSummaryStatistics; import java.util.List; import java.util.stream.Collectors; public class Java8Streams{ public static void main(String args[]) { List\u0026lt;String\u0026gt; strList = Arrays.asList(\u0026#34;java\u0026#34;, \u0026#34;\u0026#34;, \u0026#34;j2ee\u0026#34;, \u0026#34;\u0026#34;, \u0026#34;spring\u0026#34;, \u0026#34;hibernate\u0026#34;, \u0026#34;webservices\u0026#34;); // Count the strings which start with \u0026#34;j\u0026#34; long count = strList.stream().filter(x -\u0026gt; x.startsWith(\u0026#34;j\u0026#34;)).count(); System.out.printf(\u0026#34;List %s has %d strings which startsWith \u0026#39;j\u0026#39; %n\u0026#34;, strList, count); // Count the strings with length more than 4 count = strList.stream().filter(x -\u0026gt; x.length()\u0026gt; 4).count(); System.out.printf(\u0026#34;List %s has %d strings of length more than 4 %n\u0026#34;, strList, count); // Count the empty strings count = strList.stream().filter(x -\u0026gt; x.isEmpty()).count(); System.out.printf(\u0026#34;List %s has %d empty strings %n\u0026#34;, strList, count); // Remove all empty Strings from List List\u0026lt;String\u0026gt; emptyStrings = strList.stream().filter(x -\u0026gt; !x.isEmpty()).collect(Collectors.toList()); System.out.printf(\u0026#34;Original List : %s, List without Empty Strings : %s %n\u0026#34;, strList, emptyStrings); // Create a List with Strings of more than 4 characters List\u0026lt;String\u0026gt; lstFilteredString = strList.stream().filter(x -\u0026gt; x.length()\u0026gt; 4).collect(Collectors.toList()); System.out.printf(\u0026#34;Original List : %s, filtered list with more than 4 chars: %s %n\u0026#34;, strList, lstFilteredString); // Convert Strings to uppercase and join them using a comma List\u0026lt;String\u0026gt; SAARC = Arrays.asList(\u0026#34;India\u0026#34;, \u0026#34;Japan\u0026#34;, \u0026#34;China\u0026#34;, \u0026#34;Bangladesh\u0026#34;, \u0026#34;Nepal\u0026#34;, \u0026#34;Bhutan\u0026#34;); String SAARCCountries = SAARC.stream().map(x -\u0026gt; x.toUpperCase()).collect(Collectors.joining(\u0026#34;, \u0026#34;)); System.out.println(\u0026#34;SAARC Countries: \u0026#34;+SAARCCountries); // Create a List of the squares of all distinct numbers List\u0026lt;Integer\u0026gt; numbers = Arrays.asList(1, 3, 5, 7, 3, 6); List\u0026lt;Integer\u0026gt; distinct = numbers.stream().map( i -\u0026gt; i*i).distinct().collect(Collectors.toList()); System.out.printf(\u0026#34;Original List : %s, Square Without duplicates : %s %n\u0026#34;, numbers, distinct); //Get count, min, max, sum, and average for numbers List\u0026lt;Integer\u0026gt; lstPrimes = Arrays.asList(3, 5, 7, 11, 13, 17, 19, 21, 23); IntSummaryStatistics stats = lstPrimes.stream().mapToInt((x) -\u0026gt; x).summaryStatistics(); System.out.println(\u0026#34;Highest prime number in List : \u0026#34; + stats.getMax()); System.out.println(\u0026#34;Lowest prime number in List : \u0026#34; + stats.getMin()); System.out.println(\u0026#34;Sum of all prime numbers : \u0026#34; + stats.getSum()); System.out.println(\u0026#34;Average of all prime numbers : \u0026#34; + stats.getAverage()); } } List [java, , j2ee, , spring, hibernate, webservices] has 2 strings which startsWith 'j' List [java, , j2ee, , spring, hibernate, webservices] has 3 strings of length more than 4 List [java, , j2ee, , spring, hibernate, webservices] has 2 empty strings Original List : [java, , j2ee, , spring, hibernate, webservices], List without Empty Strings : [java, j2ee, spring, hibernate, webservices] Original List : [java, , j2ee, , spring, hibernate, webservices], filtered list with more than 4 chars: 
[spring, hibernate, webservices] SAARC Countries: INDIA, JAPAN, CHINA, BANGLADESH, NEPAL, BHUTAN Original List : [1, 3, 5, 7, 3, 6], Square Without duplicates : [1, 9, 25, 49, 36] Highest prime number in List: 23 Lowest prime number in List: 3 Sum of all prime numbers: 119 Average of all prime numbers: 13.222222222222221 API overview The main type in the JDK 8 Stream API package java.util.stream is the Stream interface, a stream over object references; the types IntStream, LongStream, and DoubleStream are streams over the primitive int, long and double types.\nDifferences between Stream and Collection:\n A stream does not store data. An operation on a stream does not modify its source, but simply produces a result. Collections have a finite size, but streams do not. Like an Iterator, a new stream must be generated to revisit the same elements of the source. We can obtain Streams in the following ways:\n From a Collection via the stream() and parallelStream() methods; From an array via Arrays.stream(Object[]); From static factory methods on the stream classes, such as Stream.of(Object[]), IntStream.range(int, int) or Stream.iterate(Object, UnaryOperator); The lines of a file from BufferedReader.lines(); Streams of file paths from methods in Files; Streams of random numbers from Random.ints(); Other stream-bearing methods: BitSet.stream(), Pattern.splitAsStream(java.lang.CharSequence), and JarFile.stream(). Stream operations Intermediate operations Stateless operations, such as filter and map, retain no state from previously seen elements.\nStateful operations, such as distinct and sorted, may carry state from previously seen elements.\nTerminal operations Operations such as Stream.forEach or IntStream.sum may traverse the stream to produce a result or a side-effect. After the terminal operation is performed, the stream pipeline is considered consumed and can no longer be used.\nReduction Operations The general reduction operations are reduce() and collect(), and the special reduction operations are sum(), max(), and count().\nParallelism All stream operations can execute either in serial or in parallel. For example, Collection has the methods Collection.stream() and Collection.parallelStream(), which produce sequential and parallel streams respectively.\nReferences •\thttps://docs.oracle.com/javase/8/docs/api/java/util/stream/package-summary.html\n","id":187,"tag":"stream-api ; java 8 ; java ; whatsnew ; introduction ; tutorial ; language ; technology","title":"Introduction to Java Stream API","type":"post","url":"/post/java8-stream-api/"},{"content":"Presented At: G. Venkat, Senior Vice President of Technology \u0026amp; Solutions at Nagarro, and I spoke at the Oracle OpenWorld / JavaOne conference on Wednesday, October 1, 2014, from 10:00 – 11:00 a.m., Pacific Time, at Ballroom 6 in the Hilton San Francisco at Union Square.\nI supported Venkat and presented the JMod tool demo with him.\nHere is some coverage - https://www.nagarro.com/en/news-events/nagarro-has-java-8-covered-at-oracle-open-world I also led the Nagarro stall and showcased the JMod demo to 300+ visitors.\nThe talk was about how enterprises invest substantial resources in developing and maintaining a portfolio of Java applications to run business-critical processes. To stay competitive, enterprises must regularly enhance Java applications and migrate to newer JDK versions. Security and performance improvements as well as newer APIs and features make these upgrades compelling and necessary. 
However, the upgrade process is often complicated and unpredictable. This presentation shares how to:\n Build tools and automation to manage the modernization of Java applications Automate the identification of migration points within a Java application Create repeatable, predictable modernization processes Eliminate risk and maximize benefits Video Here More With Venkat, we also worked on Java Modernization Tools - https://www.nagarro.com/en/java-modernization Later, Venkat published all these learnings in his book - https://www.amazon.in/Rapid-Modernization-Java-Applications-Enterprise/dp/0071842039\n","id":188,"tag":"java ; java 8 ; transformation ; javaone ; talk ; modernization ; oracle ; nagarro ; event","title":"Contributor - JavaOne - Rapid Modernization to Java8","type":"event","url":"/event/oracle-rapid-modernization-of-java-applications-to-jdk-8/"},{"content":"The vision of IoT can be seen from two perspectives: ‘Internet’ centric and ‘Thing’ centric. The Internet-centric architecture involves internet services being the main focus, while data is contributed by the objects. In the object-centric architecture, the smart objects take center stage.\nIn order to realize the full potential of cloud computing as well as ubiquitous sensing, a combined framework with a cloud at the center seems to be the most viable. This not only gives the flexibility of dividing associated costs in the most logical manner but is also highly scalable. Sensing service providers can join the network and offer their data using a storage cloud; analytic tool developers can provide their software tools; artificial intelligence experts can provide their data mining and machine learning tools useful in converting information to knowledge; and finally, computer graphics designers can offer a variety of visualization tools.\nCloud computing can offer these services as Infrastructure, Platforms or Software, where the full potential of human creativity can be tapped by using them as services. The data generated, the tools used and the visualizations created disappear into the background, tapping the full potential of the Internet of Things in various application domains. The cloud integrates all ends by providing scalable storage, computation time and other tools to build new businesses.\nMajor Platforms – A Quick Snapshot Below is a quick snapshot of the major IoT cloud platforms with their key features:\nAWS IoT Supports HTTP, WebSockets, and MQTT Rules Engine: A rule can apply to data from one or many devices, and it can take one or many actions in parallel. It can route messages to AWS endpoints including AWS Lambda, Amazon Kinesis, Amazon S3, Amazon Machine Learning, Amazon DynamoDB, Amazon CloudWatch, and Amazon Elasticsearch Service with built-in Kibana integration (Kibana is an open source data visualization plugin for Elasticsearch.) Device Shadows: Create a persistent, virtual version, or “shadow,” of each device that includes the device’s latest state. https://developer.amazon.com/blogs/post/Tx3828JHC7O9GZ9/Using-Alexa-Skills-Kit-and-AWS-IoT-to-Voice-Control-Connected-Devices\nAzure IoT Suite Easily integrate Azure IoT Suite with your systems and applications, including Salesforce, SAP, Oracle Database, and Microsoft Dynamics Azure IoT Suite packages together Azure IoT services with preconfigured solutions. Supports HTTP, Advanced Message Queuing Protocol (AMQP), and MQ Telemetry Transport (MQTT). 
Gateway SDK – a framework for creating extensible gateway solutions: code that sends data from multiple devices over a gateway-to-cloud connection https://azure.microsoft.com/en-in/blog/microsoft-azure-iot-suite-connecting-your-things-to-the-cloud/\nGoogle Cloud IoT Take advantage of Google’s heritage of web-scale processing, analytics, and machine intelligence. Utilizes Google\u0026rsquo;s global fiber network (70 points of presence across 33 countries) for ultra-low latency https://cloud.google.com/iot-core/ IBM Watson IoT Machine Learning - Automate data processing and rank data based on learned priorities. Raspberry Pi Support - Develop IoT apps that leverage Raspberry Pi, cognitive capabilities and APIs Real-Time Insights - Contextualize and analyze real-time IoT data https://cloud.ibm.com/catalog/services/internet-of-things-platform\nGE Predix Designed mainly for IIoT platforms. Supports over 60 regulatory frameworks worldwide. Based on Pivotal Cloud Foundry (Cloud Foundry is an open source cloud computing platform as a service (PaaS)) https://www.ge.com/digital/iiot-platform\nBosch IoT Suite Bosch IoT Analytics: The analytics services make analyzing field data much simpler. The anomaly detection service helps investigate problems that occur in connected devices. The usage profiling service can determine typical usage patterns within a group of devices. Bosch IoT Hub: Messaging backbone for device-related communication, acting as the attach point for various protocol connectors Bosch IoT Integrations: Integration with third-party services and systems https://www.bosch-iot-suite.com/capabilities-bosch-iot-suite/\nCisco Jasper IoT Focus on services. Control Center: Provides a vast array of flexible ways to automate your IoT services. https://www.cisco.com/c/en/us/solutions/internet-of-things/iot-control-center.html\nXively The messaging broker supports connections using native MQTT and WebSockets MQTT. Xively provides a C client library for use on devices. Xively provides an application for integrating connected products into the Salesforce Service Cloud. https://xively.com/\nComparative Analysis We have done an extensive study and trials, and here are our findings so far; below is the matrix of parameters versus the different cloud platforms.\nThis paper was also supported by Ram Chauhan\n","id":189,"tag":"framework ; IOT ; platform ; comparison ; Analysis ; Cloud ; Azure ; Google ; IBM ; GE ; Bosch ; technology","title":"IOT Cloud Platforms - A Comparative Study","type":"post","url":"/post/iot_cloud_platform_comparision/"},{"content":"This article explains HTTP integration using Apache Camel. 
Create a configurable router to any HTTP URL with placeholders.\n Prerequisites Windows 7, Eclipse Juno, Java 1.7 Creating the HTTP Router Please follow the steps below to build a POC for HTTP routing using Apache Camel.\nCreate a Sample Camel Example Create an eclipse project using the archetype “camel-archetype-spring”\nmvn archetype:generate -DarchetypeGroupId=com.tk.poc.camel -DarchetypeArtifactId=camel-archetype-spring -DarchetypeVersion=2.11.0 -DarchetypeRepository=https://repository.apache.org/content/groups/snapshots-group Create the following Java file\npackage com.tk.poc.camel; import org.apache.camel.Endpoint; import org.apache.camel.Exchange; import org.apache.camel.Processor; import org.apache.camel.Producer; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.spring.Main; public class MyRouteBuilder extends RouteBuilder { /** * Allow this route to be run as an application */ public static void main(String[] args) throws Exception { new Main().run(); } public void configure() { } } pom.xml – add the following dependencies\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.camel\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;camel-http\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.11.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.camel\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;camel-stream\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.11.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; Clean up src/main/resources/META-INF/spring/camel-context.xml\n\u0026lt;camel:camelContext xmlns=\u0026#34;http://camel.apache.org/schema/spring\u0026#34;\u0026gt; \u0026lt;camel:package\u0026gt;com.tk.poc.camel\u0026lt;/camel:package\u0026gt; \u0026lt;/camel:camelContext\u0026gt; Run mvn eclipse:eclipse to include the dependencies in Eclipse’s classpath.\nmvn eclipse:eclipse Create Command Configurations Let’s configure the commands in a file. Commands are defined in a text file where every line represents a command in the following format:\n\u0026lt;command-key\u0026gt;###\u0026lt;number of placeholders\u0026gt;###\u0026lt;URL with placeholders if any in format “{index of placeholder starting from 0}”\u0026gt; commands.txt\n start###0###http://localhost:8081/home-page/poc.jsp search###1###http://localhost:8081/home-page/poc.jsp?search={0} clean###0###http://localhost:8081/home-page/logout.jsp Change the main method of MyRouteBuilder.java\nprivate static Map\u0026lt;String, String\u0026gt; commandMap = new HashMap\u0026lt;String, String\u0026gt;(); private static StringBuilder commandText = new StringBuilder(\u0026#34;Commands:\\n\u0026#34;); public static void main(String[] args) throws Exception { String filePath = \u0026#34;./commands.txt\u0026#34;; if(args.length\u0026gt;0){ filePath = args[0]; } try (FileReader fileReader = new FileReader(filePath); BufferedReader reader = new BufferedReader(fileReader);) { String line = reader.readLine(); while (line != null) { String[] keyVal = line.split(\u0026#34;###\u0026#34;); commandMap.put(keyVal[0], keyVal[2]); commandText.append(\u0026#34;\\t\u0026#34;).append(keyVal[0]); if(!\u0026#34;0\u0026#34;.equals(keyVal[1])){ commandText.append(\u0026#34; (require \u0026#34;).append(keyVal[1]).append(\u0026#34; parameter)\u0026#34;); } commandText.append(\u0026#34;\\n\u0026#34;); line = reader.readLine(); } commandText.append(\u0026#34;\\texit\u0026#34;); 
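// At this point commandMap maps each command key to its URL template (with {n} placeholders), and commandText holds the menu text shown at the console prompt. 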
commandText.append(\u0026#34;\\nEnter command and required parameters:\u0026#34;); } new Main().run(); } Configure the Camel Routes We will configure the routes so that we take input (a command) from the console, fetch the HTTP URL associated with the command, route the request to that URL, and route the response to the console as well as to a log file. Define the configure method as follows:\npublic void configure() { from(\u0026#34;stream:in?promptMessage=\u0026#34; + URLEncoder.encode(commandText.toString())).process(new Processor() { \t@Override \tpublic void process(Exchange exchange) throws Exception { \tString input = exchange.getIn().getBody(String.class); \tboolean success = true; \tif (input.trim().length() \u0026gt; 0) { \tif(\u0026#34;exit\u0026#34;.equals(input)){ \tSystem.exit(0); \t} \tString[] commandAndParams = input.split(\u0026#34; \u0026#34;); \turl = commandMap.get(commandAndParams[0]); \tif (url != null) { \tif (commandAndParams.length \u0026gt; 1) { \tString [] params = Arrays.copyOfRange(commandAndParams, 1, commandAndParams.length); url = MessageFormat.format(url, (Object[])params); \t}\t\texchange.getOut().setHeader(Exchange.HTTP_URI, url); \t} else { \tsuccess = false; \t} \t} else { \tsuccess = false; \t} \tif (!success) { \tthrow new Exception(\u0026#34;Error in input\u0026#34;); \t} \t} \t}).to(\u0026#34;http://localhost:99/\u0026#34;).process(new Processor() { \t@Override \tpublic void process(Exchange exchange) throws Exception { \tStringBuffer stringBuffer = new StringBuffer(\u0026#34;\\n\\nURL :\u0026#34; + url); \tstringBuffer.append(\u0026#34;\\n\\nHeader :\\n\u0026#34;); \tMap\u0026lt;String, Object\u0026gt; headers = exchange.getIn().getHeaders(); \tfor (String key : headers.keySet()) { \tif (!key.startsWith(\u0026#34;camel\u0026#34;)) { \tstringBuffer.append(key).append(\u0026#34;=\u0026#34;) \t.append(headers.get(key)).append(\u0026#34;\\n\u0026#34;); \t} \t} \tString body = exchange.getIn().getBody(String.class); \tstringBuffer.append(\u0026#34;\\nBody:\\n\u0026#34; + body); \texchange.getOut().setBody(stringBuffer.toString()); \texchange.getOut().setHeader(Exchange.FILE_NAME, \tconstant(\u0026#34;response.txt\u0026#34;)); \t} \t}).to(\u0026#34;file:.?fileExist=Append\u0026#34;).to(\u0026#34;stream:out\u0026#34;); } Export as a runnable jar Right-click on the project and export a runnable jar; name it http.jar\nRun the example Run the command “java -jar http.jar \u0026lt;commands file path\u0026gt;”; if you don’t specify a commands file, it will automatically pick commands.txt from the same folder and display the following: D:\\HTTPClient\u0026gt;java -jar http.jar Commands: start search (require 1 parameter) clean exit Enter command and required parameters: Enter the command “start” Enter command and required parameters:start URL : http://localhost:8081/home-page/poc.jsp Header : … Body: .. 
Commands: start search (require 1 parameter) clean exit Enter command and required parameters: Enter the command “search” with a parameter Enter command and required parameters:search IBM URL : http://localhost:8081/home-page/poc.jsp?search=IBM Header : … Body: … Commands: start search (require 1 parameter) clean exit Enter command and required parameters: You may continue with the menu, or call clean or exit Enter command and required parameters:clean URL : http://localhost:8081/home-page/logout.jsp … Enter command and required parameters:exit D:\\Ericsson\\HTTPClient\u0026gt; Conclusion We have developed the router using Camel, and the same can be utilized directly in a project.\n","id":190,"tag":"java ; Apache Camel ; enterprise ; integration ; technology","title":"HTTP Router with Apache Camel ","type":"post","url":"/post/apache-camel-http-route/"},{"content":"This article is about troubleshooting issues we faced while using Apache Camel’s SSH routes. It also provides a step-wise guide to setting up Apache Camel routes.\nPrerequisites Windows 7 Eclipse Juno Java 1.7 Problem Statement We were getting the following issues in the logs when connecting to one of the SSH servers using Apache Camel-SSH. This was happening on one of the instances in the production environment.\nHere are a few logs :\nServer A :\n Session created... 2013-11-28 18:12:44:855 | DEBUG [MSC service thread 1-15] Connected to 2013-11-28 18:12:44:855 | DEBUG [MSC service thread 1-15] Attempting to authenticate username ******* using Password... 2013-11-28 18:12:44:857 | INFO [NioProcessor-2] Server version string: SSH-2.0-SSHD-CORE-0.6.0 2013-11-28 18:12:45:165 | DEBUG [ClientInputStreamPump] Send SSH_MSG_CHANNEL_DATA on channel 0 2013-11-28 18:12:45:166 | DEBUG [ClientInputStreamPump] Send SSH_MSG_CHANNEL_EOF on channel 0 2013-11-28 18:12:45:178 | DEBUG [NioProcessor-2] Received packet SSH_MSG_CHANNEL_FAILURE 2013-11-28 18:12:45:178 | DEBUG [NioProcessor-2] Received SSH_MSG_CHANNEL_FAILURE on channel 0 2013-11-28 18:12:45:193 | DEBUG [NioProcessor-2] Closing session 2013-11-28 18:12:45:193 | DEBUG [NioProcessor-2] Closing channel 0 2013-11-28 18:12:45:193 | DEBUG [NioProcessor-2] Closing channel 0 immediately 2013-11-28 18:12:45:194 | DEBUG [NioProcessor-2] Closing IoSession 2013-11-28 18:12:45:194 | DEBUG [NioProcessor-2] IoSession closed 2013-11-28 18:12:45:195 | INFO [NioProcessor-2] Session ***@/***:** closed Server B :\n2013-11-29 13:53:09:280 | INFO [NioProcessor-2] Session created... 2013-11-29 13:53:09:297 | DEBUG [MSC service thread 1-13] Connected to 2013-11-29 13:53:09:297 | DEBUG [MSC service thread 1-13] Attempting to authenticate username ***** using Password... 
2013-11-29 13:53:09:300 | INFO [NioProcessor-2] Server version string: SSH-2.0-OpenSSH_5.3 2013-11-29 13:53:09:472 | DEBUG [ClientInputStreamPump] Space available for client remote window 2013-11-29 13:53:09:472 | DEBUG [ClientInputStreamPump] Send SSH_MSG_CHANNEL_DATA on channel 0 2013-11-29 13:53:09:472 | DEBUG [ClientInputStreamPump] Send SSH_MSG_CHANNEL_EOF on channel 0 2013-11-29 13:53:09:494 | DEBUG [NioProcessor-2] Received packet SSH_MSG_CHANNEL_DATA 2013-11-29 13:53:09:494 | DEBUG [NioProcessor-2] Received SSH_MSG_CHANNEL_DATA on channel 0 2013-11-29 13:53:09:494 | DEBUG [NioProcessor-2] Received packet SSH_MSG_CHANNEL_EOF 2013-11-29 13:53:09:494 | DEBUG [NioProcessor-2] Received SSH_MSG_CHANNEL_EOF on channel 0 2013-11-29 13:53:09:495 | DEBUG [NioProcessor-2] Received packet SSH_MSG_CHANNEL_REQUEST 2013-11-29 13:53:09:495 | INFO [NioProcessor-2] Received SSH_MSG_CHANNEL_REQUEST on channel 0 2013-11-29 13:53:09:495 | DEBUG [NioProcessor-2] Received packet SSH_MSG_CHANNEL_CLOSE 2013-11-29 13:53:09:495 | DEBUG [NioProcessor-2] Received SSH_MSG_CHANNEL_CLOSE on channel 0 2013-11-29 13:53:09:495 | DEBUG [NioProcessor-2] Send SSH_MSG_CHANNEL_CLOSE on channel 0 Solving the Chaos! We took the following steps to resolve the issues:\n Validate SSH - Checked the SSH connection with PuTTY; it was working fine and we were able to execute the commands.\n Analyze Logs - The SSH_MSG_CHANNEL_FAILURE error was occurring on server A, but everything worked fine when connecting to server B. We noticed the following logs:\nServer A:\n2013-11-28 18:12:44:857 | INFO [NioProcessor-2] Server version string: SSH-2.0-SSHD-CORE-0.6.0 Server B:\n 2013-11-29 13:53:09:300 | INFO [NioProcessor-2] Server version string: SSH-2.0-OpenSSH_5.3 So different SSH servers are being used.\nResolution 1 – Use the same version of the SSH server Install SSHD-CORE-0.6.0 on Server A / the local system\n Download http://mina.apache.org/sshd-project/download_0.6.0.html Extract the downloaded zip\n Go to /bin\n Run the server : Run sshd.bat (Windows) or sshd.sh (Linux)\n Run the client: ssh.bat -l [loginuser] -p [port] [hostname] [command]\n ssh.bat -l login -p 8000 localhost Enter the password, which is the same as the login name “login” It will open up the OS shell where you can run commands.\n You can also try connecting over SSH using Putty.exe on the same host and port.\n So now our new SSH server is ready to use, but if we try an SSH execution channel via ssh.sh or ssh.bat with some command, it always fails with\nssh.bat -l login -p 8000 localhost dir Exception in thread \u0026#34;main\u0026#34; java.lang.IllegalArgumentException: command must not be null at org.apache.sshd.client.channel.ChannelExec.\u0026lt;init\u0026gt;(ChannelExec.java:35) at org.apache.sshd.client.session.ClientSessionImpl.createExecChannel(ClientSessionImpl.java:175) at org.apache.sshd.client.session.ClientSessionImpl.createChannel(ClientSessionImpl.java:160) at org.apache.sshd.client.session.ClientSessionImpl.createChannel(ClientSessionImpl.java:153) at org.apache.sshd.SshClient.main(SshClient.java:402) 53352 [NioProcessor-2] INFO org.apache.sshd.client.session.ClientSessionImpl - Closing session This is because the execution channel on the client side is not well implemented; only the SCP command has been implemented. 
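Note that this failure comes from the bundled command-line client (its main method never forwards the command), not necessarily from the client API itself, which Camel uses underneath. For the curious, here is a minimal sketch of driving the exec channel directly against the sshd-core 0.6.0 client API; the class names follow the stack trace above, but treat the package layout and exact call sequence as assumptions rather than a verified recipe:\nimport org.apache.sshd.ClientChannel; import org.apache.sshd.ClientSession; import org.apache.sshd.SshClient; public class ExecChannelDemo { \tpublic static void main(String[] args) throws Exception { \tSshClient client = SshClient.setUpDefaultClient(); \tclient.start(); \t// host, port and credentials match the local SSHD server started above \tClientSession session = client.connect(\u0026#34;localhost\u0026#34;, 8000).await().getSession(); \tsession.authPassword(\u0026#34;login\u0026#34;, \u0026#34;login\u0026#34;).await(); \t// an exec channel carries a single command, unlike the interactive shell channel \tClientChannel channel = session.createExecChannel(\u0026#34;ls\u0026#34;); \tchannel.setOut(System.out); \tchannel.setErr(System.err); \tchannel.open().await(); \tchannel.waitFor(ClientChannel.CLOSED, 0); \tclient.stop(); \t} } 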
For now, we can ignore this client error and try the Camel example with the new SSH server.\n Try Camel-SSH with the new SSH Server (SSHD-CORE) Create a Sample Camel-SSH Example Create an eclipse project using the archetype “camel-archetype-spring\u0026quot; mvn archetype:generate -DarchetypeGroupId=com.tk.poc.camel -DarchetypeArtifactId=camel-archetype-spring -DarchetypeVersion=2.11.0 -DarchetypeRepository=https://repository.apache.org/content/groups/snapshots-group Create the following Java file\n package com.tk.poc.camel; import org.apache.camel.Endpoint; import org.apache.camel.Exchange; import org.apache.camel.Processor; import org.apache.camel.Producer; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.spring.Main; /** * A simple example router that picks up a command file from the file system and sends its content to an SSH endpoint * * @version */ public class MyRouteBuilder extends RouteBuilder { private static String source = \u0026#34;\u0026#34;; private static String url = \u0026#34;\u0026#34;; /** * Allow this route to be run as an application */ public static void main(String[] args) throws Exception { if (args.length \u0026gt; 0) { source = args[0]; url = args[1]; } String[] ar = {}; new Main().run(ar); } public void configure() { from(\u0026#34;file:\u0026#34; + source + \u0026#34;?noop=true\u0026amp;consumer.useFixedDelay=true\u0026amp;consumer.delay=60000\u0026#34;).process(new Processor() { public void process(Exchange exchange) throws Exception { Endpoint ePoint = exchange.getContext().getEndpoint(url); Producer producer = ePoint.createProducer(); producer.process(exchange); } }).bean(new SomeBean()); } public static class SomeBean { public void someMethod(String body) { System.out.println(\u0026#34;Received: \u0026#34; + body); } } } Pom.xml - dependencies\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.camel\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;camel-ssh\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.11.0\u0026lt;/version\u0026gt; \u0026lt;!-- use the same version as your Camel core version --\u0026gt; \u0026lt;/dependency\u0026gt; Clean up src/main/resources/META-INF/spring/camel-context.xml\n\u0026lt;camel:camelContext xmlns=\u0026#34;http://camel.apache.org/schema/spring\u0026#34;\u0026gt; \u0026lt;camel:package\u0026gt;com.tk.poc.camel\u0026lt;/camel:package\u0026gt; \u0026lt;/camel:camelContext\u0026gt; Run mvn eclipse:eclipse to include the dependencies in Eclipse’s classpath.\nmvn eclipse:eclipse You can now run MyRouteBuilder.java directly in eclipse with the required arguments:\n Source – a folder path, where the folder contains a text file with the command to execute on the server.\n Camel URL for the SSH component\n ssh://[username]@[host]?password=[password]\u0026amp;port=[port] Let’s export the project as an executable jar, keeping the dependencies in a separate folder\n Let’s create a run.bat\n java -jar ssh.jar \u0026#34;D:/TK/Testing/data\u0026#34; \u0026#34;ssh://login@localhost?password=login\u0026amp;port=8000\u0026#34; pause Place the run.bat in the directory where the jar has been exported.\n Run the Camel SSH example Execute run.bat We get the same error as we see in production, so the problem is now reproducible locally. 
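As an aside: if the endpoint is fixed rather than passed in as an argument, the processor-based route in MyRouteBuilder above can be collapsed into plain route DSL. A sketch under the same assumptions (local SSHD on port 8000, credentials login/login; log:ssh.result is just the camel-core log component standing in for SomeBean):\nimport org.apache.camel.builder.RouteBuilder; public class SshRouteBuilder extends RouteBuilder { \t@Override \tpublic void configure() { \t// poll the data folder for command files and execute their content over SSH \tfrom(\u0026#34;file:data?noop=true\u0026amp;consumer.delay=60000\u0026#34;).to(\u0026#34;ssh://login@localhost?password=login\u0026amp;port=8000\u0026#34;).to(\u0026#34;log:ssh.result\u0026#34;); \t} } 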
Resolution 2: Implement Exec channel support in the SSH component After looking at the component code, we found that only the SCP command had been implemented; the EXEC channel was not working correctly.\nExtract as an eclipse project Download the camel-ssh source and import it as an eclipse project\n Update its pom.xml as follows\n \u0026lt;project xmlns=\u0026#34;http://maven.apache.org/POM/4.0.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; \txsi:schemaLocation=\u0026#34;http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\u0026#34;\u0026gt; \t\u0026lt;modelVersion\u0026gt;4.0.0\u0026lt;/modelVersion\u0026gt; \t\u0026lt;groupId\u0026gt;org.apache.camel\u0026lt;/groupId\u0026gt; \t\u0026lt;artifactId\u0026gt;camel-ssh\u0026lt;/artifactId\u0026gt; \t\u0026lt;packaging\u0026gt;jar\u0026lt;/packaging\u0026gt; \t\u0026lt;version\u0026gt;2.11.0\u0026lt;/version\u0026gt; \t\u0026lt;name\u0026gt;Camel :: SSH\u0026lt;/name\u0026gt; \t\u0026lt;description\u0026gt;Camel SSH support\u0026lt;/description\u0026gt; \t\u0026lt;properties\u0026gt; \t\u0026lt;camel.osgi.export.pkg\u0026gt;org.apache.camel.component.ssh.*\u0026lt;/camel.osgi.export.pkg\u0026gt; \t\u0026lt;camel.osgi.export.service\u0026gt;org.apache.camel.spi.ComponentResolver;component=ssh\u0026lt;/camel.osgi.export.service\u0026gt; \t\u0026lt;/properties\u0026gt; \t\u0026lt;dependencies\u0026gt; \t\u0026lt;dependency\u0026gt; \t\u0026lt;groupId\u0026gt;org.apache.camel\u0026lt;/groupId\u0026gt; \t\u0026lt;artifactId\u0026gt;camel-core\u0026lt;/artifactId\u0026gt; \t\u0026lt;version\u0026gt;2.11.0\u0026lt;/version\u0026gt; \t\u0026lt;/dependency\u0026gt; \t\u0026lt;dependency\u0026gt; \t\u0026lt;groupId\u0026gt;org.apache.mina\u0026lt;/groupId\u0026gt; \t\u0026lt;artifactId\u0026gt;mina-core\u0026lt;/artifactId\u0026gt; \t\u0026lt;version\u0026gt;2.0.7\u0026lt;/version\u0026gt; \t\u0026lt;/dependency\u0026gt; \t\u0026lt;dependency\u0026gt; \t\u0026lt;groupId\u0026gt;org.apache.sshd\u0026lt;/groupId\u0026gt; \t\u0026lt;artifactId\u0026gt;sshd-core\u0026lt;/artifactId\u0026gt; \t\u0026lt;version\u0026gt;0.6.0\u0026lt;/version\u0026gt; \t\u0026lt;/dependency\u0026gt; \t\u0026lt;!-- test dependencies --\u0026gt; \t\u0026lt;dependency\u0026gt; \t\u0026lt;groupId\u0026gt;bouncycastle\u0026lt;/groupId\u0026gt; \t\u0026lt;artifactId\u0026gt;bcprov-jdk15\u0026lt;/artifactId\u0026gt; \t\u0026lt;version\u0026gt;140\u0026lt;/version\u0026gt; \t\u0026lt;scope\u0026gt;compile\u0026lt;/scope\u0026gt; \t\u0026lt;/dependency\u0026gt; \t\u0026lt;dependency\u0026gt; \t\u0026lt;groupId\u0026gt;org.slf4j\u0026lt;/groupId\u0026gt; \t\u0026lt;artifactId\u0026gt;slf4j-log4j12\u0026lt;/artifactId\u0026gt; \t\u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \t\u0026lt;version\u0026gt;1.6.1\u0026lt;/version\u0026gt; \t\u0026lt;/dependency\u0026gt; \t\u0026lt;/dependencies\u0026gt; \u0026lt;/project\u0026gt; Update the component - Change “SshEndpoint.java”\nFrom :\npublic SshResult sendExecCommand(String command) throws Exception { SshResult result = null; ... ... ClientChannel channel = session.createChannel(ClientChannel.CHANNEL_EXEC, command); ByteArrayInputStream in = new ByteArrayInputStream(new byte[]{0}); channel.setIn(in); ... ... } To:\n public SshResult sendExecCommand(String command) throws Exception { SshResult result = null; ... ... 
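// Key change: open a SHELL channel instead of an EXEC channel and stream the command into the shell standard input; the shell must then be terminated explicitly (see Resolution 3 below). 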
ClientChannel channel = session.createChannel(ClientChannel.CHANNEL_SHELL); log.debug(\u0026#34;Command : \u0026#34; + command); InputStream inputStream = new ByteArrayInputStream(command.getBytes()); channel.setIn(inputStream); ... ... } Build Again - Build the jar\nmvn install Try Camel-SSH Again Use the updated jar - Rebuild the Camel SSH example or place the camel-ssh-2.11.0.jar in the ssh_lib folder.\n Specify the shell termination command\n/data/command.txt\nls exit Execute run.bat\nIt returns the result of the command on the server\nSH_MSG_CHANNEL_EXTENDED_DATA on channel 0 [ NioProcessor-2] ClientSessionImpl DEBUG Received packet SSH_MSG_CHANNEL_EOF [ NioProcessor-2] ChannelShell INFO Received SSH_MSG_CHANNEL_EOF on channel 0 [ NioProcessor-2] ClientSessionImpl DEBUG Received packet SSH_MSG_CHANNEL_REQUEST [ NioProcessor-2] ChannelShell INFO Received SSH_MSG_CHANNEL_REQUEST on channel 0 [ NioProcessor-2] ClientSessionImpl DEBUG Received packet SSH_MSG_CHANNEL_CLOSE [ NioProcessor-2] ChannelShell INFO Received SSH_MSG_CHANNEL_CLOSE on channel 0 [ NioProcessor-2] ChannelShell INFO Send SSH_MSG_CHANNEL_CLOSE on channel 0 Received: 2 anaconda-ks.cfg key.pem Kuldeep logfile Music nohup.out Videos Another Issue on the Server We then faced an issue where the channel was not being closed and the log kept printing.\nResolution 3: Identify the termination command of the shell We needed to identify the termination command for the shell on the server; we noticed that the termination command of the server’s default shell was “exit;” instead of “exit”. We updated /data/command.txt accordingly: /data/command.txt\n ls exit; and things worked for us.\nSo here you have a customized Apache Camel component.\n","id":191,"tag":"Apache Camel ; java ; ssh ; tutorial ; customization ; technology","title":"Apache Camel SSH Component","type":"post","url":"/post/apache-camel-ssh-component/"},{"content":"This article helps you get started with Groovy, with a step-by-step guide and a series of examples. It starts with an overview and then covers the examples in detail.\nGroovy Overview Groovy is an object-oriented scripting language which brings dynamic, easy-to-use and integration capabilities to the Java Virtual Machine. It absorbs most of its syntax from Java and is much more powerful in terms of functionality, which manifests itself in the form of closures, dynamic typing, builders, etc. Groovy also provides simplified APIs for accessing databases and XML.\nGroovy is almost compatible with Java, e.g. almost every Java construct is valid Groovy code, which makes Groovy easy to adopt for an experienced Java programmer.\nFeatures While simplicity and ease of use are the leading principles of Groovy, here are a few nice features:\n Groovy allows changing classes and methods at runtime. For example, if a class does not have a certain method and this method is called by another class, the called class can decide what to do with this call. Groovy does not require semicolons to separate statements if they are on different lines (separated by newlines). Groovy has list processing and regular expressions built directly into the language. Groovy also implements the Builder pattern, which makes it easy to create GUIs, XML documents or Ant tasks. Asserts in Groovy will always be executed. Working with Groovy We will create basic initial examples and understand the syntax and semantics of Groovy as a scripting language.\nHello Groovy World This example shows how to write a Groovy program. 
It is a basic program to demo the Groovy code style.\n A Groovy source file ends with the extension .groovy, and all Groovy classes are public by default. In Groovy, all fields of a class have the access modifier \u0026ldquo;private\u0026rdquo; by default. package com.tk.groovy.poc class GroovyHelloWorld { \t// declaring the string variables \t/** The first name. */ \tString firstName \tString lastName \t/** * Main. * * @param args the args * This is how the main function is defined in groovy */ \tstatic void main(def args) { \tGroovyHelloWorld p = new GroovyHelloWorld() \tp.firstName = \u0026#34;Kuldeep\u0026#34; \tp.lastName = \u0026#34;Singh\u0026#34; \tprintln(\u0026#34;Hello \u0026#34; + p.firstName + \u0026#34; \u0026#34; + p.lastName);\t} } Encapsulation in Groovy Groovy doesn’t force you to match the file name to the name of the public class you define. In Groovy all fields of a class have the access modifier \u0026ldquo;private\u0026rdquo; by default, and Groovy automatically provides getter and setter methods for the fields. Therefore all Groovy classes are JavaBeans by default.\npackage com.tk.groovy.poc; class Person { \tString firstName; \tString lastName; \t\tstatic main(def args) { \t// creating an instance of the Person class \tPerson person = new Person(); \t// groovy auto-generates the setter and getter methods of the fields \tperson.setFirstName(\u0026#34;Kuldeep\u0026#34;); \tperson.setLastName(\u0026#34;Singh\u0026#34;) \t// calling the auto-generated getter methods \tprintln(person.getFirstName() + \u0026#34; \u0026#34; + person.getLastName()) \t} } Constructors Groovy also directly creates constructors in which you can specify the elements you would like to set during construction. Groovy achieves this by using the default constructor and then calling the setter methods for the attributes. package com.tk.groovy.poc class GroovyPerson { \tString firstName \tString lastName \tstatic void main(def args) { \t//Groovy default constructor \tGroovyPerson defaultPerson = new GroovyPerson(); \t\t// 1 generated Constructor \t// a similar constructor is generated for lastName as well \tGroovyPerson personWithFirstNameOnly = new GroovyPerson(firstName :\u0026#34;Kuldeep\u0026#34;) \tprintln(personWithFirstNameOnly.getFirstName()); \t\t// 2 generated Constructor \tGroovyPerson person = new GroovyPerson(firstName :\u0026#34;Kuldeep\u0026#34;, lastName :\u0026#34;Singh\u0026#34;) \tprintln(person.firstName + \u0026#34; \u0026#34; + person.lastName) \t} } Groovy Closures A closure in Groovy is an anonymous chunk of code that may take arguments, return a value, and reference and use variables declared in its surrounding scope. In many ways it resembles anonymous inner classes in Java, and closures are often used in Groovy in the same way that Java developers use anonymous inner classes. However, Groovy closures are much more powerful than anonymous inner classes, and far more convenient to specify and use.\nImplicit variable: an ‘it’ variable is defined, which has a special meaning; the closure in this case takes a single argument.\npackage com.tk.groovy.poc class GroovyClosures { \t/** * Main Function. 
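* Demonstrates defining closures, invoking them, and using the implicit \u0026#39;it\u0026#39; parameter. 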
* * @param args the args */ \tdef static main(args) { \t/** * Defining the closure printSum * * -\u0026gt; : is a special keyword here */ \tdef printSum = { a, b -\u0026gt; println a+b } \tprintSum(5,7) \t/** * Closure with implicit argument */ \tdef implicitArgClosure = { println it } \timplicitArgClosure( \u0026#34;I am using the implicit variable \u0026#39;it\u0026#39;\u0026#34; ) \t} } Closures are used very heavily in Groovy. The above example is a very basic one; for a deeper understanding of closures, kindly read : http://groovy.codehaus.org/Closures+-+Formal+Definition\nMethods and Collections The program below shows how to declare methods, how to use collections in Groovy, and some basic operations on collections.\n Groovy has native language support for collections: lists, maps, sets, arrays, etc. Each collection can be iterated either by assigning the value to a named variable or by using the implicit variable ‘it’. Groovy also has a Range collection which can be used directly to include all values in a range. A Groovy closure is like a \u0026ldquo;code block\u0026rdquo; or a method pointer. It is a piece of code that is defined and then executed at a later point. It has some special properties like implicit variables, support for currying and support for free variables. package com.tk.groovy.poc class GroovyCollections { \tdef listOperations(){ \tprintln(\u0026#34;Operation over LIST: \u0026#34;) \t// defining the Integer type list using generics \tList list = [1, 2, 3, 4] \t// Note this is the same as the declaration below, \t// where we explicitly declare the type of the list \t//List\u0026lt;Integer\u0026gt; list = [1, 2, 3, 4] \tprintln(\u0026#34;Iterating the list using variable assignment: \u0026#34;) \t// using the .each method to iterate over the list \t// \u0026#39;itemVar\u0026#39; is implicitly declared as a variable of the same type as the list \t// within the each block, values will be assigned to \u0026#39;itemVar\u0026#39; \t// this is the syntax of a groovy Closure \tlist.each{itemVar-\u0026gt; print itemVar + \u0026#34;, \u0026#34; } \tprintln(\u0026#34;\u0026#34;) \tprintln(\u0026#34;Iterating the list using implicit variable \u0026#39;it\u0026#39;: \u0026#34;) \t// instead of declaring the variable explicitly, directly use \t// the implicit variable \u0026#39;it\u0026#39; \tlist.each{print it + \u0026#34;, \u0026#34;} \t} \tdef mapOpertion(){ \tprintln(\u0026#34;Operation over MAP: \u0026#34;) \tdef map = [\u0026#34;Java\u0026#34;:\u0026#34;server\u0026#34;, \u0026#34;Groovy\u0026#34;:\u0026#34;server\u0026#34;, \u0026#34;JavaScript\u0026#34;:\u0026#34;web\u0026#34;] \tprintln(\u0026#34;Iterating the map using variable assignment: \u0026#34;) \tmap.each{k,v-\u0026gt; \tprint k + \u0026#34;, \u0026#34; \tprintln v \t} \tprintln(\u0026#34;\u0026#34;) \tprintln(\u0026#34;Iterating the map using implicit variable \u0026#39;it\u0026#39;: \u0026#34;) \tmap.each{ \tprint it.key + \u0026#34;, \u0026#34; \tprintln it.value \t} \t} \tdef setOperations(){ \tSet s1 = [\u0026#34;a\u0026#34;, 2, \u0026#34;a\u0026#34;, 2, 3] \tprintln(\u0026#34;Iterating the set using variable assignment: \u0026#34;) \ts1.each{setVar-\u0026gt; print setVar + \u0026#34;, \u0026#34;} \tprintln(\u0026#34;\u0026#34;) \tprintln(\u0026#34;Iterating the set using implicit variable \u0026#39;it\u0026#39;: \u0026#34;) \ts1.each{ print it + \u0026#34;, \u0026#34;} \t} \tdef rangeOperations() { \tdef range = 1..50 \t//get the end points of the range \tprintln() \tprintln (\u0026#34;Range starts from: \u0026#34; + range.from + \u0026#34; -- Range ends to: \u0026#34; + 
range.to) \tprintln(\u0026#34;Iterating the range using variable assignment: \u0026#34;) \trange.each{rangeVar-\u0026gt; print rangeVar + \u0026#34;, \u0026#34;} \tprintln(\u0026#34;\u0026#34;) \tprintln(\u0026#34;Iterating the range using implicit variable \u0026#39;it\u0026#39;: \u0026#34;) \trange.each{ print it + \u0026#34;, \u0026#34;} \t} \t/** * Main Method * * @param args the args * * This is how we declare a static method in groovy * by using the static keyword */ \tstatic main(def args){ \tGroovyCollections groovyCol = new GroovyCollections() \tgroovyCol.listOperations(); \tprintln(\u0026#34;\u0026#34;) \tgroovyCol.mapOpertion(); \tprintln(\u0026#34;\u0026#34;) \tgroovyCol.setOperations(); \tprintln(\u0026#34;\u0026#34;) \tgroovyCol.rangeOperations(); \t} }  File operations \u0026amp; Exception handling Exceptions in Groovy: Groovy doesn\u0026rsquo;t actually have checked exceptions. Groovy will pass an exception to the calling code until it is handled, but we don\u0026rsquo;t have to declare it in our method signature. package com.tk.groovy.poc class GroovyFileOperations { \tprivate def writeFile(def String filePath, def String content) { \ttry { \tdef file = new File(filePath); \t// declaring the out variable using a groovy closure \t// writing content using groovy\u0026rsquo;s implicit out variable \tfile.withWriter { out -\u0026gt; \tout.writeLine(content) \t} \t// an untyped catch parameter catches any exception \t} catch (all) { \t// print the exception \tprintln(all) \t} \t} \tprivate def readFile(def String filePath) { \tdef file = new File(filePath); \tfile.eachLine { line -\u0026gt; println(line) } \t} \tdef static main(def args) { \tdef filePath = \u0026#34;LICENCE.txt\u0026#34;; \tdef content = \u0026#34;testContent\u0026#34; \tGroovyFileOperations fileOp = new GroovyFileOperations(); \tprintln(\u0026#34;Writing content in the file\u0026#34;) \tfileOp.writeFile(filePath, content); \tprintln(\u0026#34;Content from the file\u0026#34;) \tfileOp.readFile(filePath); \t} } Dynamic method invocation Using the dynamic features of Groovy to call a method by its name only, and defining static methods\npackage com.tk.groovy.poc class Dog { \t/** * Bark. */ \tdef bark() { \tprintln \u0026#34;woof!\u0026#34; \t} \t/** * Sit. */ \tdef sit() { \tprintln \u0026#34;sitting\u0026#34; \t} \t/** * Jump. */ \tdef jump() { \tprintln \u0026#34;boing!\u0026#34; \t} \t/** * Do action. * * @param animal the animal * @param action the action * * This is how we define a static method */ \tdef static doAction( animal, action ) { \t//the action name is passed at invocation \tanimal.\u0026#34;$action\u0026#34;() \t} \t/** * Main. * * @param args the args */ \tstatic main(def args) { \tdef dog = new Dog() \t// calling the methods by name \tdoAction( dog, \u0026#34;bark\u0026#34; ) \tdoAction( dog, \u0026#34;jump\u0026#34; ) \t} } Groovy and Java Inter-Operability This example shows how we can use a Groovy script program with Java programs. Two different ways to integrate a Groovy script with a Java program are shown below:\n Using GroovyClassLoader: GroovyClassLoader can load scripts that are stored outside the local file system, and it lets you make your integration environment safe and sandboxed, permitting the scripts to perform only the operations you wish to allow. Directly Instantiating the Groovy Class – We can directly create an object of the Groovy script class and then perform the operations as needed. Groovy Class:\npackage com.tk.groovy.poc.groovy import com.tk.groovy.poc.java.JavaLearner /** * The Class GroovyLearner. 
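* Demonstrates Groovy/Java inter-operability: this Groovy class calls the Java class JavaLearner directly. 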
*/ class GroovyLearner { \tdef printGroovy() { \tprintln(\u0026#34;I am a groovy Learner\u0026#34;) \t} \tstatic void main(def args) { \tJavaLearner javaLearner = new JavaLearner(); \tjavaLearner.printJava(); \t} } Java Class:\npackage com.tk.groovy.poc.java; import groovy.lang.GroovyClassLoader; import java.io.File; import java.io.IOException; import java.lang.reflect.InvocationTargetException; import org.codehaus.groovy.control.CompilationFailedException; import com.tk.groovy.poc.groovy.GroovyLearner; /** * The Class JavaLearner. */ public class JavaLearner { \t/** * Prints the java. */ \tpublic void printJava() { \tSystem.out.println(\u0026#34;I am a java learner\u0026#34;); \t} \t/** * The main method. * * @param args * the arguments */ \tpublic static void main(String[] args) { \t// using the GroovyClassLoader class \tString relativeGroovyClassPath = \u0026#34;src\\\\com\\\\tk\\\\groovy\\\\poc\\\\groovy\\\\GroovyLearner.groovy\u0026#34;; \ttry { \tClass groovyClass = new GroovyClassLoader().parseClass(new File(relativeGroovyClassPath)); \tObject groovyClassInstance = groovyClass.newInstance(); \tgroovyClass.getDeclaredMethod(\u0026#34;printGroovy\u0026#34;, new Class[] {}).invoke(groovyClassInstance, new Object[] {}); \t} catch (CompilationFailedException | IOException | InstantiationException | IllegalAccessException \t| IllegalArgumentException | InvocationTargetException | NoSuchMethodException | SecurityException e) { \te.printStackTrace(); \t} \t// directly calling the groovy class in java \t// this works because both groovy and java are compiled to \t// .class files \tGroovyLearner groovyLearner = new GroovyLearner(); \tgroovyLearner.printGroovy(); \t} }  Building Groovy Applications The section below demonstrates the use of Groovy scripts with Ant and Maven.\nIntegration with Ant: The example below shows you how easy it is to enhance the build process by using Groovy rather than XML as your build configuration format.\nCentral to the niftiness of Ant within Groovy is the notion of builders. Essentially, builders allow you to easily represent nested tree-like data structures, such as XML documents, in Groovy. And that is the hook: with a builder, specifically an AntBuilder, you can effortlessly construct Ant XML build files and execute the resulting behavior without having to deal with the XML itself. And that\u0026rsquo;s not the only advantage of working with Ant in Groovy. Unlike XML, Groovy is a highly expressive development environment wherein you can easily code looping constructs, conditionals, and even explore the power of reuse, rather than the cut-and-paste drill you may have previously associated with creating new build.xml files. And you still get to do it all within the Java platform!\nThe beauty of builders, and especially Groovy’s AntBuilder, is that their syntax follows a logical progression closely mirroring that of the XML document they represent. Methods attached to an instance of AntBuilder match the corresponding Ant task; likewise, those methods can take parameters (in the form of a map) that correspond to the task\u0026rsquo;s attributes. What\u0026rsquo;s more, nested tags such as includes and filesets are then defined as closures.\npackage com.tk.groovy.project /** * The Class Build. 
*/ class Build { \tdef base_dir = \u0026#34;.\u0026#34; \tdef src_dir = base_dir + \u0026#34;/src\u0026#34; \tdef lib_dir = base_dir + \u0026#34;/lib\u0026#34; \tdef build_dir = base_dir + \u0026#34;/classes\u0026#34; \tdef dist_dir = base_dir + \u0026#34;/dist\u0026#34; \tdef file_name = \u0026#34;Fibonacci\u0026#34; \tdef ant = new AntBuilder(); \tvoid clean() { \tant.delete(dir: \u0026#34;${build_dir}\u0026#34;) \tant.delete(dir: \u0026#34;${dist_dir}\u0026#34;) \t} \t/** * Builds the project. */ \tvoid build() { \tant.sequential { \ttaskdef name: \u0026#34;groovyc\u0026#34;, classname: \u0026#34;org.codehaus.groovy.ant.Groovyc\u0026#34; \tgroovyc srcdir: \u0026#34;src\u0026#34;, destdir: \u0026#34;${build_dir}\u0026#34;, { \tclasspath { \tfileset dir: \u0026#34;${lib_dir}\u0026#34;, { include name: \u0026#34;*.jar\u0026#34; } \tpathelement path: \u0026#34;${build_dir}\u0026#34; \t} \tjavac source: \u0026#34;1.7\u0026#34;, target: \u0026#34;1.7\u0026#34;, debug: \u0026#34;on\u0026#34; \t} \tjar destfile: \u0026#34;${dist_dir}/${file_name}.jar\u0026#34;, basedir: \u0026#34;${build_dir}\u0026#34;,filesetmanifest: \u0026#34;mergewithoutmain\u0026#34;,{ \tmanifest{ \tattribute( name: \u0026#39;Main-Class\u0026#39;, value: \u0026#34;com.tk.groovy.Fibonacci\u0026#34; ) \t} \t} \tjava classname:\u0026#34;${file_name}\u0026#34;, fork:\u0026#39;true\u0026#39;,{ \tclasspath { \tfileset dir: \u0026#34;${lib_dir}\u0026#34;, { include name: \u0026#34;*.jar\u0026#34; } \tpathelement path: \u0026#34;${build_dir}\u0026#34; \t} \t} \t} \t} \t/** * Run. * * @param args the args */ \tvoid run(args) { \tif ( args.size() \u0026gt; 0 ) { \tinvokeMethod(args[0], null ) \t} \telse { \tbuild() \t} \t} \t/** * Main. * * @param args the args */ \tstatic main(args) { \tdef b = new Build() \tb.run(args) \t} } Integration with Maven: An approach to Groovy and Maven integration is to use the gmaven plugin for Maven. 
This example is made using the gmaven plugin, as configured in the pom.xml below.\nThese examples could be useful in any project where we want to do some kind of sanity test over our EAR file and perform handy things at build time.\nFor Maven to compile and understand the Groovy class files, we have to follow the same folder layout as we use for a Java project, with only a few differences.\n project/ | +- | +- pom.xml | | +- src/main/resources/ | | +- src/main/groovy/ | | | +- com/tk/groovy/poc | | | +- XXX.groovy | +- src/test/groovy/ | +- com/tk/groovy/poc | +- XXXTest.groovy pom.xml\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;project xmlns=\u0026#34;http://maven.apache.org/POM/4.0.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\u0026#34;\u0026gt; \u0026lt;modelVersion\u0026gt;4.0.0\u0026lt;/modelVersion\u0026gt; \u0026lt;groupId\u0026gt;GroovyMavenPOC\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;GroovyMavenPOC\u0026lt;/artifactId\u0026gt; \u0026lt;name\u0026gt;Example Project\u0026lt;/name\u0026gt; \u0026lt;version\u0026gt;1.0.0\u0026lt;/version\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.codehaus.groovy.maven.runtime\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;gmaven-runtime-1.6\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;junit\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;junit\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;3.8.1\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;build\u0026gt; \u0026lt;plugins\u0026gt; \u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;org.codehaus.groovy.maven\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;gmaven-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0\u0026lt;/version\u0026gt; \u0026lt;executions\u0026gt; \u0026lt;execution\u0026gt; \u0026lt;goals\u0026gt; \u0026lt;goal\u0026gt;generateStubs\u0026lt;/goal\u0026gt; \u0026lt;goal\u0026gt;compile\u0026lt;/goal\u0026gt; \u0026lt;goal\u0026gt;generateTestStubs\u0026lt;/goal\u0026gt; \u0026lt;goal\u0026gt;testCompile\u0026lt;/goal\u0026gt; \u0026lt;/goals\u0026gt; \u0026lt;/execution\u0026gt; \u0026lt;/executions\u0026gt; \u0026lt;/plugin\u0026gt; \u0026lt;/plugins\u0026gt; \u0026lt;/build\u0026gt; \u0026lt;/project\u0026gt; Groovy Files\npackage com.tk.groovy.poc import groovy.util.GroovyTestCase class HelloWorldTest extends GroovyTestCase { void testShow() { new HelloWorld().show() } }\tpackage com.tk.groovy.poc import groovy.util.GroovyTestCase /** * The Class HelperTest. 
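* A GroovyTestCase-based test compiled by the gmaven testCompile goal and executed during the Maven test phase. 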
class HelperTest extends GroovyTestCase {
	void testHelp() {
		new Helper().help(new HelloWorld())
	}
}

Performance Analysis
This POC shows how significant the performance improvements in Groovy 2.0 are, and how Groovy 2.0 now compares to Java. If the gap has become minor, or at least acceptable, it is certainly time to take a serious look at Groovy.
The measurement calculates Fibonacci numbers with the same algorithm in a Groovy class and a Java class.
All measurements were done on an Intel Core i7-3720QM CPU at 2.60 GHz using JDK7u6 on Windows 7 Service Pack 1, in Eclipse Juno with the Groovy plugin and Groovy compiler version 1.8.6.xx-20121219-0800-e42-RELEASE.

Fibonacci Algorithm - Java Program

package com.nagarro.groovy.poc;

public class JavaFibonacci {

	public static int fibStaticTernary(int n) {
		return (n >= 2) ? fibStaticTernary(n - 1) + fibStaticTernary(n - 2) : 1;
	}

	public static int fibStaticIf(int n) {
		if (n >= 2)
			return fibStaticIf(n - 1) + fibStaticIf(n - 2);
		else
			return 1;
	}

	// the instance variants recurse on the instance methods, not the static ones
	int fibTernary(int n) {
		return (n >= 2) ? fibTernary(n - 1) + fibTernary(n - 2) : 1;
	}

	int fibIf(int n) {
		if (n >= 2)
			return fibIf(n - 1) + fibIf(n - 2);
		else
			return 1;
	}

	public static void main(String[] args) {
		long start = System.currentTimeMillis();
		JavaFibonacci.fibStaticTernary(40);
		System.out.println("Java(static ternary): " + (System.currentTimeMillis() - start) + "ms");
		start = System.currentTimeMillis();
		JavaFibonacci.fibStaticIf(40);
		System.out.println("Java(static if): " + (System.currentTimeMillis() - start) + "ms");
		start = System.currentTimeMillis();
		new JavaFibonacci().fibTernary(40);
		System.out.println("Java(instance ternary): " + (System.currentTimeMillis() - start) + "ms");
		start = System.currentTimeMillis();
		new JavaFibonacci().fibIf(40);
		System.out.println("Java(instance if): " + (System.currentTimeMillis() - start) + "ms");
	}
}

Groovy Program
package com.tk.groovy.poc

class GroovyFibonacci {

	static int fibStaticTernary(int n) {
		n >= 2 ? fibStaticTernary(n - 1) + fibStaticTernary(n - 2) : 1
	}

	static int fibStaticIf(int n) {
		if (n >= 2) fibStaticIf(n - 1) + fibStaticIf(n - 2) else 1
	}

	int fibTernary(int n) {
		n >= 2 ? fibTernary(n - 1) + fibTernary(n - 2) : 1
	}

	int fibIf(int n) {
		if (n >= 2) fibIf(n - 1) + fibIf(n - 2) else 1
	}

	// benchmark the Groovy implementations (not the Java ones)
	public static void main(String[] args) {
		def start = System.currentTimeMillis()
		GroovyFibonacci.fibStaticTernary(40)
		println("Groovy(static ternary): ${System.currentTimeMillis() - start}ms")
		start = System.currentTimeMillis()
		GroovyFibonacci.fibStaticIf(40)
		println("Groovy(static if): ${System.currentTimeMillis() - start}ms")
		def fib = new GroovyFibonacci()
		start = System.currentTimeMillis()
		fib.fibTernary(40)
		println("Groovy(instance ternary): ${System.currentTimeMillis() - start}ms")
		start = System.currentTimeMillis()
		fib.fibIf(40)
		println("Groovy(instance if): ${System.currentTimeMillis() - start}ms")
	}
}

Performance metrics of the two programs:

                    Groovy 2.0    Java
static ternary      1015ms        746ms
static if           905ms         739ms
instance ternary    708ms         739ms
instance if         924ms         739ms

IO Operations
Another performance test covers I/O operations in Groovy and Java; see the respective programs and the metrics table below. All measurements used the configuration above; the input was a test.txt file with 5,492,493 lines (243 MB).

Java File Read

package com.tk.groovy.poc;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;

public class ReadFileJava {

	private static final String FILE_NAME = "D:\\test.txt";

	public static void readFile() {
		BufferedReader br = null;
		try {
			File file = new File(FILE_NAME);
			br = new BufferedReader(new FileReader(file));
			int count = 0;
			while (br.readLine() != null) {
				count++;
			}
		} catch (FileNotFoundException e) {
			e.printStackTrace();
		} catch (IOException e) {
			e.printStackTrace();
		}
	}

	public static void main(String[] args) {
		long start = System.currentTimeMillis();
		readFile();
		long end = System.currentTimeMillis();
		System.out.println("Time Consumed------>>>>" + (end - start) + "ms");
	}
}

Groovy File Read

package com.tk.groovy.poc

class GroovyReadFile {
	static main(def a) {
		def filePath = "D:\\test.txt"
		long start = System.currentTimeMillis()
		int count = 0
		new File(filePath).eachLine {
			count++
		}
		println count
		long end = System.currentTimeMillis()
		println("Time Consumed------>>>>" + (end - start) + "ms")
	}
}

Performance metrics:

              Groovy 2.0    Java
Iteration 1   2848ms        1667ms
Iteration 2   2649ms        1578ms
Iteration 3   2692ms        1811ms
Iteration 4   2686ms        1799ms

Conclusion
I would like to see some benefits in using Groovy where appropriate, perhaps even over Java. But many of the "features" of dynamic languages look to me like disadvantages: the useful compile-time checks and IDE support (auto-completion etc.) are missing or not very good. On the other hand, code in Groovy (and other similar languages) is often much shorter and more concise than the same code in a language like Java.
The times when Groovy was some 10-20 times slower than Java are definitely over, whether @CompileStatic is used or not.
To me this means Groovy is ready for applications where performance has to be broadly comparable to Java. I see Groovy's place where you need to script a fast, small solution to a specific problem, as its use with the Maven integration above shows.

References

http://groovy.codehaus.org/Beginners+Tutorial
http://www.vogella.com/articles/Groovy/article.html
http://groovy.codehaus.org/Getting+Started+Guide
http://groovy.codehaus.org/Dynamic+Groovy
http://groovy.codehaus.org/Closures
http://groovy.codehaus.org/Closures+-+Formal+Definition

This article is also supported by my friend Dhruv Bansal
","id":192,"tag":"groovy ; java ; introduction ; language ; comparison ; technology","title":"Groovy - Getting Started","type":"post","url":"/post/groovy-getting-started/"},{"content":"In the previous article we learned the basics of Scala; in this article we will learn how to set up build scripts for Scala and build applications with it. We will also look at a few web development frameworks for Scala and compare them with similar frameworks in Java.

Building Scala Applications
The programs below demonstrate the use of Scala with Maven, Ant, and the logging library Logback.

Integration with Ant
The example below shows how a Scala project can be built with Ant:

- the scalac build task
- use of ScalaTestAntTask to run Scala test cases

Ant build.xml

<?xml version="1.0"?>
<project name="scalaantproject" default="run">
	<!-- root directory of this project -->
	<property name="project.dir" value="." />
	<!-- main class to run -->
	<property name="main.class" value="<full qualified main class name>" />
	<!-- location of scalatest.jar for unit testing -->
	<property name="scalatest.jar" value="${basedir}/lib/scalatest_2.9.0-1.9.1.jar" />
	<target name="init">
		<property environment="env" />
		<!-- derived path names -->
		<property name="build.dir" value="${project.dir}/build" />
		<property name="source.dir" value="${project.dir}/src" />
		<property name="test.dir" value="${project.dir}/test" />
		<!-- scala libraries for classpath definitions -->
		<property name="scala-library.jar" value="${scala.home}/lib/scala-library.jar" />
		<property name="scala-compiler.jar" value="${scala.home}/lib/scala-compiler.jar" />
		<!-- classpath for the compiler task definition -->
		<path id="scala.classpath">
			<pathelement location="${scala-compiler.jar}" />
			<pathelement location="${scala-library.jar}" />
		</path>
		<!-- classpath for project build -->
		<path id="build.classpath">
			<pathelement location="${scala-library.jar}" />
			<fileset dir="${project.dir}/lib">
				<include name="*.jar" />
			</fileset>
			<pathelement location="${build.dir}/classes" />
		</path>
		<!-- classpath for unit test build -->
		<path id="test.classpath">
			<pathelement location="${scala-library.jar}" />
			<pathelement location="${scalatest.jar}" />
			<pathelement location="${build.dir}/classes" />
		</path>
		<!-- definition for the "scalac" and "scaladoc" ant tasks -->
		<taskdef resource="scala/tools/ant/antlib.xml">
			<classpath refid="scala.classpath" />
		</taskdef>
		<!-- definition for the "scalatest" ant task -->
		<taskdef name="scalatest" classname="org.scalatest.tools.ScalaTestAntTask">
			<classpath refid="test.classpath" />
		</taskdef>
	</target>
	<!-- delete compiled files -->
	<target name="clean" depends="init" description="clean">
		<delete dir="${build.dir}" />
		<delete dir="${project.dir}/doc" />
		<delete file="${project.dir}/lib/scala-library.jar" />
	</target>
	<!-- compile project -->
	<target name="build" depends="init" description="build">
		<buildnumber />
		<tstamp />
		<mkdir dir="${build.dir}/classes" />
		<scalac srcdir="${source.dir}" destdir="${build.dir}/classes" classpathref="build.classpath" force="never" deprecation="on">
			<include name="**/*.scala" />
		</scalac>
	</target>
	<!-- run program -->
	<target name="run" depends="build" description="run">
		<java classname="${main.class}" classpathref="build.classpath" />
	</target>
	<!-- build unit tests -->
	<target name="buildtest" depends="build">
		<mkdir dir="${build.dir}/test" />
		<scalac srcdir="${test.dir}" destdir="${build.dir}/test" classpathref="test.classpath" force="never" deprecation="on">
			<include name="**/*.scala" />
		</scalac>
	</target>
	<!-- run unit tests -->
	<target name="test" depends="buildtest" description="test">
		<scalatest runpath="${build.dir}/test">
			<reporter type="stdout" config="FWD" />
			<reporter type="file" filename="${build.dir}/TestOutput.out" />
			<membersonly package="suite" />
		</scalatest>
	</target>
	<!-- create a startable *.jar with proper classpath dependency definition -->
	<target name="jar" depends="build" description="jar">
		<mkdir dir="${build.dir}/jar" />
		<copy file="${scala-library.jar}" todir="${project.dir}/lib" />
		<path id="jar.class.path">
			<fileset dir="${project.dir}">
				<include name="lib/**/*.jar" />
			</fileset>
		</path>
		<pathconvert property="jar.classpath" pathsep=" " dirsep="/">
			<path refid="jar.class.path">
			</path>
			<map from="${basedir}${file.separator}lib" to="lib" />
		</pathconvert>
		<jar destfile="${build.dir}/jar/${ant.project.name}.jar" basedir="${build.dir}/classes" duplicate="preserve">
			<manifest>
				<attribute name="Main-Class" value="${main.class}" />
				<attribute name="Class-Path" value="${jar.classpath}" />
				<section name="Program">
					<attribute name="Title" value="${ant.project.name}" />
					<attribute name="Build" value="${build.number}" />
					<attribute name="Date" value="${TODAY}" />
				</section>
			</manifest>
		</jar>
	</target>
	<!-- create API documentation in doc folder -->
	<target name="scaladoc" depends="build" description="scaladoc">
		<mkdir dir="${project.dir}/doc" />
		<scaladoc srcdir="${source.dir}" destdir="${project.dir}/doc" classpathref="build.classpath" doctitle="${ant.project.name}" windowtitle="${ant.project.name}" />
	</target>
	<!-- create a zip file with binaries for distribution -->
	<target name="package" depends="jar, scaladoc" description="package">
		<zip destfile="${build.dir}/${ant.project.name}.zip">
			<zipfileset dir="${build.dir}/jar" includes="${ant.project.name}.jar" />
			<zipfileset dir="${project.dir}" includes="lib/*" />
			<zipfileset dir="${project.dir}" includes="doc/*" />
			<zipfileset dir="${project.dir}/txt" includes="*" />
		</zip>
	</target>
</project>
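The test target above runs ScalaTest suites compiled from the test/ directory, but the article does not show one. As a minimal, hypothetical sketch of such a suite (class name and assertions are illustrative only, using the FunSuite style available in the scalatest_2.9.0-1.9.1 jar referenced above):

import org.scalatest.FunSuite

// A tiny ScalaTest suite the "test" target above could pick up.
class ArithmeticSuite extends FunSuite {

	test("addition is commutative") {
		assert(2 + 3 === 3 + 2)
	}

	test("division by zero throws") {
		intercept[ArithmeticException] {
			1 / 0
		}
	}
}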
Integration with Maven
A Scala Maven project can simply be created using the 'org.scala-tools.archetypes' archetypes. The example below shows a pom.xml that builds a basic Scala Maven project. For Maven to compile and understand the Scala class files, we should follow this folder structure:

project/
	pom.xml - Defines the project
	src/
		main/
			java/ - Contains all java code that will go in your final artifact.
			scala/ - Contains all scala code that will go in your final artifact.
			resources/ - Contains all static files that should be available on the classpath in the final artifact.
			webapp/ - Contains all content for a web application (jsps, css, images, etc.)
		test/
			java/ - Contains all java code used for testing.
			scala/ - Contains all scala code used for testing.
			resources/ - Contains all static content that should be available on the classpath during testing.

pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<groupId>com.tk.scala</groupId>
	<artifactId>scalamavenpoc</artifactId>
	<version>1.0</version>
	<name>${project.artifactId}</name>
	<description>scala maven poc application</description>
	<inceptionYear>2010</inceptionYear>
	<properties>
		<encoding>UTF-8</encoding>
		<scala.version>2.9.1</scala.version>
	</properties>
	<dependencies>
		<dependency>
			<groupId>org.scala-lang</groupId>
			<artifactId>scala-library</artifactId>
			<version>${scala.version}</version>
		</dependency>
		<!-- Test -->
		<dependency>
			<groupId>junit</groupId>
			<artifactId>junit</artifactId>
			<version>4.8.1</version>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>org.scala-tools.testing</groupId>
			<artifactId>specs_2.9.1</artifactId>
			<version>1.6.9</version>
		</dependency>
		<dependency>
			<groupId>org.scalatest</groupId>
			<artifactId>scalatest</artifactId>
			<version>1.2</version>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>mysql</groupId>
			<artifactId>mysql-connector-java</artifactId>
			<version>5.1.12</version>
		</dependency>
		<dependency>
			<groupId>ch.qos.logback</groupId>
			<artifactId>logback-core</artifactId>
			<version>0.9.30</version>
		</dependency>
		<dependency>
			<groupId>ch.qos.logback</groupId>
			<artifactId>logback-classic</artifactId>
			<version>0.9.30</version>
		</dependency>
		<dependency>
			<groupId>org.slf4j</groupId>
			<artifactId>slf4j-api</artifactId>
			<version>1.6.2</version>
		</dependency>
	</dependencies>
	<build>
		<sourceDirectory>src/main/scala</sourceDirectory>
		<testSourceDirectory>src/test/scala</testSourceDirectory>
		<plugins>
			<plugin>
				<groupId>org.scala-tools</groupId>
				<artifactId>maven-scala-plugin</artifactId>
				<version>2.15.0</version>
				<executions>
					<execution>
						<goals>
							<goal>compile</goal>
							<goal>testCompile</goal>
						</goals>
						<configuration>
							<args>
								<arg>-make:transitive</arg>
								<arg>-dependencyfile</arg>
								<arg>${project.build.directory}/.scala_dependencies</arg>
							</args>
						</configuration>
					</execution>
				</executions>
			</plugin>
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-surefire-plugin</artifactId>
				<version>2.6</version>
				<configuration>
					<useFile>false</useFile>
					<disableXmlReport>true</disableXmlReport>
					<includes>
						<include>**/*Test.*</include>
						<include>**/*Suite.*</include>
					</includes>
				</configuration>
			</plugin>
		</plugins>
	</build>
</project>
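The pom above wires in Logback through the SLF4J API, but the article never shows it in use. As a minimal, hypothetical sketch (the object name and message are illustrative, not from the original project):

import org.slf4j.{Logger, LoggerFactory}

// Minimal use of the slf4j-api / logback-classic dependencies from the pom.
object LoggingDemo {
	private val logger: Logger = LoggerFactory.getLogger(getClass)

	def main(args: Array[String]): Unit = {
		logger.info("Scala project built with Maven, logging via Logback")
	}
}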
Building Scala Web Application
This POC shows a simple web servlet using Scala. Creating a web project that compiles your Scala code requires a little tweaking:

1. First, create a simple Dynamic Web Project.
2. Then, open the .project file.
3. Replace org.eclipse.jdt.core.javabuilder with org.scala-ide.sdt.core.scalabuilder.
4. Now add the Scala nature to your project: <nature>org.scala-ide.sdt.core.scalanature</nature>.

<?xml version="1.0" encoding="UTF-8"?>
<projectDescription>
	<name>ScalaWebApplication</name>
	<comment></comment>
	<projects>
	</projects>
	<buildSpec>
		<buildCommand>
			<name>org.eclipse.wst.jsdt.core.javascriptValidator</name>
			<arguments>
			</arguments>
		</buildCommand>
		<buildCommand>
			<name>org.scala-ide.sdt.core.scalabuilder</name>
			<arguments>
			</arguments>
		</buildCommand>
		<buildCommand>
			<name>org.eclipse.wst.common.project.facet.core.builder</name>
			<arguments>
			</arguments>
		</buildCommand>
		<buildCommand>
			<name>org.eclipse.wst.validation.validationbuilder</name>
			<arguments>
			</arguments>
		</buildCommand>
	</buildSpec>
	<natures>
		<nature>org.scala-ide.sdt.core.scalanature</nature>
		<nature>org.eclipse.jem.workbench.JavaEMFNature</nature>
		<nature>org.eclipse.wst.common.modulecore.ModuleCoreNature</nature>
		<nature>org.eclipse.wst.common.project.facet.core.nature</nature>
		<nature>org.eclipse.jdt.core.javanature</nature>
		<nature>org.eclipse.wst.jsdt.core.jsNature</nature>
	</natures>
</projectDescription>

If you made the right change, you should see an S instead of a J on your project's icon. The last thing to do is add the Scala library to your path. Depending on your application server configuration, either add the Scala library to your build path (Build Path -> Configure Build Path -> Add Library -> Scala Library) or manually add the needed library to your WEB-INF/lib folder.

Web Servlet Program
package com.tk.servlet.scala

import javax.servlet.http.{ HttpServlet, HttpServletRequest, HttpServletResponse }

class ScalaServlet extends HttpServlet {

	// see javax.servlet.http.HttpServlet#doGet
	override def doGet(req: HttpServletRequest, resp: HttpServletResponse) = {
		resp.getWriter().print("<HTML>" +
			"<HEAD><TITLE>Hello, Scala!</TITLE></HEAD>" +
			"<BODY>Hello, Scala! This is a servlet.</BODY>" +
			"</HTML>")
	}

	// see javax.servlet.http.HttpServlet#doPost
	override def doPost(req: HttpServletRequest, resp: HttpServletResponse) = {
	}
}

Web.xml

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd" id="WebApp_ID" version="3.0">
	<display-name>ScalaWebApplication</display-name>
	<welcome-file-list>
		<welcome-file>index.html</welcome-file>
		<welcome-file>index.htm</welcome-file>
		<welcome-file>index.jsp</welcome-file>
		<welcome-file>default.html</welcome-file>
		<welcome-file>default.htm</welcome-file>
		<welcome-file>default.jsp</welcome-file>
	</welcome-file-list>
	<servlet>
		<servlet-name>helloWorld</servlet-name>
		<servlet-class>com.tk.servlet.scala.ScalaServlet</servlet-class>
	</servlet>
	<servlet-mapping>
		<servlet-name>helloWorld</servlet-name>
		<url-pattern>/sayHello</url-pattern>
	</servlet-mapping>
</web-app>

Scala web framework - Lift
Lift is a free web application framework designed for the Scala programming language. As Scala code executes within the Java virtual machine (JVM), any existing Java library and web container can be used to run Lift applications. Lift web applications are thus packaged as WAR files and deployed on any servlet 2.4 engine (for example, Tomcat 5.5.xx, Jetty 6.0, etc.). Lift programmers may use the standard Scala/Java development toolchain, including IDEs such as Eclipse, NetBeans and IDEA. Dynamic web content is authored via templates using standard HTML5 or XHTML editors. Lift applications also benefit from native support for advanced web development techniques such as Comet and Ajax. Lift does not prescribe the model–view–controller (MVC) architectural pattern; rather, Lift is chiefly modeled on the so-called "View First" (designer friendly) approach to web page development, inspired by the Wicket framework.

Lift Hello world application
Follow the steps below to create a simple Lift web project:

1. Create a Maven project with archetype 'lift-archetype-blank_{version}'.
2. Enter the groupId, artifactId, version and package just like a normal Maven project.
3. This creates a project structure with Scala, Lift and Logback configured in it.

Project Structure Details

src/main - your application source
src/main/webapp - contains files to be copied directly into your web application, such as html templates and the web.xml
src/test - your test code
target - where your compiled application will be created and packaged
Boot.scala - defines the global setup for the application.
The main things it does are:

- defines the packages for Lift code
- creates the sitemap, a menu and security access for the site

Comet - Comet describes a model where the client sends an asynchronous request to the server. The request hangs until the server has something interesting to respond; as soon as the server responds, another request is made. The idea is to give the impression that the server is notifying the client of changes on the server. For more details refer to the Wikipedia article.

Snippet - in general, a programming term for a small region of re-usable source code, machine code, or text. Lift uses snippets to transform sections of the HTML page from static to dynamic. The key word is transform. Lift's snippets are Scala functions: NodeSeq => NodeSeq. A NodeSeq is a collection of XML nodes. A snippet can only transform an input NodeSeq to an output NodeSeq.

Default generated code, explained - here is the servlet filter that runs a Lift app:

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd">
<web-app>
	<filter>
		<filter-name>LiftFilter</filter-name>
		<display-name>Lift Filter</display-name>
		<description>The Filter that intercepts lift calls</description>
		<filter-class>net.liftweb.http.LiftFilter</filter-class>
	</filter>
	<filter-mapping>
		<filter-name>LiftFilter</filter-name>
		<url-pattern>/*</url-pattern>
	</filter-mapping>
</web-app>

Default Hello world snippet - HelloWorld.scala

package com.tk.lift.poc {

	/** declaring package for snippet classes */
	package snippet {

		import _root_.scala.xml.NodeSeq
		import _root_.net.liftweb.util.Helpers
		import Helpers._

		/** Defining snippet class HelloWorld */
		class HelloWorld {

			/** defining snippet with name 'howdy' */
			def howdy(in: NodeSeq): NodeSeq =
				/** this snippet replaces '<b:time/>' in the html file with the current date string */
				Helpers.bind("b", in, "time" -> (new _root_.java.util.Date).toString)
		}
	}
}

Default html page - index.html

<lift:surround with="default" at="content">
	<h2>Welcome to your project!</h2>
	<p>
		<lift:helloWorld.howdy>
			<span>Welcome to liftblank at <b:time/></span>
		</lift:helloWorld.howdy>
	</p>
</lift:surround>

Default Boot.scala
package bootstrap.liftweb

import _root_.net.liftweb.common._
import _root_.net.liftweb.util._
import _root_.net.liftweb.http._
import _root_.net.liftweb.sitemap._
import _root_.net.liftweb.sitemap.Loc._
import Helpers._

/**
 * A class that's instantiated early and run.
 * It allows the application to modify lift's environment.
 */
class Boot {
	def boot {
		// where to search for snippets (matches the snippet package above)
		LiftRules.addToPackages("com.tk.lift.poc")
		// this piece of code builds the SiteMap; on the index.html page
		// we will see a listing with a 'home' attribute
		val entries = Menu(Loc("Home", List("index"), "Home")) :: Nil
		LiftRules.setSiteMap(SiteMap(entries: _*))
	}
}

Running the default web application
To run this webapp:

- build the project with the Maven clean install goal
- this generates a war file in the target folder
- deploy the war file in any server such as Tomcat

Scala/Lift vs Java/Spring
Let's assume we're equally comfortable in Scala and Java, and ignore the (huge) language differences except as they pertain to Spring or Lift (probably a big assumption for Java refugees). Spring and Lift are almost diametrically opposed in terms of maturity and goals:

- Spring is about five years older than Lift
- Lift is monolithic and targets only the web; Spring is modular and targets both web and "regular" apps
- Spring supports a plethora of Java EE features; Lift ignores that stuff

Scala MVC framework - Play
Play is an open source web application framework, written in Scala and Java, which follows the model–view–controller (MVC) architectural pattern. It aims to optimize developer productivity by using convention over configuration, hot code reloading and display of errors in the browser. Support for the Scala programming language has been available since version 1.1 of the framework. In version 2.0, the framework core was rewritten in Scala; build and deployment were migrated to Simple Build Tool, and templates use Scala instead of Groovy.

Play - Hello World
Install the Play framework, then create a new hello world application: open a command prompt and type

play new helloworld --with scala

It will prompt you for the application's full name; type Hello world. The play new command creates a new directory helloworld/ and populates it with a series of files and directories, the most important being:

app/ contains the application's core. It contains all the application *.scala files. You see that the default is a single controllers.scala file. You can of course use more files and organize them in several folders for more complex applications. Also, the app/views directory is where template files live. A template file is generally a simple text, xml or html file with dynamic placeholders.
conf/ contains all the application's configuration files, especially the main application.conf file, the routes definition files and the messages files used for internationalization.
lib/ contains all optional Scala libraries packaged as standard .jar files.
public/ contains all the publicly available resources, which includes JavaScript, stylesheets and images directories.
test/ contains all the application tests. Tests are written either as ScalaTest tests or as Selenium tests.

As a Scala developer, you may wonder where all the .class files go. The answer is nowhere: Play doesn't use any class files but reads the Scala source files directly.

The main entry point of your application is the conf/routes file. This file defines all of the application's accessible URLs.
If you open the generated routes file you will see this first 'route':

GET	/	Application.index

That simply tells Play that when the web server receives a GET request for the / path, it must call the Application.index Scala method. In this case, Application.index is a shortcut for controllers.Application.index, because the controller's package is implicit.

A Play application has several entry points, one for each URL. We call these methods action methods. Action methods are defined in objects that we call controllers. Open the helloworld/app/controllers.scala source file:

package controllers

import play._
import play.mvc._

object Application extends Controller {

	import views.Application._

	def index = {
		html.index("Your new Scala application is ready!")
	}
}

The default index action is simple: it invokes the views.Application.html.index template and returns the generated HTML. Using a template is the most common way (but not the only one) to generate the HTTP response. Templates are special Scala source files that live in the /app/views directory.

Open the helloworld/app/views/Application/index.scala.html file:

@(title:String)

@main(title) {
	@views.defaults.html.welcome(title)
}

The first line defines the template parameters: a template is a function that generates a play.template.Html return value. In this case, the template has a single title parameter of type String, so the type of this template is (String) => Html. Then this template calls another template, called main, that takes two parameters: a String and an Html block.

Open the helloworld/app/views/main.scala.html file that defines the main template:

@(title:String = "")(body: => Html)
<!DOCTYPE html>
<html>
	<head>
		<title>@title</title>
		<link rel="stylesheet" href="@asset("public/stylesheets/main.css")">
		<link rel="shortcut icon" href="@asset("public/images/favicon.png")">
		<script src="@asset("public/javascripts/jquery-1.5.2.min.js")"></script>
	</head>
	<body>
		@body
	</body>
</html>

Lift vs. Play

Architecture: Play is based on the MVC architecture; Lift basically follows the View-First design approach. View First simply means the view is what drives the creation or discovery of the view model: the view typically binds to the view model as a resource, using a locator pattern.
Productivity: Play is better than Lift because of its learning curve.
Security: Lift is better than any other framework.
Templates: Lift is better because you can really split up front-end and back-end work.
Refactoring / maintainability: Lift is better: an MVC isn't as good as a View-First design for that.
Scalability: Play is slightly better and can be hosted on most cloud PaaS. Lift scales very well, but you will need sticky sessions.
Documentation: Play's documentation is better.
Activity: Play is more active because it's more attractive for beginners.
Stateless/Stateful: Lift is very clear about where you need state and where you don't. Lift can support stateless, partially stateful, and completely stateful apps.
On a page-by-page and request-by-request basis, a Lift app can be stateful or stateless.
Build environment: Lift uses Maven, sbt, Buildr, and even Ant. Lift is agnostic about the build environment and about the deploy environment (JEE container, Netty, whatever). This is important because it makes Lift easier to integrate with the rest of your environment.
Programming language: Play supports Java as well as Scala, but Lift only supports Scala.
Template language: Play uses Groovy, Japid, or the Scala template engine as its template language, while Lift uses HTML5 as its template language.

Conclusion
Scala is designed to express common programming patterns in a concise, elegant, and type-safe way. Scala is often compared with Groovy and Clojure, two other programming languages also built on top of the JVM. Among the main differences are:

Compared with Clojure: Scala is statically typed, while both Groovy and Clojure are dynamically typed. This adds complexity in the type system but allows many errors to be caught at compile time that would otherwise only manifest at runtime, and tends to result in significantly faster execution.

Compared with Groovy: Scala has more changes in its fundamental design. The primary purpose of Groovy was to make Java programming easier and less verbose, while Scala (in addition to having the same goal) was designed from the ground up to allow for a functional programming style in addition to object-oriented programming, and introduces a number of more "cutting-edge" features from functional programming languages like Haskell that are not often seen in mainstream languages.

References:

Official Website http://www.scala-lang.org/
http://www.scala-lang.org/node/25
http://www.codecommit.com/blog/scala/roundup-scala-for-java-refugees
http://www.scala-blogs.org/2007/12/dynamic-web-applications-with-lift-and.html
http://stackoverflow.com/questions/7468050/how-to-create-a-simple-web-form-in-scala-lift
Snippet - http://simply.liftweb.net/index-3.4.html
Comet - https://www.assembla.com/spaces/liftweb/wiki/Comet_Support
Play - http://www.playframework.com/

This article is also supported by my friend Dhruv Bansal
","id":193,"tag":"scala ; java ; introduction ; language ; play ; lift ; spring ; comparison ; technology","title":"Scala - Getting Started - Part 2","type":"post","url":"/post/scala-getting-started-part2/"},{"content":"This article helps you start with Scala, with a step-by-step guide and a series of examples. It starts with an overview and then covers detailed examples. In later articles, I will write about feature comparisons with other languages. The article is helpful for people coming from a Java background, though that is not a prerequisite.

Scala Overview
Scala is a general purpose programming language designed to express common programming patterns in a concise, elegant, and type-safe way. It smoothly integrates features of object-oriented and functional languages, enabling Java and other programmers to be more productive.

Features
While simplicity and ease of use are the leading principles of Scala, here are a few nice features:

Code sizes are typically reduced by a factor of two to three when compared to an equivalent Java application.

Compared to Java, Scala boosts development productivity, application scalability and overall reliability.

Seamless integration with Java: Existing Java code and programmer skills are fully re-usable.
Scala programs run on the Java VM and are bytecode-compatible with Java, so you can make full use of existing Java libraries or existing application code. You can call Scala from Java and you can call Java from Scala; the integration is seamless.

Scala Compiler Performance: The Scala compiler is mature and proven highly reliable by years of use in production environments. The compiler was written by Martin Odersky, who also wrote the Java reference compiler and co-authored Java generics. The Scala compiler produces byte code that performs every bit as well as comparable Java code.

Scala is object-oriented: Scala is a pure object-oriented language in the sense that every value is an object. Types and behavior of objects are described by classes and traits. Classes are extended by subclassing and a flexible mixin-based composition mechanism as a clean replacement for multiple inheritance. Traits are Scala's replacement for Java's interfaces. Interfaces in Java are highly restricted, able only to contain abstract function declarations. This has led to criticism that providing convenience methods in interfaces is awkward (the same methods must be re-implemented in every implementation), and extending a published interface in a backwards-compatible way is impossible. Traits are similar to mixin classes in that they have nearly all the power of a regular abstract class, lacking only class parameters (Scala's equivalent to Java's constructor parameters), since traits are always mixed in with a class.

Scala is functional: Scala is also a functional language in the sense that every function is a value. Scala provides a lightweight syntax for defining anonymous functions. It supports higher-order functions: functions that take other functions as parameters, or whose result is a function. It allows functions to be nested. It supports currying: methods may define multiple parameter lists; when a method is called with fewer parameter lists, this yields a function taking the missing parameter lists as its arguments. (A small sketch of these features follows this list.)

Scala interoperates with Java and .NET: Scala is designed to interoperate well with the popular Java 2 Runtime Environment (JRE). In particular, the interaction with the mainstream object-oriented Java programming language is as smooth as possible. Scala has the same compilation model (separate compilation, dynamic class loading) as Java and allows access to thousands of existing high-quality libraries. Support for the .NET Framework (CLR) is also available.
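To make the functional features above concrete, here is a minimal sketch (names and values are illustrative, not from the original article) showing an anonymous function, a higher-order function, a nested function, and currying:

object FunctionalFeatures {
	def main(args: Array[String]): Unit = {
		// anonymous function assigned to a value
		val double = (n: Int) => n * 2

		// higher-order function: takes another function as a parameter
		def applyTwice(f: Int => Int, n: Int): Int = f(f(n))
		println(applyTwice(double, 3)) // 12

		// nested function
		def outer(n: Int): Int = {
			def inner(m: Int) = m + 1
			inner(n) * 2
		}
		println(outer(4)) // 10

		// currying: multiple parameter lists; supplying only the first
		// yields a function taking the missing parameter list
		def add(a: Int)(b: Int): Int = a + b
		val addFive: Int => Int = add(5)
		println(addFive(10)) // 15
	}
}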
Working with Scala
Let's start with a few Scala examples. Here is what you need:

- Eclipse 3.7 or Eclipse 4.2
- Java 1.6 or above
- Scala binary 2.10.1 (2.9.2 for the Ant integration example only)

Hello World (Main method)
Let's understand a basic Scala program; things to keep in mind:

- A Scala source file ends with the extension .scala.
- Scala methods are public by default, which means there's no public method modifier (private and protected are both defined).
- Scala doesn't really have statics. It has a special syntax which allows you to easily define and use singleton classes: the object keyword creates a singleton object of a class. For every object in the code, an anonymous class is created, which inherits from whatever classes you declared.
- The rest is all Java :)

package com.tk.scala

/**
 * There is nothing called 'static' in scala; this way we are creating a singleton object.
 */
object MainClass {

	/**
	 * args is a method parameter of type Array[String].
	 * That is to say, args is a string array.
	 *
	 * ':Unit' defines the return type of the main function.
	 * Unit is like void, but it's more than that.
	 */
	def main(args: Array[String]): Unit = {
		// declaration of String variable firstName
		var firstName: String = "Kuldeep";
		// similar declaration as the above String; however, we don't explicitly
		// specify any type, taking advantage of Scala's type inference mechanism
		var lastName = "Singh";
		// val represents the declaration of a constant;
		// think of it like a shorter form of Java's final modifier
		val greeting: String = "Hello"
		println(greeting + " " + firstName + " " + lastName)
	}
}

Hello World (with Application class)
Instead of declaring a main function as in the example above, we can take advantage of the 'Application' class (trait):

- The Application trait can be used to quickly turn objects into executable programs.
- The drawback of using the Application class is that there is no way to obtain the command-line arguments, because all code in the body of an object extending Application is run as part of the static initialization, which occurs before Application's main method even begins execution.

package com.tk.scala

object MainClass extends Application {
	// declaration of String variable firstName
	var firstName: String = "Kuldeep";
	var lastName = "Singh";
	val greeting: String = "Hello"
	println(greeting + " " + firstName + " " + lastName)
}

Byte code generation
The .scala file is converted into a .class file, as the JVM only understands byte code. This is how the '.class' file looks when decompiled with a Java decompiler. We can see here the singleton class MainClass$; as mentioned above, the 'object' keyword in Scala is equivalent to a singleton class in Java.

package com.tk.scala;

import scala.Predef.;
import scala.collection.mutable.StringBuilder;

// Singleton class
public final class MainClass$ {
	public static final MODULE$;

	static { new (); }

	public void main(String[] args) {
		String firstName = "Kuldeep";
		String lastName = "Singh";
		String greeting = "Hello";
		Predef..MODULE$.println(new StringBuilder().append(greeting).append(" ").append(firstName).append(" ").append(lastName).toString());
	}

	// private constructor of singleton class
	private MainClass$() {
		MODULE$ = this;
	}
}

Encapsulation in Scala
This example demonstrates a simple encapsulation concept in a Scala program: declaration of a private member and definition of its setter and getter.

package com.tk.scala

class Product {
	// declaration of private Int variable price
	private var _price: Int = 0

	// definition of getter method for price:
	// a method named price returning the Int variable _price
	def price = _price
	// the above declaration is the same as
	// def price:Int = {_price};

	// definition of setter method;
	// here the underscore is acting like a reserved keyword
	def price_=(value: Int): Unit = _price = value
}

/**
 * Main object
 */
object AppMainClass {
	def main(args: Array[String]) = {
		var product = new Product();
		// calling setter to set value
		product.price = 12;
		print("Price of product is: ")
		println(product.price)
	}
}

Byte code: this is how the .class file looks when decompiled with a Java decompiler.
public class Product {
	private int _price = 0;

	private int _price() { return this._price; }
	private void _price_$eq(int x$1) { this._price = x$1; }

	public int price() { return _price(); }
	public void price_$eq(int value) { _price_$eq(value); }
}

public final class AppMainClass$ {
	public static final MODULE$;

	static { new (); }

	public void main(String[] args) {
		Product product = new Product();
		product.price_$eq(12);
		Predef..MODULE$.print("Price of product is: ");
		Predef..MODULE$.println(BoxesRunTime.boxToInteger(product.price()));
	}

	private AppMainClass$() {
		MODULE$ = this;
	}
}

General OOP in Scala
This example demonstrates simple OOP concepts in a Scala program:

- Scala doesn't force you to define all public classes individually in files of the same name. Scala actually lets you define as many classes as you want per file (think C++ or Ruby).
- Demonstration of inheritance using Scala.
- Constructor declarations.
- Overriding methods.

package com.tk.scala

/**
 * Abstract class declaration
 */
abstract class Person {
	// constant declaration of type String using keyword 'val'
	val SPACE: String = " "

	/**
	 * These are public variables (Scala elements are public by default).
	 *
	 * Scala supports encapsulation as well, but its syntax is considerably
	 * less verbose than Java's. Effectively, all public variables become
	 * instance properties.
	 */
	var firstName: String = null
	var lastName: String = null

	// abstract method
	def getDetails(): Unit
}

/**
 * Concrete Employee class extending the Person class.
 *
 * Constructor: Employee class with a parameterized constructor with two
 * parameters, firstName and lastName, inherited from the Person class.
 * The constructor isn't a special method syntactically as in Java;
 * it is the body of the class itself.
 */
class Employee(firstName: String, lastName: String) extends Person {
	val profession: String = "Employee";

	// implementation of abstract methods
	def getDetails() = {
		println("Personal Details: " + firstName + SPACE + lastName)
	}

	/**
	 * override is actually a keyword. It is a mandatory method modifier for
	 * any method with a signature which conflicts with another method in a
	 * superclass. Thus overriding in Scala isn't implicit (as in Java, Ruby,
	 * C++, etc.) but explicitly declared.
	 */
	override def toString = "[" + profession + ": firstName=" + firstName + " lastName=" + lastName + "]"
}

/**
 * Concrete Teacher class extending the Person class
 */
class Teacher(firstName: String, lastName: String) extends Person {
	val profession: String = "Teacher";

	def getDetails() = {
		println("Personal Details: " + firstName + SPACE + lastName)
	}

	override def toString = "[" + profession + ": firstName=" + firstName + " lastName=" + lastName + "]"
}

object App {
	def main(args: Array[String]) = {
		var emp = new Employee("Kuldeep", "Singh");
		println(emp);
		var teacher = new Teacher("Kuldeep", "Thinker")
		println(teacher)
	}
}

Collections in Scala
This example demonstrates some collections in Scala:

Scala Lists are quite similar to arrays, which means all the elements of a list have the same type, but there are two important differences: lists are immutable, meaning elements of a list cannot be changed by assignment, and lists represent a linked list whereas arrays are flat.
Scala Sets: There are two kinds of Sets, immutable and mutable. The difference between mutable and immutable objects is that when an object is immutable, the object itself can't be changed. By default, Scala uses the immutable Set. If you want to use the mutable Set, you'll have to import the scala.collection.mutable.Set class explicitly.

Scala Maps: Similar to sets, there are two kinds of maps, immutable and mutable.

Scala Tuples: A Scala tuple combines a fixed number of items together so that they can be passed around as a whole. Unlike an array or list, a tuple can hold objects with different types, but they are also immutable.

package com.tk.scala

/**
 * Class demonstrating lists
 */
class ScalaList {

	/**
	 * Creation of a list of type String
	 */
	val fruit1: List[String] = List("apples", "oranges", "pears")

	/**
	 * All lists can be defined using two fundamental building blocks:
	 * a tail Nil and ::, which is pronounced cons. Nil also represents the empty list.
	 */
	val fruit2 = "apples" :: ("oranges" :: ("pears" :: Nil))
	val empty = Nil

	/**
	 * Operations supported on lists.
	 * As a Scala list is a linked list, these are the usual
	 * linked-list operations.
	 */
	def operations(): Unit = {
		println("Head of fruit : " + fruit1.head)
		println("Tail of fruit : " + fruit1.tail)
		println("Check if fruit is empty : " + fruit1.isEmpty)
		println("Check if empty list is empty : " + empty.isEmpty)
		traverseList
	}

	def traverseList() = {
		val iterator = fruit1.iterator;
		while (iterator.hasNext) {
			print(iterator.next())
			print("->")
		}
		println
	}
}

/**
 * Class demonstrating sets
 */
class ScalaSet {

	// Empty set of integer type
	var emptySet: Set[Int] = Set()
	// Sets of integer type
	var oddIntegerSet: Set[Int] = Set(1, 3, 5, 7)
	var evenIntegerSet: Set[Int] = Set(2, 4, 6, 8)

	def operations(): Unit = {
		println("Head of odd set : " + oddIntegerSet.head)
		println("Tail of odd set : " + oddIntegerSet.tail)
		println("Check if odd set is empty : " + oddIntegerSet.isEmpty)
		println("Check if empty set is empty : " + emptySet.isEmpty)
		// concatenating sets
		var concatenatedSet = oddIntegerSet ++ evenIntegerSet
		println("Concatenated set : " + concatenatedSet)
	}
}

/**
 * Class demonstrating maps
 */
class ScalaMap {

	// Empty hash table whose keys are strings and values are integers.
	/**
	 * Note: While defining an empty map, the type annotation is necessary,
	 * as the system needs to assign a concrete type to the variable.
	 */
	var emptyMap: Map[Char, Int] = Map()

	val colors = Map("red" -> "#FF0000", "azure" -> "#F0FFFF", "peru" -> "#CD853F")

	def operations() = {
		println("Keys in colors : " + colors.keys)
		println("Values in colors : " + colors.values)
		println("Check if colors is empty : " + colors.isEmpty)
		println("Check if emptyMap is empty : " + emptyMap.isEmpty)
		traverseMap
	}

	def traverseMap() {
		println("Printing key value pairs for the map:::")
		// using a for-each loop to traverse the map
		colors.keys.foreach { i =>
			print("Key = " + i)
			println(" Value = " + colors(i))
		}
	}
}

/**
 * Class demonstrating tuples
 */
class ScalaTuple {

	val t = new Tuple3(1, "hello", Console)
	// the same tuple can be declared as
	val t1 = (1, "hello", Console)

	val integerTuple = (4, 3, 2, 1)

	def operations() = {
		println("Print sum of elements in the integer tuple")
		val sum = integerTuple._1 + integerTuple._2 + integerTuple._3 + integerTuple._4
		println("Sum = " + sum)
		traverseTuple
	}

	def traverseTuple {
		println("Printing elements in the tuple:::")
		integerTuple.productIterator.foreach { i =>
			println("Value = " + i)
		}
	}
}

object CollectionMain {
	def main(args: Array[String]) = {
		var scalaList = new ScalaList();
		println("Printing operations on List::::")
		scalaList.operations;

		var scalaSet = new ScalaSet();
		println("");
		println("Printing operations on Set::::")
		scalaSet.operations;

		var scalaMap = new ScalaMap();
		println("");
		println("Printing operations on Map::::")
		scalaMap.operations;

		var scalaTuple = new ScalaTuple();
		println("");
		println("Printing operations on Tuple::::")
		scalaTuple.operations;
	}
}

File operations & Exception handling
Exceptions in Scala: Scala doesn't actually have checked exceptions.
Scala allows you to try/catch any exception in a single block and then perform pattern matching against it using case blocks, as shown below. package com.tk.scala import java.io.IOException import java.io.FileNotFoundException import scala.io.Source import scala.reflect.io.Path import java.io.File import java.io.PrintWriter /** * Scala example to show read and write operations on a file */ class FileOperations { /** * Creates a file and adds content to it. * * This example depicts how we can use the Java API in Scala; * any Java class/API can be used from Scala. */ def writeFile(filePath:String, content:String) = { // using a java File object to create a file var file:File = new File(filePath); if (file.createNewFile()) { println(\u0026#34;New file created\u0026#34;) } else { println(\u0026#34;File already exists\u0026#34;) } // using a java PrintWriter object to add content to the file var writer:PrintWriter = new PrintWriter(file); if (content != null) writer.write(content) writer.close(); } /** * Reads the given file and returns the content * of that file. */ def readFile(filePath: String): String = { // using the Scala IO Source class to read from the file var lines: String = null; // there are no checked exceptions in scala try { lines = scala.io.Source.fromFile(filePath).mkString // catch block for exception handling } catch { case ex: FileNotFoundException =\u0026gt; { println(\u0026#34;Missing file exception\u0026#34;) } case ex: IOException =\u0026gt; { println(\u0026#34;IO Exception\u0026#34;) } } finally { println(\u0026#34;Exiting finally...\u0026#34;) } // returning the content of the file lines } } /** * Main object */ object FileOperationMain extends Application { val filePath = \u0026#34;e:\\\\test.txt\u0026#34; var fileOpts: FileOperations = new FileOperations; // calling the api to create/write the file fileOpts.writeFile(filePath, \u0026#34;File operation example for scala\u0026#34;); // calling the api to read the file println(fileOpts.readFile(filePath)); } Trait A trait encapsulates method and field definitions, which can then be reused by mixing them into classes. Unlike class inheritance, in which each class must inherit from just one superclass, a class can mix in any number of traits. A trait definition looks just like a class definition except that it uses the keyword trait. The following example explains how Scala supports mixin-based composition.
package com.tk.scala /** * Defining a trait. * A trait can have abstract methods * as well as concrete methods. */ trait Swimming { def swim() = println(\u0026#34;I can swim\u0026#34;) } /** * fly trait */ trait Fly { def fly() = println(\u0026#34;I can fly\u0026#34;) } /** * abstract class bird */ abstract class Bird { //abstract method def defineMe: String } /** * class having bird as well as trait properties */ class Penguin extends Bird with Swimming { /** * Shorthand syntax of a function definition. * The following line means the function defineMe * returns the string value \u0026lt;I am a Penguin\u0026gt; */ def defineMe = \u0026#34;I am a Penguin\u0026#34; /** * This piece of code is equivalent to the above definition of the function */ /*def defineMe() = { var str:String = \u0026#34;I am a Penguin\u0026#34; // returning the string value (no return keyword required) str }*/ } /** * class having all the traits */ class Pigeon extends Bird with Swimming with Fly { /** * defining the defineMe function */ def defineMe = \u0026#34;I am a Pigeon\u0026#34; } /** * AppMain object with the main function */ object AppMain extends Application { // the penguin instance has only the Swimming trait var penguin = new Penguin println(penguin.defineMe) penguin.swim // the pigeon instance has all the traits var pigeon = new Pigeon println(pigeon.defineMe) pigeon.fly pigeon.swim } Scala and Java Interoperability This example shows how we can use a Scala program with Java programs and vice versa.\n Scala classes also get compiled to .class files and generate the same kind of bytecode as Java classes, so the interoperability between the two languages is smooth. Java Class\npackage com.tk.java; import com.tk.scala.ScalaLearner; /** * The Class JavaLearner. */ public class JavaLearner { \tprivate String firstName; \tprivate String lastName; \tpublic JavaLearner(String firstName, String lastName) { \tthis.firstName = firstName; \tthis.lastName = lastName; \t} \tpublic void consoleAppender() { \tSystem.out.println(firstName + \u0026#34; \u0026#34; + lastName + \u0026#34; is a java learner\u0026#34;); \t} \tpublic static void main(String[] args) { \tJavaLearner printJava = new JavaLearner(\u0026#34;Kuldeep\u0026#34;, \u0026#34;Singh\u0026#34;); \tprintJava.consoleAppender(); \t// create an instance of the scala class \tScalaLearner printScala = new ScalaLearner(\u0026#34;Dhruv\u0026#34;, \u0026#34;Bansal\u0026#34;); \tprintScala.consoleAppender(); \t} } Scala Class\npackage com.tk.scala class ScalaLearner(firstName: String, lastName: String) { /** * Scala API that prints a message on the console */ def consoleAppender() = println(firstName + \u0026#34; \u0026#34; + lastName + \u0026#34; is a scala learner\u0026#34;) } /** * This object is the java representation of a singleton object; * this piece of code will generate a class file with the name * AppMain having a singleton class AppMain with a public * static void main method */ object AppMain { def main(args: Array[String]): Unit = { var scalaLearner = new ScalaLearner(\u0026#34;Kuldeep\u0026#34;, \u0026#34;Singh\u0026#34;); scalaLearner.consoleAppender; // a class can be imported in the scope of another class only import com.tk.java.JavaLearner // creating an instance of the java class var javaLearner = new JavaLearner(\u0026#34;Dhruv\u0026#34;, \u0026#34;Bansal\u0026#34;); javaLearner.consoleAppender; } } Performance comparison of Scala with Java For the performance measurement, we ran bubble sort on a reverse-ordered array with the same logic in a Scala class and a Java class file.
All measurements were done on an Intel Core i7-3720QM CPU 2.60 GHz using JDK7u7 running on Windows 7 with Service Pack 1, using Eclipse Juno with the Scala plugin mentioned above.\nJava Bubble Sort Implementation Java Program\npackage com.tk.scala; /** * The Class SortAlgo. */ public class SortAlgo { \t/** * Implementation of the bubble sort algorithm. * * @param inputArray * the input array */ \tpublic void bubbleSort(int inputArray[]) { \tint i, j, t = 0; \tint arryLength = inputArray.length; \tfor (i = 0; i \u0026lt; arryLength; i++) { \tfor (j = 1; j \u0026lt; (arryLength - i); j++) { \tif (inputArray[j - 1] \u0026gt; inputArray[j]) { \tt = inputArray[j - 1]; \tinputArray[j - 1] = inputArray[j]; \tinputArray[j] = t; \t} \t} \t} \t} } package com.tk.scala; /** * The Class AppMain. */ public class AppMain { \tpublic static void main(String[] args) { \tlong start = System.currentTimeMillis(); \tconsoleAppender(\u0026#34;Sort algo start time: \u0026#34; + start); \tInteger arraySize = 10000; \tconsoleAppender(\u0026#34;Populating array...\u0026#34;); \tint[] array = new int[arraySize]; \tfor (int i = 0; i \u0026lt; arraySize; i++) { \tarray[i] = arraySize - i; \t} \tconsoleAppender(\u0026#34;Sorting array...\u0026#34;); \tSortAlgo sortAlgo = new SortAlgo(); \tsortAlgo.bubbleSort(array); \tlong end = System.currentTimeMillis(); \tconsoleAppender(\u0026#34;Sort algo end time: \u0026#34; + end); \tconsoleAppender(\u0026#34;Total time taken: \u0026#34; + (end - start)); \t} \tprivate static void consoleAppender(String strValue) { \tSystem.out.println(strValue); \t} } Scala program\npackage com.tk.scala class SortAlgo { // implementation of bubble sort def bubbleSort(a: Array[Int]): Unit = { for (i \u0026lt;- 0 to (a.length - 1)) { for (j \u0026lt;- (i - 1) to 0 by -1) { if (a(j) \u0026gt; a(j + 1)) { val temp = a(j + 1) a(j + 1) = a(j) a(j) = temp } } } } } package com.tk.scala /** * Bubble sort singleton object with the main method */ object BubbleSort { def main(args : Array[String]) : Unit = { // declaring and initializing a local variable for the performance measurement var startTime:Long = java.lang.System.currentTimeMillis(); println(\u0026#34;Sorting start time: \u0026#34; + startTime) val arraySize = 10000; // declaring an Integer array var arry : Array[Int] = new Array[Int](arraySize) println(\u0026#34;Populating array...\u0026#34;) for(i\u0026lt;- 0 to (arraySize - 1)){ arry(i) = arraySize - i; } println(\u0026#34;Sorting array...\u0026#34;) var algo:SortAlgo = new SortAlgo; // calling the sort API algo.bubbleSort(arry) var endTime:Long = java.lang.System.currentTimeMillis(); println(\u0026#34;Sorting end time: \u0026#34; + endTime) println(\u0026#34;Total time taken: \u0026#34; + (endTime - startTime)) } } Comparison See the table below for the performance metrics of the above two programs:\n Operation Size Scala Java Time to populate array 100000 70 ms 3 ms Time to sort array by bubble sort 100000 26187 ms 12104 ms Time to populate array 10000 69 ms 1 ms Time to sort array by bubble sort 10000 339 ms 140 ms
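A caveat on these numbers: single-run timings based on System.currentTimeMillis() are sensitive to JIT warm-up and GC pauses, so they should be read as rough indications rather than precise benchmarks. A minimal sketch of a fairer timing loop, assuming the Java SortAlgo class shown above:

public class BenchSketch {
    public static void main(String[] args) {
        long best = Long.MAX_VALUE;
        // repeat the measurement so the JIT can warm up
        for (int run = 0; run < 5; run++) {
            int[] array = new int[10000];
            for (int i = 0; i < array.length; i++) {
                array[i] = array.length - i; // reverse-ordered input, as in the article
            }
            long start = System.currentTimeMillis();
            new SortAlgo().bubbleSort(array);
            best = Math.min(best, System.currentTimeMillis() - start);
        }
        System.out.println("Best of 5 runs: " + best + " ms");
    }
}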
Conclusion We have learned the basics of Scala and tried benchmarking it against Java. We found that Scala produces less verbose code, but it is slower in certain cases in comparison to Java.\nThis article is also supported by my friend Dhruv Bansal\nReferences Official Website - http://www.scala-lang.org/ http://www.scala-lang.org/node/25 http://www.codecommit.com/blog/scala/roundup-scala-for-java-refugees http://www.tutorialspoint.com/scala/scala_lists.htm http://joelabrahamsson.com/learning-scala-part-seven-traits/ In the next article we will see how to build web applications using Scala.\n","id":194,"tag":"scala ; java ; introduction ; language ; comparison ; technology","title":"Scala - Getting Started - Part 1","type":"post","url":"/post/scala-getting-started-part1/"},{"content":"This article describes how to configure the charset for a JVM.\nWe faced an issue in a Linux environment: a file containing some Swedish characters was not being read correctly by the underlying JVM, since the default charset of the JVM was picked as UTF-8 (which did not match the file\u0026rsquo;s actual encoding, so some Swedish characters were read incorrectly). On the other hand, in a Windows environment it was working fine, since the default charset was picked as Windows-1252 (which does support these characters).\nHere are the steps to solve the issue. ISO-8859-1 is a standard character encoding, generally intended for Western European languages. This charset can be set for the JVM in Linux and Windows environments for Western European languages.\nThe charset the JVM should pick, irrespective of the environment, can be defined on the JVM command line.\nFor example, for a Tomcat server, make the following changes\n Open catalina.sh in your TOMCAT/bin directory and set JAVA_OPTS to change the default charset to ISO-8859-1 using the statement below JAVA_OPTS=\u0026quot;$JAVA_OPTS -Dfile.encoding=ISO-8859-1\u0026quot; and then restart the server.\n","id":195,"tag":"charset ; java ; jvm ; customizations ; technology","title":"Charset configuration in JVM","type":"post","url":"/post/java-charset-configuration/"},{"content":"This article contains a proof of concept for the research done on reading and writing OLE objects in Java. This research has been done using Apache POI. Please go through the references for information about Apache POI.\nThis article covers a POC to fetch OLE objects from RIF-formatted XML. Not just that, we also need to store them in some viewable format.\nFor a given XML in RIF format: parse it, read the embedded OLE objects, and represent them in some viewable format using Java.\nThe XML was given in RIF (Requirement Interchange Format), which is used to exchange requirements along with the associated metadata. The given XML was generated from the tool DOORS by IBM. It embeds the attachments Base64-encoded under the tag \u0026lt;rif-xhtml:object\u0026gt; as follows:\n\u0026lt;rif-xhtml:object name=\u0026#34;2cdb48aa776249cfa74c46ef450dffb9\u0026#34; classid=\u0026#34;00020906-0000-0000-c000000000000046\u0026#34;\u0026gt; 0M8R4KGxGuEAAAAAAAAAAAAAAAAAAAAAPgADAP7/CQAGAAAAAAAAAAAAAAAP AAAAAQAAAAAAAAAAEAAAAgAAAAEAAAD+////AAAAAAAAAAAEAAAA+AAAAPkA AAD6AAAA+wAAAPwAAAD9AAAA/gAAAP8AAAAAAQAAAQEAAAIBAAADAQAA/way .. .. .. \u0026lt;/rif-xhtml:object\u0026gt; This matches well with the following information from the RIF 1.0 standard (p. 28)\n Object embedding in the XHTML name space With RIF, it is also possible to embed arbitrary objects (e.g. pictures) within text. This functionality is based on Microsoft’s clipboard and OLE (Object Linking and Embedding) technology and is realized within the XHTML name space by including the “Object Module”.
The following table gives an overview of the XML element and attributes from the Object Module that are used for embedding objects:\n Element: object. Attributes: classid (URI), data (URI), height (Length), name (CDATA), type (ContentType), width (Length). Minimal Content Model: (PCDATA). Each rif-xhtml:object tag has “name” and “classid” attributes, where classid = “00020906-0000-0000-c000000000000046” is the classid for a Microsoft Word document according to Windows Registry Hacks. So we will only be taking care of the classid “00020906-0000-0000-c000000000000046” while parsing the XML.\nOLE File System Normally, these OLE documents are stored in subdirectories of the OLE filesystem. The exact location of the embedded documents will vary depending on the type of the master document, and the exact directory names will differ each time. To figure out exactly which directory to look in, you will either need to process the appropriate OLE 2 linking entry in the master document, or simply iterate over all the directories in the filesystem.\nAs a general rule, you will find the same OLE 2 entries in the subdirectories as you would\u0026rsquo;ve found at the root of the filesystem were a document not embedded.\nApache POIFS File System POIFS file systems are essentially normal files stored on a Java-compatible platform\u0026rsquo;s native file system. They are typically identified by names ending in a four character extension noting what type of data they contain. For example, a file ending in \u0026ldquo;.xls\u0026rdquo; would likely contain spreadsheet data, and a file ending in \u0026ldquo;.doc\u0026rdquo; would probably contain a word processing document. POIFS file systems are called \u0026ldquo;file systems\u0026rdquo; because they contain multiple embedded files in a manner similar to traditional file systems. Along functional lines, it would be more accurate to call these POIFS archives. For the remainder of this document they are referred to as file systems in order to avoid confusion with the \u0026ldquo;files\u0026rdquo; they contain.\nPOIFS file systems are compatible with those document formats used by a well-known software company\u0026rsquo;s popular office productivity suite and programs outputting compatible data. Because the POIFS file system does not provide compression, encryption or any other worthwhile feature, it\u0026rsquo;s not a good choice unless you require interoperability with these programs.\nThe POIFS file system does not encode the documents themselves. For example, if you had a word processor file with the extension \u0026ldquo;.doc\u0026rdquo;, you would actually have a POIFS file system with a document file archived inside of that file system.\nProof of Concept Prerequisites/Dependencies Eclipse IDE commons-codec-1.7.jar – for base64 encoding/decoding poi-3.8-20120326.jar – Apache POI jar poi-scratchpad-3.8-20120326.jar – Apache POI scratchpad jar which contains the MS Word API. Sample XML in RIF format. The constants file\npackage com.tk.poc; public class Constants { public static final String CLASS_ID_OFFICE = \u0026#34;00020906-0000-0000-c000000000000046\u0026#34;; public static final String ATTR_NAME = \u0026#34;name\u0026#34;; public static final String ATTR_CLASS_ID = \u0026#34;classid\u0026#34;; public static final String TAG_OLE = \u0026#34;rif-xhtml:object\u0026#34;; }
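As a side note, decoding the Base64 payload of one such tag is a one-liner with the commons-codec dependency listed above (a sketch; the encoded string is an illustrative placeholder):

import org.apache.commons.codec.binary.Base64;

public class DecodeSketch {
    public static void main(String[] args) {
        String encodedText = "0M8R4KGx...";            // character data of a rif-xhtml:object tag
        byte[] raw = Base64.decodeBase64(encodedText);  // static helper from commons-codec
        System.out.println(raw.length + " bytes decoded");
    }
}

The Parser below uses exactly this helper when writing the extracted objects to disk.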
Parser Use a SAX parser to parse the given XML and get the content of the tag “rif-xhtml:object”. This will parse the given XML and generate a separate file for each rif-xhtml:object tag having classid “00020906-0000-0000-c000000000000046”. It also decodes the content before writing it to files.\npackage com.tk.poc; ... public class Parser extends DefaultHandler { private StringBuilder stringBuilder; private String name; private List\u0026lt;String\u0026gt; generatedFiles = new ArrayList\u0026lt;String\u0026gt;(10); /** * Parses the xml file and returns the names of the generated files. * * @param filePath * path of the xml file. */ public List\u0026lt;String\u0026gt; parseXML(String filePath) { stringBuilder = new StringBuilder(); SAXParserFactory factory = SAXParserFactory.newInstance(); try { SAXParser parser = factory.newSAXParser(); parser.parse(filePath, this); } catch (ParserConfigurationException e) { System.out.println(\u0026#34;ParserConfig error\u0026#34;); } catch (SAXException e) { System.out.println(\u0026#34;SAXException : xml not well formed\u0026#34;); } catch (IOException e) { System.out.println(\u0026#34;IO error\u0026#34;); } return generatedFiles; } @Override public void startElement(String uri, String localName, String qName, Attributes attributes) throws SAXException { name = null; if (Constants.TAG_OLE.equalsIgnoreCase(qName) \u0026amp;\u0026amp; Constants.CLASS_ID_OFFICE.equals(attributes.getValue(Constants.ATTR_CLASS_ID))) { stringBuilder = new StringBuilder(); name = attributes.getValue(Constants.ATTR_NAME); } } @Override public void characters(char[] ch, int start, int length) throws SAXException { stringBuilder.append(new String(ch, start, length)); } @Override public void endElement(String uri, String localName, String qName) throws SAXException { if (name != null \u0026amp;\u0026amp; Constants.TAG_OLE.equalsIgnoreCase(qName)) { FileOutputStream outputStream = null; try { String filePath = name; outputStream = new FileOutputStream(filePath); byte[] base64 = Base64.decodeBase64(stringBuilder.toString()); outputStream.write(base64); generatedFiles.add(filePath); } catch (FileNotFoundException e) { System.out.println(\u0026#34;File not found : \u0026#34; + name); } catch (IOException e) { e.printStackTrace(); } finally { try { if (outputStream != null) { outputStream.close(); } } catch (IOException e) { e.printStackTrace(); } } } } } Reading OLE Objects Using Apache POI You can reach the OLE attachment content with the following command java -classpath poi-3.8-20120326.jar org.apache.poi.poifs.dev.POIFSDump \u0026lt;filename\u0026gt; It generates the following structure for the OLE object generated from the above XML. You can read/write the OLE object properties. Here is an example to read and print the properties\n...
public class OLEReader { public static void main(String[] args) throws FileNotFoundException, IOException { final String filename = \u0026#34;1797c9be9d034d8f907239afb20d6547\u0026#34;; POIFSReader r = new POIFSReader(); r.registerListener(new MyPOIFSReaderListener()); r.read(new FileInputStream(filename)); } static class MyPOIFSReaderListener implements POIFSReaderListener { public void processPOIFSReaderEvent(POIFSReaderEvent event) { PropertySet ps = null; try { ps = PropertySetFactory.create(event.getStream()); } catch (NoPropertySetStreamException ex) { System.out.println(\u0026#34;No property set stream: \\\u0026#34;\u0026#34; + event.getPath() + event.getName() + \u0026#34;\\\u0026#34;\u0026#34;); return; } catch (Exception ex) { throw new RuntimeException(\u0026#34;Property set stream \\\u0026#34;\u0026#34; + event.getPath() + event.getName() + \u0026#34;\\\u0026#34;: \u0026#34; + ex); } /* Print the name of the property set stream: */ System.out.println(\u0026#34;Property set stream \\\u0026#34;\u0026#34; + event.getPath() + event.getName() + \u0026#34;\\\u0026#34;:\u0026#34;); final long sectionCount = ps.getSectionCount(); System.out.println(\u0026#34; No. of sections: \u0026#34; + sectionCount); /* Print the list of sections: */ List\u0026lt;Section\u0026gt; sections = ps.getSections(); int nr = 0; for (Section section : sections) { System.out.println(\u0026#34; Section \u0026#34; + nr++ + \u0026#34;:\u0026#34;); String s = section.getFormatID().toString(); System.out.println(\u0026#34; Format ID: \u0026#34; + s); /* Print the number of properties in this section. */ int propertyCount = section.getPropertyCount(); System.out.println(\u0026#34; No. of properties: \u0026#34; + propertyCount); /* Print the properties: */ Property[] properties = section.getProperties(); for (int i2 = 0; i2 \u0026lt; properties.length; i2++) { /* Print a single property: */ Property p = properties[i2]; long id = p.getID(); long type = p.getType(); Object value = p.getValue(); System.out.println(\u0026#34; Property ID: \u0026#34; + id + \u0026#34;, type: \u0026#34; + type + \u0026#34;, value: \u0026#34; + value); } } } } } It will print something like below\n No property set stream: \u0026quot;\\0,4,2540,19841Table\u0026quot; No property set stream: \u0026quot;\\0,4,2540,1984_OlePres000\u0026quot; Property set stream \u0026quot;\\0,4,2540,1984_SummaryInformation\u0026quot;: No. of sections: 1 Section 0: Format ID: {F29F85E0-4FF9-1068-AB91-08002B27B3D9} No. of properties: 16 Property ID: 1, type: 2, value: 1252 Property ID: 2, type: 30, value: Property ID: 3, type: 30, value: Property ID: 4, type: 30, value: Caspari, Michael (415-Extern) Property ID: 5, type: 30, value: Property ID: 6, type: 30, value: Property ID: 7, type: 30, value: Normal.dotm Property ID: 8, type: 30, value: Caspari, Michael (415-Extern) Property ID: 9, type: 30, value: 2 Property ID: 18, type: 30, value: Microsoft Office Word Property ID: 12, type: 64, value: Wed Sep 19 15:32:00 IST 2012 Property ID: 13, type: 64, value: Wed Sep 19 15:33:00 IST 2012 Property ID: 14, type: 3, value: 1 Property ID: 15, type: 3, value: 6 Property ID: 16, type: 3, value: 38 Property ID: 19, type: 3, value: 0 Property set stream \u0026quot;\\0,4,2540,1984_DocumentSummaryInformation\u0026quot;: No. of sections: 1 Section 0: Format ID: {D5CDD502-2E9C-101B-9397-08002B2CF9AE} No. 
of properties: 12 Property ID: 1, type: 2, value: 1252 Property ID: 15, type: 30, value: ITI/OD Property ID: 5, type: 3, value: 1 Property ID: 6, type: 3, value: 1 Property ID: 17, type: 3, value: 43 Property ID: 23, type: 3, value: 917504 Property ID: 11, type: 11, value: false Property ID: 16, type: 11, value: false Property ID: 19, type: 11, value: false Property ID: 22, type: 11, value: false Property ID: 13, type: 4126, value: [B@19c26f5 Property ID: 12, type: 4108, value: [B@c1b531 No property set stream: \u0026quot;\\0,4,2540,1984WordDocument\u0026quot; No property set stream: \u0026quot;\\0,4,2540,1984_CompObj\u0026quot; No property set stream: \u0026quot;\\0,4,2540,1984_Ole\u0026quot; Writing the OLE Object to a viewable format An OLE object can be identified by its class id and its properties. We noticed that the OLE object present in the stream parsed from the XML is contained inside one folder under the ROOT folder, and that the OLE object is actually an MS Word document. Here is a program which converts the OLE object stream to an MS Word document.\npackage com.tk.poc; ... /** * This class reads the OLE stream and converts it to a Word document. * * @author kuldeep * */ public class OLE2DocConvertor { /** * * @param filePath * @return path of the generated document */ public static String convert(String filePath) { String output = filePath + \u0026#34;.doc\u0026#34;; FileInputStream is = null; FileOutputStream out = null; try { is = new FileInputStream(filePath); POIFSFileSystem fs = new POIFSFileSystem(is); // Assuming there is one folder under root which is an OLE object // such as \u0026#34;/0,1,13204,13811\u0026#34; , \u0026#34;/0,1,16267,7627\u0026#34; DirectoryNode firstFolder = (DirectoryNode) fs.getRoot().getEntries().next(); HWPFDocument embeddedWordDocument = new HWPFDocument(firstFolder); out = new FileOutputStream(output); embeddedWordDocument.write(out); } catch (FileNotFoundException e) { System.out.println(\u0026#34;File paths are not correct: \u0026#34; + e.getMessage()); } catch (IOException e) { System.out.println(\u0026#34;Error in converting ole to doc: \u0026#34; + e.getMessage()); } finally { try { if (is != null) { is.close(); } if (out != null) { out.close(); } } catch (IOException e) { e.printStackTrace(); } } return output; } } Along similar lines you can extract other actual content as well, like images, Excel sheets, or any other type.
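Before converting, it can help to look at what a generated file actually contains; the POIFS directory tree can be walked directly (a sketch using the same Apache POI version; the file name is illustrative):

import java.io.FileInputStream;
import java.util.Iterator;
import org.apache.poi.poifs.filesystem.Entry;
import org.apache.poi.poifs.filesystem.POIFSFileSystem;

public class ListEntries {
    public static void main(String[] args) throws Exception {
        POIFSFileSystem fs = new POIFSFileSystem(
                new FileInputStream("1797c9be9d034d8f907239afb20d6547"));
        // iterate over the entries directly under the ROOT folder
        Iterator<Entry> entries = fs.getRoot().getEntries();
        while (entries.hasNext()) {
            Entry entry = entries.next();
            System.out.println(entry.getName()
                    + (entry.isDirectoryEntry() ? " (directory)" : ""));
        }
    }
}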
 The Main - Here is the executor program which takes the XML (RIF) as input and generates a separate document for each embedded element. package com.tk.poc; import java.util.List; /** * Main class * @author kuldeep * */ public class Main { \tpublic static void main(String[] args) { \tString xmlFileName = \u0026#34;Export_Muster.xml\u0026#34;; \tif(args.length\u0026gt;0){ \txmlFileName = args[0]; \t} \tParser parser = new Parser(); \tSystem.out.println(\u0026#34;Input file : \u0026#34; + xmlFileName); \tList\u0026lt;String\u0026gt; generatedFiles = parser.parseXML(xmlFileName); \tString outputFile = null; \tfor (String fileName : generatedFiles) { \toutputFile = OLE2DocConvertor.convert(fileName); \tSystem.out.println(\u0026#34;Embedded OLE object converted to document : \u0026#34; + outputFile); } \t}\t} Conclusion We were able to parse the RIF XML and separate out the embedded attachments into individual viewable/browsable files.\nReferences http://en.wikipedia.org/wiki/Requirements_Interchange_Format http://poi.apache.org/poifs/how-to.html http://poi.apache.org/poifs/fileformat.html http://msdn.microsoft.com/en-us/library/windows/desktop/ms693383(v=vs.85).aspx http://en.wikibooks.org/wiki/Windows_Registry_Hacks/Alphanumerical_List_of_Microsoft_Windows_Registry_Class_Identifiers http://poi.apache.org/hwpf/index.html ","id":196,"tag":"ole ; java ; apache poi ; embedded ; RIF ; XML ; IBM DOORS ; technology","title":"Working with Embedded OLE Objects in Java","type":"post","url":"/post/java-working-with-ole-objects/"},{"content":"The JavaScript Plugin Framework is implemented to add extra functionality to an existing web-based application without actually changing the application code. This framework is also helpful when some functionality needs to be controlled for different user profiles.\nFor example, to add one extra button to a menu we can write a plugin directly, defining what that button is going to do on click and which events it should handle, in one JavaScript file, rather than making changes in the application code for the same.\nThe Plugin Framework The Plugin Class // Definition of a plugin. Plugin = function () { \tthis.id = \u0026#34;\u0026#34;; \tthis.customData = {}; } Plugin.prototype = { \t/** * Initialize method. */ \tinit : function () { \t\t}, \t/** * Handle event method. * @param eventId * @param payload */ \thandleEvent: function (eventId, payload) { \t\t} } Plugin Registry The plugin registry manages all the plugins and propagates events to the respective plugins.\n/** * Plugins.js * * It stores the plugins. This file defines the registry and exposes an API to register a plugin. */ Plugins = { \t/** * Plugins array ordered for every event. */ \tpluginsData: [], \t/** * Map of Event id and Index in plugin array. */ \tmapEventIndexes: {}, \t\t/** * Register plugin * @param {Plugin Object} - object of plugin. * @param {String} eventIds - comma separated events ids. */ \tregisterPlugin: function(plugin, eventIds){ \tvar eventIdArr = eventIds.split(\u0026#34;,\u0026#34;); \tfor ( var i = 0; i \u0026lt; eventIdArr.length; i++) { \tvar eventId = eventIdArr[i]; \tvar eventIndex = this.mapEventIndexes[eventId]; \t//check if event is already registered. \tif(eventIndex || eventIndex == 0){ \tvar pluginArr = this.pluginsData[eventIndex]; \tpluginArr[pluginArr.length] = plugin; \t} else { \t//Add new entry.
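\t// No plugin has registered for this event id yet: remember which slot
\t// of pluginsData belongs to it in mapEventIndexes, then seed that slot
\t// with a one-element array holding this plugin. (The eval/jQuery.extend
\t// below just build the {eventId : index} entry; a plain
\t// this.mapEventIndexes[eventId] = eventIndex would work as well.)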
\teventIndex = this.pluginsData.length; \tvar value = \u0026#34;{\u0026#34; + eventId +\u0026#34; : \u0026#34; + eventIndex + \u0026#34;}\u0026#34;; \tvalue = eval(\u0026#39;(\u0026#39; +value+ \u0026#39;)\u0026#39;); \tjQuery.extend(this.mapEventIndexes, value); \tthis.pluginsData[eventIndex] = [plugin]; \t} \t} \t}, \t\t/** * Calls the handle event of all plugins that have registered for the event. * @param {String} eventId - event identifier. * @param {Object} payload - event payload */ \thandleEvent : function (eventId, payload){ \tvar eventIndex = this.mapEventIndexes[eventId]; \tvar pluginArr = this.pluginsData[eventIndex]; \tif(pluginArr){ \tjQuery.each( pluginArr, function (index, plugin){ \tplugin.handleEvent(eventId, payload); \t}); \t} \t} }; Extending the existing website Let\u0026rsquo;s extend the existing website using the plugin framework\n Define a plugin.\nimage_rotation.js\n/** * Plugin for the image rotation functionality. */ var imageRotationPlugin = new Plugin(); imageRotationPlugin.id = \u0026#34;Image_Rotation_Plugin\u0026#34;; Define which events the plugin will handle\n/** * Override Plugin.handleEvent * @See Plugins.js */ imageRotationPlugin.handleEvent = function(eventId, payload){ if(eventId == \u0026#34;rotate_image\u0026#34;) { rotateImage(payload.id) // call the method to rotate the image with the given id } }; Initialize the plugin, defining where you want to add this button/functionality.\n /** * Override Plugin.init * @See Plugins.js */ imageRotationPlugin.init = function(){ //Draw the button. var toolset = jQuery(\u0026#39;#menu\u0026#39;); toolset.prepend(\u0026#34;\u0026lt;li class=\u0026#39;imagerotate\u0026#39;\u0026gt;\u0026lt;a id=\u0026#39;src_img\u0026#39; class=\u0026#39;tooltip\u0026#39; title=\u0026#39;Rotate the selected image\u0026#39;\u0026gt;\u0026lt;/a\u0026gt;\u0026lt;/li\u0026gt;\u0026#34;); var imageRoateLink = jQuery(\u0026#39;#menu li.imagerotate a\u0026#39;); //Add the event handler that raises the rotation event. imageRoateLink.click(function(){ Plugins.handleEvent(\u0026#39;rotate_image\u0026#39;, {id:\u0026#34;image_source\u0026#34;}); }); } And, finally register the plugin\njQuery(document).ready(function(){ Plugins.registerPlugin(imageRotationPlugin, \u0026#34;rotate_image\u0026#34;); imageRotationPlugin.init(); }); This plugin adds a menu item to the existing website; without making changes in the main website code, we can manage the Plugins folder and keep extending the functionality.\nWe can call Plugins.handleEvent(\u0026#39;rotate_image\u0026#39;, {id:\u0026quot;image_source\u0026quot;}) from any place in the code, and the plugin will receive the event. We can register our plugin for multiple events, even for the events raised globally or broadcast by other plugins.
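For instance, a second plugin can subscribe to the same event without either plugin knowing about the other (a sketch; the audit plugin is illustrative):

var auditPlugin = new Plugin();
auditPlugin.id = "Audit_Plugin";
auditPlugin.handleEvent = function (eventId, payload) {
    // reacts to the broadcast independently of the rotation plugin
    console.log("audit: " + eventId + " on " + payload.id);
};
Plugins.registerPlugin(auditPlugin, "rotate_image");
// both imageRotationPlugin and auditPlugin now receive this event
Plugins.handleEvent("rotate_image", {id: "image_source"});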
\n","id":197,"tag":"JavaScript ; plugin ; framework ; extendability ; architecture ; technology","title":"JavaScript Plugin Framework","type":"post","url":"/post/javascript-plugin-framework/"},{"content":"We were getting the following error on the production server (Linux, JBoss). We were in the middle of user acceptance testing with the client; many users were accessing the application to simulate various scenarios in a live session.\nThe application was crashing just 2-3 minutes after start with the error below, and no user was able to access the application. We were expected to fix the issue ASAP, as the users were assembled for the web session, all waiting for the server to come up.\nERROR [org.apache.jasper.compiler.Compiler] Compilation error java.io.FileNotFoundException: /home/user/jboss/server/user/work/jboss.web/localhost/user/org/apache/jsp/jsp/common/header_jsp.java (Too many open files) at java.io.FileInputStream.open(Native Method) at java.io.FileInputStream.(FileInputStream.java:106) at java.io.FileInputStream.(FileInputStream.java:66) at org.apache.jasper.compiler.JDTCompiler$1CompilationUnit.getContents(JDTCompiler.java:103) Solving the Chaos! We followed the approach and steps below:\n Recorded the logs.\n Server reboot – this was the quickest option.\n We rebooted the server.\n But this option did not give us much time to analyze the recorded logs; the application crashed again after 5 minutes, as soon as the number of concurrent users increased.\n And once again we noticed the above exception in the logs: “Too many open files”. Actually the exception “FileNotFoundException” was misleading.\n Then we borrowed some time from the users and continued the analysis.\n We had the following points in mind:\n Files/resources were not being closed properly – I was sure that there was no file-handling or resource-handling related code introduced in that release, and the IO handling was proper in previous releases. We had not faced such an issue before; if there were some coding issue, we would have encountered it earlier.\n The next thing that came to our mind was: what is the allowed limit for the number of open files? – In the release, a number of JSP files were introduced, and it could be possible that the compiler or the OS allows only a certain number of open files at a time.\n We started analyzing; the problem looked more related to the platform (OS, hardware \u0026hellip;), as this was the only difference between the QA and production servers. The production server was owned by the client.\n And we found the following:\n Run the following command to know the currently used files:\nls -l /proc/[jbosspid]/fd\n Replace [jbosspid] by the jboss process id; get the pid by: ps -ef | grep java\n[user@system ~]$ ps -ef | grep java user 23096 23094 0 Apr27 ? 00:04:55 java -Xms3072m -Xmx3072m -jar /opt/user/app/deploy/app-server.jar user 31090 29459 0 12:16 pts/2 00:00:00 grep java And locate the process id for the jboss process. For the above it is 23096. Run the next command to get the list of file descriptors currently being used:\n[user@system ~]$ ls -l /proc/23096/fd total 0 lr-x------ 1 remate remate 64 May 1 12:13 0 -\u0026gt; /dev/null l-wx------ 1 remate remate 64 May 1 12:13 1 -\u0026gt; /opt/user/jboss/server/user/log/console.log lr-x------ 1 remate remate 64 May 1 12:13 10 -\u0026gt; /opt/user/jboss/lib/endorsed/jbossws-native-saaj.jar lr-x------ 1 remate remate 64 May 1 12:13 101 -\u0026gt; /usr/share/fonts/liberation/LiberationSerif-Italic.ttf lr-x------ 1 remate remate 64 May 1 12:13 102 -\u0026gt; /usr/share/fonts/dejavu-lgc/DejaVuLGCSans-BoldOblique.ttf lrwx------ 1 remate remate 64 May 1 12:13 105 -\u0026gt; socket:[33496164 .. .. We analyzed the results of the above commands and did not notice any abnormality.\n System Level File Descriptor Limit Run the following command: cat /proc/sys/fs/file-max\n[user@system ~]$ cat /proc/sys/fs/file-max 743813 It was already quite large; we did not find any need to change it. Anyway, it can be changed by adding \u0026lsquo;fs.file-max = 1000000\u0026rsquo; to /etc/sysctl.conf. \u0026ldquo;1000000\u0026rdquo; is the limit you want to set.
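As a quick check while analyzing, the open descriptors of the process can also simply be counted, to see how close it is to the per-process limit discussed next (the pid is the one located above):

[user@system ~]$ ls /proc/23096/fd | wc -l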
User Level File Descriptor Limit Run the following command: ulimit -n [user@system ~]$ ulimit -n 1024 \u0026lsquo;1024\u0026rsquo; file descriptors per process for a user could be a low value for a concurrent JSP application, where a lot of JSPs need to be compiled initially. From our analysis, we found that the number of open FDs was close to the 1024 limit.\nWe changed this limit by appending the following lines to /etc/security/limits.conf\nuser soft nofile 16384 user hard nofile 16384 and then checked again with ulimit -n\n[user@system ~]$ ulimit -n 16384 Looks like we have solved it.\n We requested our client to apply the above-mentioned steps on the production server. After this change we never faced the issue again. We were able to solve the issue in 20 minutes, and in another 10 minutes (taken by the client\u0026rsquo;s IT process) the server was up again; we had a good game play and the UAT went well.\n References: http://eloquence.marxmeier.com/sdb/html/linux_limits.html http://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/ ","id":198,"tag":"jsp ; java ; compilation ; issues ; os ; tuning ; configuration ; technology","title":"Strange JSP Compilation Issue ","type":"post","url":"/post/jsp-compilation-issues/"},{"content":"On one of our production environments, we had an issue in our application related to time, especially the modified time and creation time of entities. It was quite random in nature for some entities.\nWe suspected that there might be an issue in the timezone configuration.\nAnalysing the problem On the production environment, we had limited permissions. We were not allowed to change the system timezone. We noticed that JBoss was automatically picking the system timezone \u0026ldquo;Europe/Berlin\u0026rdquo; (which was configured by the root user), while the timezone for the linux user we had access to showed GMT. The MySQL database was picking the GMT timezone.\nThis mismatch of timezones was creating issues when creating/updating records in the database. The default current date generated from Java came out different from the \u0026ldquo;CURRENT_TIMESTAMP\u0026rdquo; of MySQL.\nHow JVM gets default timezone The JVM gets the default timezone as follows:\n It looks at the environment variable TZ; if it is not set, then the JVM looks for the file /etc/sysconfig/clock and tries to find the ZONE entry. If the ZONE entry is not found, the JVM will compare the contents of /etc/localtime with the contents of every file in /usr/share/zoneinfo recursively; if found, it will assume that timezone.
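The zone the JVM actually resolved can also be verified from application code (a minimal sketch; the class name is illustrative):

import java.util.TimeZone;

public class TzCheck {
    public static void main(String[] args) {
        // prints the resolved default, e.g. Europe/Berlin or GMT
        System.out.println(TimeZone.getDefault().getID());
    }
}

The default can likewise be forced for a single JVM with -Duser.timezone=Europe/Berlin on the command line.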
[admin@localhost ~]$ echo $TZ Europe/Berlin [admin@localhost ~]$ or\n[admin@localhost ~] $ cat /etc/sysconfig/clock ZONE=”Europe/Berlin” UTC=false ARC=false [admin@localhost ~] $ Set timezone in MySQL database Let\u0026rsquo;s set the timezone in the MySQL database.\nIf you find any value configured by the above commands, then you need to set the same value for the MySQL timezone.\nFor example, if you find “Europe/Berlin” as shown in the above example, then the value of the MySQL timezone can be changed as follows:\n Verify the MySQL timezone database:\nIf you find a record as below\nmysql\u0026gt; SELECT * from mysql.time_zone_name t where name = ‘Europe/Berlin’; +----------------------+--------------------+ | Name | Time_zone_id | +----------------------+---------------------+ | Europe/Berlin | 403 | +---------------------+---------------------+ 1 row in set ( 0.00 sec) then go to step 3 to set up the mysql timezone; otherwise go to step 2 to load the mysql timezone database from the operating system\u0026rsquo;s timezone database.\n Load the MySQL timezone database from the operating system\u0026rsquo;s timezone database using the following command:\nmysql_tzinfo_to_sql [time_zone_db_path] | mysql -u[username] -p[password] mysql where [username] and [password] are replaced by the actual values; the username here should be root, which has the mysql database administration access. The [time_zone_db_path] should be replaced with the actual path of the operating system\u0026rsquo;s timezone database (the default path on linux is /usr/share/zoneinfo)\nThis command will ignore some timezones which are not supported by mysql. After running the above command, please verify the mysql timezone in the database again (as shown in step 1). The timezone should be present now. However, if you still do not find any record of your interest in the mysql database, then please contact the system administrator of Linux to install the correct timezones in the operating system\u0026rsquo;s database. If everything works correctly, please move to the next step of setting the MySQL timezone.\n Setting MySQL timezone\na. Login into the system as user root.\nb. Open the file /etc/init.d/mysqld\nc. Locate the following code\n# Pass all the options determined above, to ensure consistent behavior. # In many cases mysqld_safe would arrive at the same conclusions anyway # but we need to be sure. ` /usr/bin/mysqld_safe --datadir=\u0026#34;$datadir\u0026#34; --socket=\u0026#34;$socketfile\u0026#34; \\ --log-error=\u0026#34;$errlogfile\u0026#34; --pid-file=\u0026#34;$mypidfile\u0026#34; \\ /dev/null 2\u0026gt;\u0026amp;1 \u0026amp;` d. Add --timezone=\u0026#34;Europe/Berlin\u0026#34; as follows:\n/usr/bin/mysqld_safe --datadir=\u0026#34;$datadir\u0026#34; --timezone=\u0026#34;Europe/Berlin\u0026#34; --socket=\u0026#34;$socketfile\u0026#34; \\ --log-error=\u0026#34;$errlogfile\u0026#34; --pid-file=\u0026#34;$mypidfile\u0026#34; \\ /dev/null 2\u0026gt;\u0026amp;1 \u0026amp;
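After restarting the server, the effective values can be verified at the MySQL prompt (a quick check, not from the original write-up); with the --timezone option above, @@system_time_zone should reflect the configured zone, while time_zone typically stays SYSTEM:

mysql> SELECT @@system_time_zone, @@global.time_zone, @@session.time_zone;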
More details on how to set up the timezone on linux can be found at Linux Tips and Default Timezone in Java\n ","id":199,"tag":"database ; mysql ; java ; jvm ; timezone ; configuration ; technology","title":"Default Timezone in Java and MySQL","type":"post","url":"/post/java-mysql-timezone-configuration/"},{"content":"We faced an \u0026lsquo;out of space\u0026rsquo; issue on disk even after clearing old data on the database server (MySQL) and temporary data in the application. We noticed that 90% of the space was occupied by the database tables which were used to store very large de-normalized statistical data. When we tracked the database tables, we noticed there was not much data in those tables, but they were still occupying space on disk.\nKnowing the problem After analysis we came to know that MyISAM tables leave holes in their data files when rows are deleted; this leads to fragmentation and requires more space on disk. Run the following command at the MySQL prompt to know the fragmented space in a table:\nSHOW TABLE STATUS and refer to the column Data_free.\nSolving the chaos! Increased the disk space. (This was not a solution, but just a way to delay the problem until the next de-fragmentation.)\n Executed the following command to defragment the tables wherever we faced the issue.\nOPTIMIZE [NO_WRITE_TO_BINLOG | LOCAL] TABLE tbl_name [, tbl_name] ... But this has the following issues\n The above operation is a heavy operation. When this process is in progress, those database tables remain inaccessible. So the above operation needs to be executed in off hours when nobody is using the system. Also this activity needs to be repeated at some regular interval. Solution to the above issues:\n Implemented a Java program which runs the optimization command on the database for the tables provided in a CSV file. Divided the heavy tables into two sets to equalize the total table size per set. Scheduled the Java program to run one set of tables on one weekend and the other set on the next weekend. This way the Java program runs every weekend early in the morning and optimizes one set of tables. In other words, every stats table gets optimized bi-weekly. This approach fixed most of the out of space issues.\nBut we faced more.\n Optimization/defragmentation/repair of a table requires free space on disk. And if any optimization fails in between, then it may corrupt the table, which needs to be repaired by running the REPAIR command.\nWe faced an issue on our production server where a big table (40+ GB data and 10 GB index) got corrupted as the optimization failed on the table, because optimization and table repair use the system\u0026rsquo;s temporary space to rebuild the table indexes, and the temp space (the /tmp folder, which was a mounted drive) on the system was just 9 GB. It led to an out of space issue.\nThe space requirement for a table in the temp folder can be calculated as:\n (largest_key + row_pointer_length) * number_of_rows * 2 For example, with a 10-byte largest key and 4-byte row pointers, a table of 100 million rows needs roughly (10 + 4) * 100,000,000 * 2 = 2.8 GB of temporary space. See http://dev.mysql.com/doc/refman/5.1/en/myisamchk-memory.html for more details\nAlso, to optimize a table, an amount of free space equal to the table size is required on the disk where the table actually exists.\n To fix the issue we increased the temp space by extending the tmp directory as follows onto another drive which had more space:\n Create a folder, say /var/lib/mysql/tmp Give permission to this folder so that mysql can write temporary files. Edit the mysql configuration file /etc/my.cnf and add the following line tmpdir=/tmp/:/var/lib/mysql/tmp/ Start the mysql server.\n Go to the mysql prompt and check the variable tmpdir.\n mysql\u0026gt; show variables like \u0026#34;tmpdir\u0026#34;; +---------------+---------------------------+ | Variable_name | Value | +---------------+---------------------------+ | tmpdir | /tmp/:/var/lib/mysql/tmp/ | +---------------+---------------------------+ 1 row in set (0.00 sec) Also updated the Java program to calculate the available disk size before optimizing a table; if the current state is not suitable for optimizing the table, it skips the optimization for that table and notifies the administrator about the error.
Also updated the Java program to send the status of the optimization by email, which includes the space recovered after optimization, the tables optimized, etc.\n After the above, we had fixed 95% of the issues, but still more: Since optimization is a heavy operation, it was taking 4 hours on average. We encountered swap memory being used on the system even when nobody was using the application. It\u0026rsquo;s true that this is a very memory intensive application, as it needs to process a lot of simulation data in memory. We noticed that after 2-3 months the system started using a lot of swap memory, which led to slow performance. After analysis we came to know that the above Java program did not release the memory for a long time and the garbage collector did not behave correctly for such a long-running program. It also put a lot of load on the CPU.\n The above problem was fixed by implementing a shell script in place of the Java program; now the shell script does all the operations which were earlier done by Java. It works great, and we never faced that out of space issue again for the same set of data.\n References: http://articles.techrepublic.com.com/5100-10878_11-5193721.html, http://dev.mysql.com/doc/refman/5.0/en/myisamchk-repair-options.html http://forums.mysql.com/read.php?34,174262,174262 http://dev.mysql.com/doc/refman/5.0/en/temporary-files.html http://dev.mysql.com/doc/refman/5.1/en/myisamchk-table-info.html ","id":200,"tag":"database ; mysql ; tuning ; optimization ; technology","title":"MySQL Tuning","type":"post","url":"/post/mysql-tuning/"},{"content":"This article lets you start with the rule engine Drools, with a step by step guide and a series of examples. It starts with an overview and then covers detailed examples.\nRule Engine Overview The underlying idea of a rule engine is to externalize the business or application logic. A rule engine can be viewed as a sophisticated interpreter of if-then statements. The if-then statements are the rules. A rule is composed of two parts, a condition and an action: when the condition is met, the action is executed. The if portion contains conditions (such as amount \u0026gt;= $100), and the then portion contains actions (such as offer a 5% discount). The inputs to a rule engine are a collection of rules called a rule execution set, and data objects. The outputs are determined by the inputs and may include the original input data objects with modifications, new data objects, and possible side effects (such as sending an email to the customer).\nRule engines should be used for applications with highly dynamic business logic and for applications that allow end users to author business rules. A rule engine is a great tool for efficient decision making because it can make decisions based on thousands of facts quickly, reliably, and repeatedly.\nRule engines are used in applications to replace and manage some of the business logic. They are best used in applications where the business logic is too dynamic to be managed at the source code level \u0026ndash; that is, where a change in a business policy needs to be immediately reflected in the application. Rules can fit into applications such as the following:\n At the application tier to manage dynamic business logic and the task flow At the presentation layer to customize the page flow and work flow, as well as to construct custom pages based on session state Adopting a rule-based approach for your applications has the following advantages:\n Rules that represent policies are easily communicated and understood.
Rules retain a higher level of independence than conventional programming languages. Rules separate knowledge from its implementation logic. Rules can be changed without changing the source code; thus, there is no need to recompile the application\u0026rsquo;s code. A Standard API for Rule Engines in Java In November, the Java Community approved the final draft of the Java Rule Engine API specification (JSR-94). This new API gives developers a standard way to access and execute rules at runtime. As implementations of this new spec ripen and are brought to the market, programming teams will be able to pull executive logic out of their applications.\nA new generation of management tools will be needed to help executives define and refine the behaviour of their software systems. Instead of rushing changes in application behaviour through the development cycle and hoping it comes out correct on the other end, executives will be able to change the rules, run tests in the staging environment and roll out to production as often as becomes necessary.\nWorking with Drools Rule Engine Drools is a business rule management system (BRMS) with a forward chaining inference based rules engine, using an enhanced implementation of the Rete algorithm.\nDrools supports the JSR-94 standard for its business rule engine and enterprise framework for the construction, maintenance, and enforcement of business policies in an organization, application, or service.\nA Simple Example Create a simple Java Project.\n Add a package com.tk.drool\n Create a class in this package and name it RuleRunner.java\npackage com.tk.drool; import java.io.InputStreamReader; import org.drools.RuleBase; import org.drools.RuleBaseFactory; import org.drools.WorkingMemory; import org.drools.compiler.PackageBuilder; import org.drools.rule.Package; public class RuleRunner { public RuleRunner(){} public void runRules(String[] rules, Object...
facts) throws Exception { RuleBase ruleBase = RuleBaseFactory.newRuleBase(); PackageBuilder builder = new PackageBuilder(); for(String ruleFile : rules) { System.out.println(\u0026#34;Loading file: \u0026#34;+ruleFile); builder.addPackageFromDrl( new InputStreamReader( this.getClass().getResourceAsStream(ruleFile))); } Package pkg = builder.getPackage(); ruleBase.addPackage(pkg); WorkingMemory workingMemory = ruleBase.newStatefulSession(); for(Object fact : facts) { System.out.println(\u0026#34;Inserting fact: \u0026#34;+fact); workingMemory.insert(fact); } workingMemory.fireAllRules(); } public static void simple1() throws Exception { new RuleRunner().runRules( new String[]{\u0026#34;/simple/rule01.drl\u0026#34;}); } public static void main(String[] args) throws Exception { simple1(); } } Create a folder named \u0026#34;simple\u0026#34; and add a rule file containing the rules\npackage simple rule \u0026#34;Rule 01\u0026#34; when eval (1==1) then System.out.println(\u0026#34;Rule 01 Works\u0026#34;); end Create a folder lib and add the following jars downloaded using this link http://www.jboss.org/drools/downloads.html\n drools-api-5.0.1.jar drools-compiler-5.0.1.jar drools-core-5.0.1.jar Download org.eclipse.jdt.core_3.4.4.v_894_R34x.jar from http://www.java2s.com/Code/Jar/o/Downloadorgeclipsejdtcore344v894R34xjar.htm\n Download antlr-3.1.1-runtime.jar from http://www.java2s.com/Code/Jar/ABC/Downloadantlr311runtimejar.htm\n Download mvel2-2.0.7.jar from http://mirrors.ibiblio.org/pub/mirrors/maven2/org/mvel/mvel2/2.0.7/mvel2-2.0.7.jar\n Add all these jars to the build path.\n Run RuleRunner.java as a Java application and you will get the output as shown below.\n This is how you run the rule.
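Beyond a constant eval like the one above, conditions normally match fields of inserted facts. A sketch of what such a rule looks like (the Order fact class is hypothetical and would have to be available on the classpath):

package simple

import com.tk.drool.Order

rule "Discount"
when
    $o : Order( amount >= 100 )   // fires for every matching fact in working memory
then
    System.out.println("Offer 5% discount for: " + $o);
end

This mirrors the if-then discount example from the overview; the next section builds a complete example along these lines.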
A More Complex Example Create a simple Java Project.\n Add a package com.tk.drool\n Create a class in this package and name it RuleRunner.java\npackage com.tk.drool; public class RuleRunner { public RuleRunner(){} public void runRules(String[] rules, Object... facts) throws Exception { RuleBase ruleBase = RuleBaseFactory.newRuleBase(); PackageBuilder builder = new PackageBuilder(); for(String ruleFile : rules) { System.out.println(\u0026#34;Loading file: \u0026#34;+ruleFile); builder.addPackageFromDrl( new InputStreamReader( this.getClass().getResourceAsStream(ruleFile))); } Package pkg = builder.getPackage(); ruleBase.addPackage(pkg); WorkingMemory workingMemory= ruleBase.newStatefulSession(); for(Object fact : facts) { System.out.println(\u0026#34;Inserting fact: \u0026#34;+fact); workingMemory.insert(fact); } workingMemory.fireAllRules(); } public static void simple1() throws Exception { Object[] flows = { new Myflow(new SimpleDate(\u0026#34;01/01/2012\u0026#34;), 300.00), new Myflow(new SimpleDate(\u0026#34;05/01/2012\u0026#34;), 100.00), new Myflow(new SimpleDate(\u0026#34;11/01/2012\u0026#34;), 500.00), new Myflow(new SimpleDate(\u0026#34;07/01/2012\u0026#34;), 800.00), new Myflow(new SimpleDate(\u0026#34;02/01/2012\u0026#34;), 400.00), }; new RuleRunner().runRules(new String[]{\u0026#34;/simple/rule01.drl\u0026#34;}, flows); } public static void main(String[] args) throws Exception { simple1(); } static class SimpleDate extends Date { public SimpleDate(String datestr) throws Exception { SimpleDateFormat format = new SimpleDateFormat(\u0026#34;dd/MM/yyyy\u0026#34;); this.setTime(format.parse(datestr).getTime()); } } } Create a Myflow class in the same package\npackage com.tk.drool; import java.util.Date; public class Myflow { private Date date; private double amount; public Myflow(){} public Myflow(Date date, double amount) { this.date = date; this.amount = amount; } public Date getDate() { return date; } public void setDate(Date date) { this.date = date; } public double getAmount() { return amount; } public void setAmount(double amount) { this.amount = amount; } public String toString() { return \u0026#34;Myflow[date=\u0026#34;+date+\u0026#34;,amount=\u0026#34;+amount+\u0026#34;]\u0026#34;; } } Create rule01.drl containing the rules in the folder named \u0026#34;simple\u0026#34;.\npackage simple import com.tk.drool.*; rule \u0026#34;Rule 01\u0026#34; when $cashflow : Myflow( $date : date, $amount : amount ) not Myflow( date \u0026lt; $date) then System.out.println(\u0026#34;Myflow: \u0026#34;+$date+\u0026#34; :: \u0026#34;+$amount); retract($cashflow); end Run RuleRunner.java as a Java application. It will print the output of all the executed rules.\n Rules UI Editor | Guvnor Drools Guvnor is a centralized repository for Drools Knowledge Bases, with rich web based GUIs, editors, and tools to aid in the management of large numbers of rules. There is an easy way to create, manage and delete rules in Guvnor; consuming these rules in a Java client application, dynamically reading POJO models and firing rules over them is also possible directly from the UI.\nEclipse also has a great UI plugin - the Drools Eclipse GUI editor.\nGuvnor is a business rules manager that allows people to manage rules in a multi-user environment; it is a single point of truth for your business rules, allowing change in a controlled fashion, with user-friendly interfaces.\nGuvnor is the name of the web and network related components for managing rules with drools.
This, combined with the core drools engine and other tools, forms the business rules manager.\n Download the Guvnor web application using the following link - http://download.jboss.org/drools/release/5.3.0.Final/drools-distribution-5.3.0.Final.zip Deploy the downloaded war file stored at the location \\guvnor-distribution-5.3.0.Final\\binaries\\guvnor-5.3.0.Final-tomcat-6.0.war in tomcat 6.0 Run the application using the following address - http://localhost:8080/guvnor-5.3.0.Final-tomcat-6.0/org.drools.guvnor.Guvnor/Guvnor.jsp You can now create rules directly in the UI, and can upload a jar file with fact models that can be used to define rules. Guvnor hosts the repository of rules; you can access the rule repository from the Eclipse Guvnor plugin or directly in your applications.\nReferences http://www.theserverside.com/news/1365158/An-Introduction-to-the-Drools-Project http://www.ibm.com/developerworks/java/library/j-drools/#when http://java.sun.com/developer/technicalArticles/J2SE/JavaRule.html http://en.wikipedia.org/wiki/Drools http://www.packtpub.com/article/guided-rules-with-the-jboss-brms-guvnor http://docs.jboss.org/drools/release/5.3.0.CR1/drools-guvnor-docs/html/ch11.html#d0e3131 http://docs.redhat.com/docs/en-US/JBoss_Enterprise_SOA_Platform/4.3/html/JBoss_Rules_Reference_Guide/chap-JBoss_Rules_Reference_Manual-The_Eclipse_based_Rule_IDE.html ","id":201,"tag":"rule-engine ; java ; introduction ; drools ; technology","title":"Rule Engine - Getting Started with Drools","type":"post","url":"/post/rule-engine-getting-started/"},{"content":"We faced multiple issues while compiling native C/C++ code and using it with JNI in our Java application\n How to write a JNI program. jni_md.h not found. Incompatible data types. On Windows, cygwin1.dll is required when running Java code on the native library. and more. Actually, we were compiling the NLP Solver\u0026rsquo;s native library (IPOPT) on 32-bit Windows and on a 64-bit Linux machine.\nSolving the chaos! Here are the steps we followed to solve the above issues:\n We prepared a POC by following the instructions to write a JNI program.\n Now to compile the C code we need to install a C/C++ compiler for Windows/Linux. There are many compilers available for each OS. We tried the following.\n GNU g++ for Linux. GNU g++ for Windows on Cygwin. While compiling the C/C++ Java native code we got the error \u0026ldquo;jni_md.h\u0026rdquo; not found. jni_md.h is located in the folder /include/win32, so you need to include the win32 folder in the include path of the C compilation.\n-Ic:/JDK5.0/include -Ic:/JDK5.0/include/win32 Similarly we can locate the file on linux.\n After fixing the last error we got a lot of incompatible data type issues. This may be fixed by adding standard parameters to the compiler as follows:\n-O3 -fomit-frame-pointer -pipe -DNDEBUG -pedantic-errors -Wimplicit -Wparentheses -Wreturn-type -Wcast-qual -Wall -Wpointer-arith -Wwrite-strings -Wconversion -Wno-unknown-pragmas -Wl,--add-stdcall-alias | sed -e s\u0026#39;|-pedantic-errors||\u0026#39; Then on Windows we got the cygwin1.dll not found issue. Cygwin dependencies can be removed with the compiler parameter -mno-cygwin.
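For reference, the Java side of such a JNI binding stays small (a minimal sketch; the library and method names are illustrative, not IPOPT's actual interface):

public class NativeSolver {
    static {
        // loads libipoptjni.so (Linux) or ipoptjni.dll (Windows) from java.library.path
        System.loadLibrary("ipoptjni");
    }
    // implemented in the compiled C/C++ library
    public native double solve(double[] coefficients);
}

The matching C header for the native method is generated with the JDK's javah tool.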
Now, to compile the C code we need to install a C/C++ compiler for Windows/Linux. There are many compilers available for each OS. We tried the following:\n GNU g++ for Linux. GNU g++ for Windows on Cygwin. While compiling the C/C++ Java native code we got the error “jni_md.h” not found. jni_md.h is located in the folder /include/win32, so you need to add the win32 folder to the include path of the C compilation.\n-Ic:/JDK5.0/include -Ic:/JDK5.0/include/win32 The file can be located similarly on Linux.\n After fixing the last error we got a lot of incompatible data type issues. These can be fixed by passing standard flags to the compiler as follows:\n-O3 -fomit-frame-pointer -pipe -DNDEBUG -pedantic-errors -Wimplicit -Wparentheses -Wreturn-type -Wcast-qual -Wall -Wpointer-arith -Wwrite-strings -Wconversion -Wno-unknown-pragmas -Wl,--add-stdcall-alias | sed -e s\u0026#39;|-pedantic-errors||\u0026#39; Then on Windows we got the cygwin1.dll not found issue. The Cygwin dependency can be removed with the compiler flag -mno-cygwin.\n We fixed the compilation issues in the NLP solver (IPOPT) native library by making the above changes in its makefile.\n","id":202,"tag":"JNI ; java ; jvm ; c++ ; native ; NLP Solver ; AI ; optimization ; data engineering ; technology","title":"Compiling Java Native C/C++ Code","type":"post","url":"/post/java-jni-cpp/"},{"content":"In the previous article, we learned about ESB and deployed our first OSGi bundle on FuseESB.\nIn this article, we will learn how to build Apache CXF and Apache Camel components and deploy them on FuseESB.\nLet’s understand what these components are:\nApache Camel ™ A powerful open source integration framework based on known Enterprise Integration Patterns with powerful Bean Integration.\nCamel lets you create the Enterprise Integration Patterns to implement routing and mediation rules in either a Java based Domain Specific Language (or Fluent API), via Spring or Blueprint based XML configuration files, or via the Scala DSL. This means you get smart completion of routing rules in your IDE whether in your Java, Scala or XML editor.\nApache CXF Apache CXF is an open source services framework. CXF helps you build and develop services using frontend programming APIs, like JAX-WS and JAX-RS. These services can speak a variety of protocols such as SOAP, XML/HTTP, RESTful HTTP, or CORBA and work over a variety of transports such as HTTP, JMS or JBI.\nOSGI Apache CXF packaging and deployment Create the CXF OSGi Bundle Create the Maven project using the servicemix-cxf-code-first-osgi-bundle archetype. Run the following command.\nmvn archetype:create -DarchetypeGroupId=org.apache.servicemix.tooling -DarchetypeArtifactId=servicemix-cxf-code-first-osgi-bundle -DarchetypeVersion=2011.02.1-fuse-00-08 -DgroupId=com.tk.example.cxf -DartifactId=cxf-example -Dversion=1.0 It will generate the Maven project as folder “cxf-example” in the directory where you ran the above command.\nNote: Make sure you have added the fusesource repository to Maven; refer to http://fusesource.com/docs/esb/4.3.1/esb_deploy_osgi/Build-GenerateMaven.html#Build-GenerateMaven-AddRepos\n Import “cxf-example” into Eclipse as a Maven project\n Select File \u0026gt; Import \u0026gt; Existing Maven Projects Browse the folder “cxf-example” as Root Directory Finish A brief discussion of the main files:\nPerson.java - This is the main web service interface.
The interface is annotated with @WebService to make it a JAX-WS web service.\npackage com.tk.example.cxf; import javax.jws.WebMethod; import javax.jws.WebParam; import javax.jws.WebService; import javax.xml.bind.annotation.XmlSeeAlso; import javax.xml.ws.RequestWrapper; import javax.xml.ws.ResponseWrapper; @WebService @XmlSeeAlso({com.tk.example.cxf.types.ObjectFactory.class}) public interface Person { @RequestWrapper(localName = \u0026#34;GetPerson\u0026#34;, className = \u0026#34;com.tk.example.cxf.types.GetPerson\u0026#34;) @ResponseWrapper(localName = \u0026#34;GetPersonResponse\u0026#34;, className = \u0026#34;com.tk.example.cxf.types.GetPersonResponse\u0026#34;) @WebMethod(operationName = \u0026#34;GetPerson\u0026#34;) public void getPerson( @WebParam(mode = WebParam.Mode.INOUT, name = \u0026#34;personId\u0026#34;) javax.xml.ws.Holder\u0026lt;java.lang.String\u0026gt; personId, @WebParam(mode = WebParam.Mode.OUT, name = \u0026#34;ssn\u0026#34;) javax.xml.ws.Holder\u0026lt;java.lang.String\u0026gt; ssn, @WebParam(mode = WebParam.Mode.OUT, name = \u0026#34;name\u0026#34;) javax.xml.ws.Holder\u0026lt;java.lang.String\u0026gt; name ) throws UnknownPersonFault; } PersonImpl.java – This is the implementation class of the Person interface.\nbeans.xml – Spring configuration file that defines the service endpoint.\n\u0026lt;jaxws:endpoint id=\u0026#34;HTTPEndpoint\u0026#34; implementor=\u0026#34;com.tk.example.cxf.PersonImpl\u0026#34; address=\u0026#34;/PersonServiceCF\u0026#34;/\u0026gt; pom.xml - The Maven build file to build the bundle as a jar file.\nClient.java - to test the web service.\n Run Maven “clean install” on the project.\n mvn clean install OR right click on pom.xml and select Run As \u0026gt; Maven clean and then again Run As \u0026gt; Maven install.\nIt will generate cxf-example-1.0.jar in the target folder.\n You have created the OSGi bundle successfully. Now deploy it in the next steps.\n Deploy the CXF OSGi Bundle in FuseESB Run the following command on the FuseESB console to install the bundle\n osgi:install -s file:\u0026lt;PROJECT_DIR\u0026gt;/target/cxf-example-1.0.jar osgi:install -s file:D:\cxf-camel-osgi\cxf-example\target\cxf-example-1.0.jar or just drop the cxf-example-1.0.jar in the deploy folder.\n Now access the WSDL at the following URL\n http://\u0026lt;hostname\u0026gt;:\u0026lt;port\u0026gt;/cxf/\u0026lt;WSName\u0026gt;?wsdl eg. http://localhost:8181/cxf/PersonServiceCF?wsdl\n Run Client.java after correcting the URL and port.\n Invoking getPerson... getPerson._getPerson_personId=Guillaume getPerson._getPerson_ssn=000-000-0000 getPerson._getPerson_name=Guillaume Let’s try to build our own HelloWorld service. HelloWorldWS.java package com.tk.example.hellowrold; import javax.jws.WebService; /** * HelloWorld Web Service.
* @author kuldeep */ @WebService public interface HelloWorldWS { \t/** * Method to say Hi * @param text * @return Hello text :) */ \tString sayHi(String text); } HelloWorldWSImpl.java package com.tk.example.hellowrold; import javax.jws.WebService; /** * HelloWorld Web Service implementation * @author kuldeep */ @WebService(endpointInterface = \u0026#34;com.tk.example.hellowrold.HelloWorldWS\u0026#34;) public class HelloWorldWSImpl implements HelloWorldWS { \tpublic String sayHi(String text){ \tSystem.out.println(\u0026#34;Inside say Hi\u0026#34;); \treturn \u0026#34;Hello \u0026#34; + text + \u0026#34;:)\u0026#34;;\t\t} } Define the service endpoint in beans.xml \u0026lt;jaxws:endpoint id=\u0026#34;helloworld\u0026#34; implementor=\u0026#34;com.tk.example.hellowrold.HelloWorldWSImpl\u0026#34; address=\u0026#34;/HelloWorld\u0026#34;/\u0026gt; Build the project again and deploy it to FuseESB Open the URL http://localhost:8181/cxf/HelloWorld?wsdl to see the WSDL Let’s create a test class to consume the HelloWorld web service package com.tk.example.hellowrold.test; import org.apache.cxf.jaxws.JaxWsProxyFactoryBean; import com.tk.example.hellowrold.HelloWorldWS; public class WSTest { \tpublic static void main(String[] args) { \tJaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean(); factory.setServiceClass(HelloWorldWS.class); factory.setAddress(\u0026#34;http://localhost:8181/cxf/HelloWorld\u0026#34;); HelloWorldWS client = (HelloWorldWS)factory.create(); String response = client.sayHi(\u0026#34;User\u0026#34;); System.out.println(response); \t} } Run this class. It should print on the console\n Hello User:) OSGI Apache Camel packaging and deployment Create the Camel OSGi Bundle Create the Maven project using the servicemix-camel-osgi-bundle archetype. Run the following command.\nmvn archetype:create -DarchetypeGroupId=org.apache.servicemix.tooling -DarchetypeArtifactId=servicemix-camel-osgi-bundle -DarchetypeVersion=2011.02.1-fuse-00-08 -DgroupId=com.tk.example.camel -DartifactId=camel-example -Dversion=1.0 It will generate the Maven project as folder “camel-example” in the directory where you ran the above command.\nNote: Make sure you have added the fusesource repository to Maven; refer to http://fusesource.com/docs/esb/4.3.1/esb_deploy_osgi/Build-GenerateMaven.html#Build-GenerateMaven-AddRepos\n Import “camel-example” into Eclipse as a Maven project as described in the last section. It should create the following structure.\n A brief discussion of the main files: MyTransform.java - This class acts as a log generator; it will log messages of the form “\u0026gt;\u0026gt;\u0026gt;\u0026gt; MyTransform set body: \u0026lt;date\u0026gt;”\npackage com.tk.example.camel; import java.util.Date; import java.util.logging.Logger; public class MyTransform { private static final transient Logger LOGGER = Logger.getLogger(MyTransform.class.getName()); private boolean verbose = true; private String prefix = \u0026#34;MyTransform\u0026#34;; public Object transform(Object body) { String answer = prefix + \u0026#34; set body: \u0026#34; + new Date(); if (verbose) { System.out.println(\u0026#34;\u0026gt;\u0026gt;\u0026gt;\u0026gt; \u0026#34; + answer); } LOGGER.info(\u0026#34;\u0026gt;\u0026gt;\u0026gt;\u0026gt; \u0026#34; + answer); return answer; } public boolean isVerbose() { return verbose; } public void setVerbose(boolean verbose) { this.verbose = verbose; } public String getPrefix() { return prefix; } public void setPrefix(String prefix) { this.prefix = prefix; } } camel-context.xml – This is a Spring configuration file; it defines the Camel routing.\n\u0026lt;?xml
version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;!-- Generated by Apache ServiceMix Archetype --\u0026gt; \u0026lt;beans xmlns=\u0026#34;http://www.springframework.org/schema/beans\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xmlns:osgi=\u0026#34;http://camel.apache.org/schema/osgi\u0026#34; xmlns:osgix=\u0026#34;http://www.springframework.org/schema/osgi-compendium\u0026#34; xmlns:ctx=\u0026#34;http://www.springframework.org/schema/context\u0026#34; xsi:schemaLocation=\u0026#34; http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd http://camel.apache.org/schema/osgi http://camel.apache.org/schema/osgi/camel-osgi.xsd http://www.springframework.org/schema/osgi-compendium http://www.springframework.org/schema/osgi-compendium/spring-osgi-compendium.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd \u0026#34;\u0026gt; \u0026lt;osgi:camelContext xmlns=\u0026#34;http://camel.apache.org/schema/spring\u0026#34;\u0026gt; \u0026lt;route\u0026gt; \u0026lt;from uri=\u0026#34;timer://myTimer?fixedRate=true\u0026amp;amp;period=2000\u0026#34;/\u0026gt; \u0026lt;bean ref=\u0026#34;myTransform\u0026#34; method=\u0026#34;transform\u0026#34;/\u0026gt; \u0026lt;to uri=\u0026#34;log:ExampleRouter\u0026#34;/\u0026gt; \u0026lt;/route\u0026gt; \u0026lt;/osgi:camelContext\u0026gt; \u0026lt;bean id=\u0026#34;myTransform\u0026#34; class=\u0026#34;com.tk.example.camel.MyTransform\u0026#34;\u0026gt; \u0026lt;property name=\u0026#34;prefix\u0026#34; value=\u0026#34;${prefix}\u0026#34;/\u0026gt; \u0026lt;/bean\u0026gt; \u0026lt;osgix:cm-properties id=\u0026#34;preProps\u0026#34; persistent-id=\u0026#34;com.tk.example.camel\u0026#34;\u0026gt; \u0026lt;prop key=\u0026#34;prefix\u0026#34;\u0026gt;MyTransform\u0026lt;/prop\u0026gt; \u0026lt;/osgix:cm-properties\u0026gt; \u0026lt;ctx:property-placeholder properties-ref=\u0026#34;preProps\u0026#34; /\u0026gt; \u0026lt;/beans\u0026gt; This creates a route between the “timer” and “log” components. Endpoint URIs are defined as follows: scheme:contextPath[?queryOptions], where “scheme” identifies the protocol/component, “contextPath” is the detail that will be interpreted by that component, and “queryOptions” are the parameters. The list of components can be found at http://fusesource.com/docs/esb/4.4.1/camel_component_ref/Intro-ComponentsList.html\nSo in the above example the timer producer triggers an exchange every two seconds, myTransform.transform() sets its body, and the message is propagated to the logger.\npom.xml - The Maven build file to build as a jar file.\n Run Maven “clean install” on the project.\nmvn clean install OR right click on pom.xml and select Run As \u0026gt; Maven clean and then again Run As \u0026gt; Maven install.\nIt will generate “camel-example-1.0.jar” in the “target” folder.\n You have created the Camel OSGi bundle successfully. For comparison, the same route in Camel’s Java DSL is sketched below; then deploy the bundle in the next steps.\n
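This is a minimal sketch of the same timer route written in Camel’s Java DSL, assuming the camel-core 2.x jar on the classpath; the class name is illustrative, and it is not a required part of the bundle since the OSGi deployment picks up the route from camel-context.xml.\npackage com.tk.example.camel; import org.apache.camel.builder.RouteBuilder; public class TimerRoute extends RouteBuilder { @Override public void configure() { // fire every 2 seconds, set the body via MyTransform, then send to the log component from(\u0026#34;timer://myTimer?fixedRate=true\u0026amp;period=2000\u0026#34;) .bean(new MyTransform(), \u0026#34;transform\u0026#34;) .to(\u0026#34;log:ExampleRouter\u0026#34;); } }\n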
Deploy the Camel OSGi Bundle in FuseESB This example uses the camel-core feature (refer to http://fusesource.com/docs/esb/4.4.1/camel_component_ref/Intro-ComponentsList.html ) for the timer and log components. Verify that camel-core is installed on FuseESB.\n Run the following command on the Fuse ESB console.\n$ features:list | grep camel-core [installed] [2.8.0-fuse-01-13] camel-core camel-2.8.0-fuse-01-13 It should list a row on the console with status installed, something like the above.\nIf it does not, then install camel-core by running the following command\n$ features:install camel-core Run the following command on the FuseESB console to install the bundle\n osgi:install -s file:\u0026lt;PROJECT_DIR\u0026gt;/target/camel-example-1.0.jar For example\nosgi:install -s file:D:/cxf-camel-osgi/camel-example/target/camel-example-1.0.jar or just drop the camel-example-1.0.jar in the deploy folder.\n It will start printing logs on the console as well as in the log file\n\u0026gt;\u0026gt;\u0026gt;\u0026gt; MyTransform set body: Thu Jan 19 14:02:28 IST 2012 \u0026gt;\u0026gt;\u0026gt;\u0026gt; MyTransform set body: Thu Jan 19 14:02:30 IST 2012 \u0026gt;\u0026gt;\u0026gt;\u0026gt; MyTransform set body: Thu Jan 19 14:02:32 IST 2012 \u0026gt;\u0026gt;\u0026gt;\u0026gt; MyTransform set body: Thu Jan 19 14:02:34 IST 2012 \u0026gt;\u0026gt;\u0026gt;\u0026gt; MyTransform set body: Thu Jan 19 14:02:36 IST 2012 You can locate the log entries for the “ExampleRouter” category in the log file.\n Exercise 1 – Write messages to files. Let’s try to add another route to the above example which directs the messages to a file. \u0026lt;route\u0026gt; \u0026lt;from uri=\u0026#34;timer://myTimer?fixedRate=true\u0026amp;amp;period=2000\u0026#34;/\u0026gt; \u0026lt;bean ref=\u0026#34;myTransform\u0026#34; method=\u0026#34;transform\u0026#34;/\u0026gt; \u0026lt;to uri=\u0026#34;file:C:/timerLog\u0026#34;/\u0026gt; \u0026lt;/route\u0026gt; Build and install the bundle. It will write each message to a file in the given directory path. Exercise 2 – Inbox/Outbox Let’s write a route which picks files from one folder and puts them in another folder.\n Add the following route\n\u0026lt;route\u0026gt; \u0026lt;from uri=\u0026#34;file:C:/inbox\u0026#34;/\u0026gt; \u0026lt;to uri=\u0026#34;file:C:/outbox\u0026#34;/\u0026gt; \u0026lt;/route\u0026gt; Build and install the bundle Now try to put any file in the inbox folder, c:/inbox; it will be immediately moved to the c:/outbox folder.\n Exercise 3 – Filter Filter the files based on extension and put them in respective folders.\n Add the following route (a Java DSL version of this content-based router is sketched after this exercise)\n\u0026lt;route\u0026gt; \u0026lt;from uri=\u0026#34;file:C:/inbox\u0026#34;/\u0026gt; \u0026lt;choice\u0026gt; \u0026lt;when\u0026gt; \u0026lt;simple\u0026gt;${file:name.ext} == \u0026#39;png\u0026#39; or ${file:name.ext} == \u0026#39;bmp\u0026#39; or ${file:name.ext} == \u0026#39;gif\u0026#39;\u0026lt;/simple\u0026gt; \u0026lt;to uri=\u0026#34;file:C:/outbox/images\u0026#34;/\u0026gt; \u0026lt;/when\u0026gt; \u0026lt;when\u0026gt; \u0026lt;simple\u0026gt;${file:name.ext} == \u0026#39;txt\u0026#39;\u0026lt;/simple\u0026gt; \u0026lt;to uri=\u0026#34;file:C:/outbox/text\u0026#34;/\u0026gt; \u0026lt;/when\u0026gt; \u0026lt;when\u0026gt; \u0026lt;simple\u0026gt;${file:name.ext} == \u0026#39;doc\u0026#39; or ${file:name.ext} == \u0026#39;docx\u0026#39;\u0026lt;/simple\u0026gt; \u0026lt;to uri=\u0026#34;file:C:/outbox/office\u0026#34;/\u0026gt; \u0026lt;/when\u0026gt; \u0026lt;otherwise\u0026gt; \u0026lt;to uri=\u0026#34;file:C:/outbox/other\u0026#34;/\u0026gt; \u0026lt;/otherwise\u0026gt; \u0026lt;/choice\u0026gt; \u0026lt;/route\u0026gt; Build and install the bundle\n Now try to put any file in the inbox folder, c:/inbox; it will be immediately moved to the respective folder.\n
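As referenced above, the same content-based routing can be expressed in Camel’s Java DSL. A minimal sketch assuming camel-core 2.x, shown for just the txt branch plus the default branch:\npackage com.tk.example.camel; import org.apache.camel.builder.RouteBuilder; public class InboxRoute extends RouteBuilder { @Override public void configure() { from(\u0026#34;file:C:/inbox\u0026#34;) .choice() // route text files to the text folder .when(simple(\u0026#34;${file:name.ext} == \u0026#39;txt\u0026#39;\u0026#34;)) .to(\u0026#34;file:C:/outbox/text\u0026#34;) // everything else falls through to the other folder .otherwise() .to(\u0026#34;file:C:/outbox/other\u0026#34;); } }\n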
OSGI Apache Camel and CXF packaging and deployment Integrate Camel and CXF Use the same Eclipse project from the example POC - OSGI Apache CXF packaging and deployment\n Copy the project “cxf-example” and paste it as “camel-cxf-example”\n Add the Camel dependency in pom.xml\n\u0026lt;properties\u0026gt; \u0026lt;camel.version\u0026gt;2.8.0-fuse-00-08\u0026lt;/camel.version\u0026gt; \u0026lt;/properties\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.camel\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;camel-core\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;${camel.version}\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; Edit beans.xml - Use the following schema definition\n\u0026lt;beans xmlns=\u0026#34;http://www.springframework.org/schema/beans\u0026#34; xmlns:aop=\u0026#34;http://www.springframework.org/schema/aop\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xmlns:camel=\u0026#34;http://camel.apache.org/schema/spring\u0026#34; xmlns:util=\u0026#34;http://www.springframework.org/schema/util\u0026#34; xmlns:tx=\u0026#34;http://www.springframework.org/schema/tx\u0026#34; xmlns:amq=\u0026#34;http://activemq.apache.org/schema/core\u0026#34; xmlns:cxf=\u0026#34;http://camel.apache.org/schema/cxf\u0026#34; xmlns:jaxws=\u0026#34;http://cxf.apache.org/jaxws\u0026#34; xsi:schemaLocation=\u0026#34;http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring-2.1.0.xsd http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-2.5.xsd http://camel.apache.org/schema/cxf http://activemq.apache.org/camel/schema/cxf/camel-cxf.xsd http://cxf.apache.org/jaxws http://cxf.apache.org/schemas/jaxws.xsd\u0026#34;\u0026gt; Define the service endpoint as a camel-cxf endpoint instead of jax-ws as follows. Change from\n\u0026lt;jaxws:endpoint id=\u0026#34;helloworld\u0026#34; implementor=\u0026#34;com.tk.example.hellowrold.HelloWorldWSImpl\u0026#34; address=\u0026#34;/HelloWorld\u0026#34;/\u0026gt; to\n\u0026lt;cxf:cxfEndpoint id=\u0026#34;helloworld\u0026#34; address=\u0026#34;/HelloWorld\u0026#34; serviceClass=\u0026#34;com.tk.example.hellowrold.HelloWorldWS\u0026#34; /\u0026gt; Let’s define a Camel route which will direct every web service call to a file.\n\u0026lt;camel:camelContext xmlns=\u0026#34;http://camel.apache.org/schema/spring\u0026#34;\u0026gt; \u0026lt;camel:route\u0026gt; \u0026lt;camel:from uri=\u0026#34;cxf:bean:helloworld?synchronous=true\u0026#34; /\u0026gt; \u0026lt;camel:to uri=\u0026#34;file:C:/outbox/messages\u0026#34;/\u0026gt; \u0026lt;/camel:route\u0026gt; \u0026lt;/camel:camelContext\u0026gt;\t Run the Maven build and deploy the jar in FuseESB (I assume you have learned the deployment by now, so the steps are not repeated)\n It will write whatever message you send to the HelloWorld service to the file and return the same\n This completes the article. You have learned the basics of CXF and Camel, and how to make them work together. Try more complex examples\n","id":203,"tag":"ESB ; java ; Apache Service Mix ; Apache CFX ; FuseESB ; XML ; Apache Camel ; enterprise ; integration ; technology","title":"Using Apache CFX and Apache Camel in ESB","type":"post","url":"/post/esb-introduction-part2/"},{"content":"Calendars are used to store information for various events. The calendar viewer displays the month-date view and, for each day, displays the event/reminder information.\nInvites are events in the form of email.
Once the invite for an event is sent to a user via email, that event is stored in the calendar and can be seen in the day information of the calendar view.\nCalendars Calendar data can be stored in files, and there are standard formats for storing calendars. iCalendar is a computer file format which allows Internet users to send meeting requests and tasks to other Internet users via email, or by sharing files with an extension of .ics. Recipients of the iCalendar data file (with supporting software, such as an email client or calendar application) can respond to the sender easily or counter-propose another meeting date/time.\nCore object iCalendar\u0026rsquo;s design was based on the previous file format vCalendar created by the Internet Mail Consortium (IMC).\nBEGIN:VCALENDAR VERSION:2.0 PRODID:-//hacksw/handcal//NONSGML v1.0//EN BEGIN:VEVENT UID:uid1@example.com DTSTAMP:19970714T170000Z ORGANIZER;CN=John Doe:MAILTO:john.doe@example.com DTSTART:19970714T170000Z DTEND:19970715T035959Z SUMMARY:Bastille Day Party END:VEVENT END:VCALENDAR Other tags: Events (VEVENT), To-do (VTODO), VJOURNAL, VFREEBUSY, etc. Supported Products iCalendar is used and supported by a large number of products, including Google Calendar, Apple iCal, GoDaddy Online Group Calendar, IBM Lotus Notes, Yahoo! Calendar, Evolution (software), KeepandShare, the Lightning extension for Mozilla Thunderbird and SeaMonkey, and Microsoft Outlook.\nJava Solution We can export event information to a .ics file as per the iCalendar format, using the Java file I/O APIs. Similarly, invites/meeting requests can be sent as email using the Java Mail API. Here is the flow diagram.\nImplementation steps for Sending Email Invites Register the calendar MIME type in Java\n// register the text/calendar mime type MimetypesFileTypeMap mimetypes = (MimetypesFileTypeMap)MimetypesFileTypeMap.getDefaultFileTypeMap(); mimetypes.addMimeTypes(\u0026#34;text/calendar ics ICS\u0026#34;); // register the handling of text/calendar mime type MailcapCommandMap mailcap = (MailcapCommandMap) MailcapCommandMap.getDefaultCommandMap(); mailcap.addMailcap(\u0026#34;text/calendar;;x-java-content-handler=com.sun.mail.handlers.text_plain\u0026#34;); Create the message and assign the recipients and sender.\n// create a message MimeMessage msg = new MimeMessage(session); // set the from and to address InternetAddress addressFrom = new InternetAddress(from); msg.setFrom(addressFrom); InternetAddress[] addressTo = new InternetAddress[recipients.length]; for (int i = 0; i \u0026lt; recipients.length; i++) { addressTo[i] = new InternetAddress(recipients[i]); } msg.setRecipients(Message.RecipientType.TO, addressTo); Create an alternative body part.\n// Create an alternative Multipart Multipart multipart = new MimeMultipart(\u0026#34;alternative\u0026#34;); Set the body of the invite.\n// part 1, html text MimeBodyPart descriptionPart = new MimeBodyPart(); String content = \u0026#34;\u0026lt;font size=\u0026#39;\\\u0026#39;2\\\u0026#39;\u0026#39;\u0026gt;\u0026#34; + emailMsgTxt + \u0026#34;\u0026lt;/font\u0026gt;\u0026#34;; descriptionPart.setContent(content, \u0026#34;text/html; charset=utf-8\u0026#34;); multipart.addBodyPart(descriptionPart); Add the calendar part\n// Add part two, the calendar BodyPart calendarPart = buildCalendarPart(); multipart.addBodyPart(calendarPart); The calendar part (buildCalendarPart) can be generated in the following ways.\n Using Plain Text SimpleDateFormat iCalendarDateFormat = new SimpleDateFormat(\u0026#34;yyyyMMdd\u0026#39;T\u0026#39;HHmm\u0026#39;00\u0026#39;\u0026#34;); BodyPart
calendarPart = new MimeBodyPart(); String calendarContent = \u0026#34;BEGIN:VCALENDAR\\n\u0026#34; + \u0026#34;METHOD:REQUEST\\n\u0026#34; + \u0026#34;PRODID: BCP - Meeting\\n\u0026#34; + \u0026#34;VERSION:2.0\\n\u0026#34; + \u0026#34;BEGIN:VEVENT\\n\u0026#34; + \u0026#34;DTSTAMP:\u0026#34; + iCalendarDateFormat.format(currentdate) + \u0026#34;\\n\u0026#34; + \u0026#34;DTSTART:\u0026#34; + iCalendarDateFormat.format(startdatetime) + \u0026#34;\\n\u0026#34; + \u0026#34;DTEND:\u0026#34; + iCalendarDateFormat.format(enddatetime) + \u0026#34;\\n\u0026#34; + \u0026#34;SUMMARY:Test 123\\n\u0026#34; + \u0026#34;UID:324\\n\u0026#34; + \u0026#34;ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE:MAILTO:kuldeep@nagarro.com\\n\u0026#34; + \u0026#34;ORGANIZER:MAILTO:kuldeep@nagarro.com\\n\u0026#34; + \u0026#34;LOCATION:gurgaon\\n\u0026#34; + \u0026#34;DESCRIPTION:learn some stuff\\n\u0026#34; + \u0026#34;SEQUENCE:0\\n\u0026#34; + \u0026#34;PRIORITY:5\\n\u0026#34; + \u0026#34;CLASS:PUBLIC\\n\u0026#34; + \u0026#34;STATUS:CONFIRMED\\n\u0026#34; + \u0026#34;TRANSP:OPAQUE\\n\u0026#34; + \u0026#34;BEGIN:VALARM\\n\u0026#34; + \u0026#34;ACTION:DISPLAY\\n\u0026#34; + \u0026#34;DESCRIPTION:REMINDER\\n\u0026#34; + \u0026#34;TRIGGER;RELATED=START:-PT00H15M00S\\n\u0026#34; + \u0026#34;END:VALARM\\n\u0026#34; + \u0026#34;END:VEVENT\\n\u0026#34; + \u0026#34;END:VCALENDAR\u0026#34;; calendarPart.addHeader(\u0026#34;Content-Class\u0026#34;,\u0026#34;urn:content-classes:calendarmessage\u0026#34;); calendarPart.setContent(calendarContent, \u0026#34;text/calendar;method=REQUEST\u0026#34;); Using the ICal4J Library (refer to the next section for more detail on ICal4J) // Create a TimeZone TimeZoneRegistry registry = TimeZoneRegistryFactory.getInstance().createRegistry(); TimeZone timezone = registry.getTimeZone(\u0026#34;America/Mexico_City\u0026#34;); // Start date is on: January 18, 2012, 09:00 java.util.Calendar startDate = new GregorianCalendar(); startDate.setTimeZone(timezone); startDate.set(java.util.Calendar.MONTH, java.util.Calendar.JANUARY); startDate.set(java.util.Calendar.DAY_OF_MONTH, 18); startDate.set(java.util.Calendar.YEAR, 2012); startDate.set(java.util.Calendar.HOUR_OF_DAY, 9); startDate.set(java.util.Calendar.MINUTE, 0); startDate.set(java.util.Calendar.SECOND, 0); // End date is on: January 18, 2012, 13:00 java.util.Calendar endDate = new GregorianCalendar(); endDate.setTimeZone(timezone); endDate.set(java.util.Calendar.MONTH, java.util.Calendar.JANUARY); endDate.set(java.util.Calendar.DAY_OF_MONTH, 18); endDate.set(java.util.Calendar.YEAR, 2012); endDate.set(java.util.Calendar.HOUR_OF_DAY, 13); endDate.set(java.util.Calendar.MINUTE, 0);\tendDate.set(java.util.Calendar.SECOND, 0); // Create the event String eventName = \u0026#34;Progress Meeting\u0026#34;; DateTime start = new DateTime(startDate.getTime()); DateTime end = new DateTime(endDate.getTime()); VEvent meeting = new VEvent(start, end, eventName); // add attendees.. Attendee dev1 = new Attendee(URI.create(\u0026#34;mailto:dev1@mycompany.com\u0026#34;)); dev1.getParameters().add(Role.REQ_PARTICIPANT); dev1.getParameters().add(new Cn(\u0026#34;Developer 1\u0026#34;)); meeting.getProperties().add(dev1); Attendee dev2 = new Attendee(URI.create(\u0026#34;mailto:dev2@mycompany.com\u0026#34;)); dev2.getParameters().add(Role.OPT_PARTICIPANT); dev2.getParameters().add(new Cn(\u0026#34;Developer 2\u0026#34;)); meeting.getProperties().add(dev2); VAlarm reminder = new VAlarm(new Dur(0, -1, 0, 0)); // repeat reminder four (4) more times every fifteen (15) minutes..
reminder.getProperties().add(new Repeat(4)); reminder.getProperties().add(new Duration(new Dur(0, 0, 15, 0))); // display a message.. reminder.getProperties().add(Action.DISPLAY); reminder.getProperties().add(new Description(\u0026#34;Progress Meeting\u0026#34;)); reminder.getProperties().add(new Trigger(new Dur(0, 0, 15, 0))); Transp transp = new Transp(\u0026#34;OPAQUE\u0026#34;); meeting.getProperties().add(transp); Location loc = new Location(\u0026#34;Gurgaon\u0026#34;); meeting.getProperties().add(loc); meeting.getAlarms().add(reminder); // Create a calendar net.fortuna.ical4j.model.Calendar icsCalendar = new net.fortuna.ical4j.model.Calendar(); icsCalendar.getProperties().add(new ProdId(\u0026#34;-//Events Calendar//iCal4j 1.0//EN\u0026#34;)); icsCalendar.getProperties().add(new Method(\u0026#34;REQUEST\u0026#34;)); icsCalendar.getProperties().add(new Sequence(0)); icsCalendar.getProperties().add(new Priority(5)); icsCalendar.getProperties().add(new Status(\u0026#34;CONFIRMED\u0026#34;)); icsCalendar.getProperties().add(new Uid(\u0026#34;324\u0026#34;)); // Add the event and print icsCalendar.getComponents().add(meeting); String calendarContent = icsCalendar.toString(); calendarPart.addHeader(\u0026#34;Content-Class\u0026#34;,\u0026#34;urn:content-classes:calendarmessage\u0026#34;); calendarPart.setContent(calendarContent, \u0026#34;text/calendar;method=REQUEST\u0026#34;); Put the above multipart content in the message\n// Put the multipart in message msg.setContent(multipart); And finally send the above MIME message as email\nTransport transport = session.getTransport(\u0026#34;smtp\u0026#34;); transport.connect(); transport.sendMessage(msg, msg.getAllRecipients()); transport.close(); Follow the references for more detail on creating the email session and using the Java Mail API; a minimal session sketch is shown below.\n
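For completeness, this is a minimal hypothetical sketch of creating the JavaMail session used by the MimeMessage above; the SMTP host and port are illustrative placeholders, and authentication is omitted.\nimport java.util.Properties; import javax.mail.Session; // hypothetical SMTP server details - replace with your mail server Properties props = new Properties(); props.put(\u0026#34;mail.smtp.host\u0026#34;, \u0026#34;smtp.example.com\u0026#34;); props.put(\u0026#34;mail.smtp.port\u0026#34;, \u0026#34;25\u0026#34;); Session session = Session.getInstance(props);\n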
More on the ICal4J library iCal4j is used for modifying existing iCalendar data or creating new iCalendar data from scratch.\n Parsing an iCalendar file - Support for parsing and building an iCalendar object model is provided by the CalendarParser and ContentHandler interfaces. You can provide your own implementations of either of these, or just use the default implementations as provided by the CalendarBuilder class.\nFileInputStream fin = new FileInputStream(\u0026#34;mycalendar.ics\u0026#34;); CalendarBuilder builder = new CalendarBuilder(); Calendar calendar = builder.build(fin); Creating a new Calendar - Creating a new calendar is quite straightforward, in that all you need to remember is that a Calendar contains a list of Properties and Components. A calendar must contain certain standard properties and at least one component to be valid. You can verify that a calendar is valid via the method Calendar.validate(). All iCal4j objects also override Object.toString(), so you can verify the resulting calendar data via this mechanism.\nCalendar calendar = new Calendar(); calendar.getProperties().add(new ProdId(\u0026#34;-//Ben Fortuna//iCal4j 1.0//EN\u0026#34;)); calendar.getProperties().add(Version.VERSION_2_0); calendar.getProperties().add(CalScale.GREGORIAN); // Add events, etc.. Output BEGIN:VCALENDAR PRODID:-//Ben Fortuna//iCal4j 1.0//EN VERSION:2.0 CALSCALE:GREGORIAN END:VCALENDAR Creating an Event - One of the more commonly used components is a VEvent. To create a VEvent you can either set the date value and properties manually or you can make use of the convenience constructors to initialise standard values.\njava.util.Calendar cal = java.util.Calendar.getInstance(); cal.set(java.util.Calendar.MONTH, java.util.Calendar.DECEMBER); cal.set(java.util.Calendar.DAY_OF_MONTH, 25); VEvent christmas = new VEvent(new Date(cal.getTime()), \u0026#34;Christmas Day\u0026#34;); // initialise as an all-day event.. christmas.getProperties().getProperty(Property.DTSTART).getParameters().add(Value.DATE); Output: BEGIN:VEVENT DTSTAMP:20050222T044240Z DTSTART;VALUE=DATE:20051225 SUMMARY:Christmas Day END:VEVENT Saving an iCalendar File - When saving an iCalendar file iCal4j will automatically validate your calendar object model to ensure it complies with the RFC2445 specification. If you would prefer not to validate your calendar data you can disable the validation by calling CalendarOutputter.setValidating(false).\nFileOutputStream fout = new FileOutputStream(\u0026#34;mycalendar.ics\u0026#34;); CalendarOutputter outputter = new CalendarOutputter(); outputter.output(calendar, fout); References http://en.wikipedia.org/wiki/ICalendar http://en.wikipedia.org/wiki/VCal http://www.imc.org/pdi/vcal-10.txt http://java.sun.com/developer/onlineTraining/JavaMail/contents.html http://wiki.modularity.net.au/ical4j/index.php?title=Main_Page ","id":204,"tag":"microsoft ; java ; exchange ; integration ; mail ; enterprise ; technology","title":"Managing Calendar and Invites in Java","type":"post","url":"/post/java-calendar-and-invite/"},{"content":"This article starts from the basic terms of the ESB world and then provides details of FuseESB with various examples.\nIt contains a step-by-step guide to install FuseESB, and to develop and deploy an OSGi bundle on FuseESB.\nLet’s understand the terminology of the ESB world:\nESB An Enterprise Service Bus (ESB) is a software architecture model used for designing and implementing the interaction and communication between mutually interacting software applications in a Service Oriented Architecture. As a software architecture model for distributed computing, it is a specialty variant of the more general client-server model and promotes a strictly asynchronous, message-oriented design for communication and interaction between applications.\nOSGi The Open Services Gateway initiative (OSGi) framework is a module system and service platform for the Java programming language that implements a complete and dynamic component model.\nApplications or components (coming in the form of bundles for deployment) can be remotely installed, started, stopped, updated and uninstalled without requiring a reboot; management of Java packages/classes is specified in great detail. Application life cycle management (start, stop, install, etc.) is done via APIs that allow for remote downloading of management policies. The service registry allows bundles to detect the addition of new services, or the removal of services, and adapt accordingly.\nMANIFEST.MF file with OSGi Headers:\n Bundle-Name: Defines a human-readable name for this bundle; simply assigns a short name to the bundle. Bundle-SymbolicName: The only required header, this entry specifies a unique identifier for a bundle, based on the reverse domain name convention (also used by Java packages). Bundle-Description: A description of the bundle\u0026rsquo;s functionality. Bundle-ManifestVersion: This little-known header indicates the OSGi specification to use for reading this bundle.
Bundle-Version: Assigns a version number to the bundle. Bundle-ClassPath: The internal class path of the bundle\u0026rsquo;s classes. Bundle-Activator: Indicates the class name to be invoked once a bundle is activated. Export-Package: Expresses what Java packages contained in a bundle will be made available to the outside world. Import-Package: Indicates what Java packages will be required from the outside world, in order to fulfill the dependencies needed in a bundle. Apache KARAF Apache Karaf is a small OSGi based runtime which provides a lightweight container onto which various components and applications can be deployed. Key features of Karaf: hot deployment, dynamic configuration, logging system, provisioning, remote access, security framework, and instance management.\nApache ServiceMix Apache ServiceMix is an enterprise-class open-source distributed enterprise service bus (ESB) and service-oriented architecture (SOA) toolkit, released under the Apache License.\nServiceMix 4 fully supports OSGi and the Java Business Integration (JBI) specification, JSR 208. ServiceMix is lightweight and easily embeddable, has integrated Spring support and can be run at the edge of the network (inside a client or server), as a standalone ESB provider or as a service within another ESB. You can use ServiceMix in Java SE or a Java EE application server.\nServiceMix uses ActiveMQ to provide remoting, clustering, reliability and distributed failover. The basic frameworks used by ServiceMix are Spring and XBean.\nServiceMix is often used with Apache ActiveMQ, Apache Camel and Apache CXF in SOA infrastructure projects.\nEnterprise subscriptions for ServiceMix are available from independent vendors, including FuseSource Corp. The FuseSource team offers an enterprise version of ServiceMix called Fuse ESB that is tested, certified and supported.\nServiceMix Runtime At the core of ServiceMix 4.x is the small and lightweight OSGi based runtime. This provides a small lightweight container onto which various bundles can be deployed.\nThe directory layout of the ServiceMix Runtime is as follows\n servicemix/ bin/ servicemix.sh # shell scripts to boot up SMX servicemix.bat wrapper/ # Java Service Wrapper binaries system/ # the system defined OSGi bundles (i.e. the stuff shipped in the distro) which generally shouldn't be edited by users deploy/ # where bundles should be put to be hot-deployed - in jar or expanded format (the latter is good to let folks edit spring.xml files etc) etc/ # some config files for describing which OSGi container to use; also a text file to be able to describe the maven repos \u0026amp; mvn bundles to deploy data/ # directory used to store transient data (logs, activemq journal, transaction log, embedded DB, generated bundles) logs/ generated-bundles/ activemq/ Apache Service Mix NMR Apache ServiceMix NMR is the core bus of Apache ServiceMix 4. It is built as a set of OSGi bundles and meant to be deployed onto any OSGi runtime, but is mainly built on top of the Apache ServiceMix Kernel. The NMR project is also where the Java Business Integration 1.0 (JBI) specification is implemented.\nWorking with Fuse ESB Fuse ESB is certified, productized and fully supported by the people who wrote the code. Fuse ESB has a pluggable architecture that allows organizations to use their preferred service solutions in their SOA.
Any standard JBI or OSGi-compliant service engine or binding component – including BPEL, XSLT or JMX engines – may be deployed to a Fuse ESB container, and Fuse ESB components may be deployed to other ESBs.\nInstall and Run FuseESB Windows\n Unzip the downloaded zip (from http://fusesource.com/products/enterprise-servicemix/) at C:\ApacheServiceMix; let’s call it FUSE_ESB_HOME. Open a command line console and change directory to FUSE_ESB_HOME. To start the server, run the following command in Windows servicemix.bat Linux\n Download the Fuse ESB binaries for Linux from http://fusesource.com/product_download_now/fuse-esb-apache-servicemix/4-4-1-fuse-01-13/unix\n Extract the archive “apache-servicemix-4.4.1-fuse-01-13.tar.gz” to the installation directory; let’s say it is ‘/usr/local/FuseESB’\nGo to the installation directory ‘/usr/local/FuseESB’ and extract the content:\n$ cd /usr/local/FuseESB $tar -zxvf /root/apache-servicemix-4.4.1-fuse-01-13.tar.gz We will refer to the installation directory “/usr/local/FuseESB/apache-servicemix-4.4.1-fuse-01-13“ as FUSE_ESB_HOME in this document.\n Run Fuse ESB on Linux as follows:\n Go to FUSE_ESB_HOME/bin Run the following command to start the FuseESB console in the background\n ./start Use the following command to connect to the FuseESB server as a client.\n ./client -a 8101 -h localhost -u smx -p smx OSGI WAR packaging and deployment Create an OSGi WAR using eclipse Create Dynamic Web Project\n Open Eclipse Galileo for a Workspace (say D:/Workspaces/ESBRnD) Go to File \u0026gt; New \u0026gt; Project \u0026gt; Web \u0026gt; Dynamic Web Project Name it, say “ESB_Test” Create a JSP file\n Right click on the project and select New \u0026gt; JSP Name it, say “index.jsp” Put some content in the jsp file. \u0026lt;%@ page language=\u0026#34;java\u0026#34; contentType=\u0026#34;text/html\u0026#34; %\u0026gt; \u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html lang=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;meta charset=\u0026#34;UTF-8\u0026#34;\u0026gt; \u0026lt;title\u0026gt;This is first ESB test page.\u0026lt;/title\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;h1\u0026gt;This is first Apache Service Mix Project\u0026lt;/h1\u0026gt; \u0026lt;h2\u0026gt;Welcome to the world of ESB and OSGi Packaging\u0026lt;/h2\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; Create a Bundle Activator\n Add the required OSGi jars to the class path: right click on your project and choose Properties \u0026gt; Java Build Path \u0026gt; Add External JARs Choose the file org.eclipse.osgi_*.jar from the eclipse plugin directory Now create a class for the Bundle Activator: right click on the project and select New \u0026gt; Class Name the class and package, say MyActivator under package com.tk.bundle, implement it from org.osgi.framework.BundleActivator, and click Finish Provide the implementation for the unimplemented methods package com.tk.bundle; import org.osgi.framework.BundleActivator; import org.osgi.framework.BundleContext; public class MyActivator implements BundleActivator { @Override public void start(BundleContext ctx) throws Exception { System.out.println(\u0026#34;MyActivator.start()\u0026#34;); } @Override public void stop(BundleContext ctx) throws Exception { System.out.println(\u0026#34;MyActivator.stop()\u0026#34;); } }
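Beyond logging, an activator typically registers services with the OSGi service registry. Here is a minimal hypothetical sketch (the Runnable service is purely an illustration, not part of the ESB_Test project):\npackage com.tk.bundle; import org.osgi.framework.BundleActivator; import org.osgi.framework.BundleContext; import org.osgi.framework.ServiceRegistration; public class GreeterActivator implements BundleActivator { private ServiceRegistration registration; public void start(BundleContext ctx) throws Exception { // publish a service under the Runnable interface name Runnable greeter = new Runnable() { public void run() { System.out.println(\u0026#34;Hello from OSGi\u0026#34;); } }; registration = ctx.registerService(Runnable.class.getName(), greeter, null); } public void stop(BundleContext ctx) throws Exception { // always clean up the registration when the bundle stops registration.unregister(); } }\nOther bundles can then look the service up from their own BundleContext via getServiceReference(Runnable.class.getName()).\n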
Update the manifest file as per the OSGi bundle headers - provide all bundle properties as described in the OSGi section. /WebContent/META-INF/MANIFEST.MF\nManifest-Version: 1.0 Bundle-Name: ESB_Test Bundle-SymbolicName: ESB_Test Bundle-Description: This is the first OSGi war Bundle-ManifestVersion: 2 Bundle-Version: 1.0.0 Bundle-ClassPath: .,WEB-INF/classes Bundle-Activator: com.tk.bundle.MyActivator Import-Package: org.osgi.framework;version=\u0026#34;1.3.0\u0026#34; Create WAR\n Export the project as a WAR: right click on the project and select Export \u0026gt; WAR Name the war file, say “ESB_Test.war” Create an OSGi WAR using Maven Another way is to create the WAR using Maven.\n Create Maven Web Project\n Open Eclipse Galileo for a Workspace (say D:/Workspaces/ESBRnD) Go to File \u0026gt; New \u0026gt; Project \u0026gt; Web \u0026gt; Maven Project Select “Create Simple Project”, provide the data and Finish Create a JSP file, as described in the last section, in the ‘webapp’ folder.\n Create a web.xml in the ‘webapp’ folder; you can copy the default one generated in the last section.\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;web-app id=\u0026#34;WebApp_ID\u0026#34; version=\u0026#34;2.4\u0026#34; xmlns=\u0026#34;http://java.sun.com/xml/ns/j2ee\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd\u0026#34;\u0026gt; \u0026lt;display-name\u0026gt;ESB_Test\u0026lt;/display-name\u0026gt; \u0026lt;welcome-file-list\u0026gt; \u0026lt;welcome-file\u0026gt;index.html\u0026lt;/welcome-file\u0026gt; \u0026lt;welcome-file\u0026gt;index.htm\u0026lt;/welcome-file\u0026gt; \u0026lt;welcome-file\u0026gt;index.jsp\u0026lt;/welcome-file\u0026gt; \u0026lt;welcome-file\u0026gt;default.html\u0026lt;/welcome-file\u0026gt; \u0026lt;welcome-file\u0026gt;default.htm\u0026lt;/welcome-file\u0026gt; \u0026lt;welcome-file\u0026gt;default.jsp\u0026lt;/welcome-file\u0026gt; \u0026lt;/welcome-file-list\u0026gt; \u0026lt;/web-app\u0026gt; Now update pom.xml as follows - Add the dependency for osgi.core\n\u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.felix\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;org.osgi.core\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; Add the build configuration for the WAR and OSGi bundle.\n\u0026lt;build\u0026gt; \u0026lt;defaultGoal\u0026gt;install\u0026lt;/defaultGoal\u0026gt; \u0026lt;plugins\u0026gt; \u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;org.apache.maven.plugins\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;maven-compiler-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.3.2\u0026lt;/version\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;source\u0026gt;1.6\u0026lt;/source\u0026gt; \u0026lt;target\u0026gt;1.6\u0026lt;/target\u0026gt; \u0026lt;encoding\u0026gt;UTF-8\u0026lt;/encoding\u0026gt; \u0026lt;/configuration\u0026gt; \u0026lt;/plugin\u0026gt; \u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;org.apache.maven.plugins\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;maven-resources-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.4.3\u0026lt;/version\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;encoding\u0026gt;UTF-8\u0026lt;/encoding\u0026gt; \u0026lt;/configuration\u0026gt; \u0026lt;/plugin\u0026gt; \u0026lt;plugin\u0026gt; \u0026lt;artifactId\u0026gt;maven-war-plugin\u0026lt;/artifactId\u0026gt;
\u0026lt;version\u0026gt;2.1-alpha-1\u0026lt;/version\u0026gt; \u0026lt;executions\u0026gt; \u0026lt;execution\u0026gt; \u0026lt;id\u0026gt;default-war\u0026lt;/id\u0026gt; \u0026lt;phase\u0026gt;package\u0026lt;/phase\u0026gt; \u0026lt;goals\u0026gt; \u0026lt;goal\u0026gt;war\u0026lt;/goal\u0026gt; \u0026lt;/goals\u0026gt; \u0026lt;/execution\u0026gt; \u0026lt;/executions\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;archive\u0026gt; \u0026lt;manifestFile\u0026gt;${project.build.outputDirectory}/META-INF/MANIFEST.MF\u0026lt;/manifestFile\u0026gt; \u0026lt;/archive\u0026gt; \u0026lt;/configuration\u0026gt; \u0026lt;/plugin\u0026gt; \u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;org.apache.felix\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;maven-bundle-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.2.0\u0026lt;/version\u0026gt; \u0026lt;extensions\u0026gt;true\u0026lt;/extensions\u0026gt; \u0026lt;executions\u0026gt; \u0026lt;execution\u0026gt; \u0026lt;id\u0026gt;bundle-manifest\u0026lt;/id\u0026gt; \u0026lt;phase\u0026gt;process-classes\u0026lt;/phase\u0026gt; \u0026lt;goals\u0026gt; \u0026lt;goal\u0026gt;manifest\u0026lt;/goal\u0026gt; \u0026lt;/goals\u0026gt; \u0026lt;/execution\u0026gt; \u0026lt;/executions\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;supportedProjectTypes\u0026gt; \u0026lt;supportedProjectType\u0026gt;jar\u0026lt;/supportedProjectType\u0026gt; \u0026lt;supportedProjectType\u0026gt;bundle\u0026lt;/supportedProjectType\u0026gt; \u0026lt;supportedProjectType\u0026gt;war\u0026lt;/supportedProjectType\u0026gt; \u0026lt;/supportedProjectTypes\u0026gt; \u0026lt;instructions\u0026gt; \u0026lt;Bundle-SymbolicName\u0026gt;${project.artifactId}\u0026lt;/Bundle-SymbolicName\u0026gt; \u0026lt;Bundle-Activator\u0026gt;com.tk.bundle.MyActivator\u0026lt;/Bundle-Activator\u0026gt; \u0026lt;Bundle-ClassPath\u0026gt;.,WEB-INF/classes\u0026lt;/Bundle-ClassPath\u0026gt; \u0026lt;Bundle-RequiredExecutionEnvironment\u0026gt;J2SE-1.5\u0026lt;/Bundle-RequiredExecutionEnvironment\u0026gt; \u0026lt;Import-Package\u0026gt;org.osgi.framework;version=\u0026#34;1.3.0\u0026#34;\u0026lt;/Import-Package\u0026gt; \u0026lt;/instructions\u0026gt; \u0026lt;/configuration\u0026gt; \u0026lt;/plugin\u0026gt; \u0026lt;/plugins\u0026gt; \u0026lt;/build\u0026gt; Now execute Maven with the install goal\nmvn clean install or right click on pom.xml and select Run As \u0026gt; Maven Install; it will generate the war file ‘esb_test1-0.0.1-SNAPSHOT.war’ in the target folder.\n Deploy WAR Console Based deployment\n Go to the FuseESB Console Install the WAR file into Apache ServiceMix by running the following command osgi:install war:\u0026lt;File URL\u0026gt; It will give a bundle id (say 221). Run the following command to start the installed bundle\n$ osgi:start \u0026lt;bundle id\u0026gt; $ osgi:install war:file://kuldeeps/test/Esb_Test.war Bundle ID : 221 $ osgi:start 221 MyActivator.start() Manual Deployment (alternative to console based deployment) Just drop the war file in the FUSE_ESB_HOME/deploy directory. It will invoke the Bundle Activator’s start method. See the console log\n After you see the above message, try to access the following URL in a browser\nhttp://localhost:8181/ESB_Test/\nIt will open the web application deployed here.\n You may also try the osgi:update, osgi:uninstall, osgi:stop commands.\nThis completes the basic introduction to ESB.
In the next article we will cover how Apache Camel and CXF based web services are deployed on the ESB.\n","id":205,"tag":"ESB ; java ; Apache Service Mix ; OSGi ; FuseESB ; XML ; Apache Camel ; enterprise ; integration ; technology","title":"Working with Enterprise Service Bus","type":"post","url":"/post/esb-introduction-part1/"},{"content":"MailMerge is a process to create personalized letters and pre-addressed envelopes or mailing labels for mass mailings from a word processing document (template). A template contains placeholders (fields) along with the actual content. The placeholders are filled from the data source, which is typically a spreadsheet or a database having a column for each placeholder.\nThis article talks about the Java approach for generating multiple documents from a single MS Word template.\nA typical mail merge: Microsoft Word generates individual documents, emails or printouts as a result of the mail merge. It does the following:\n Process the template against each record available in the data source and generate content for each record. Generate individual documents or email/print the generated content. Java Solution using MS Word Mail Merge. We can use a 3rd party library, “Aspose.Words for Java”, which is an advanced class library for Java that enables you to perform a great range of document processing tasks directly within your Java applications. We can generate multiple documents from a single template in the same way MS Word does; the output of this process is the documents, one per record in the data source, not the email or print. We can generate output documents in various formats (doc, docx, mhtml, msg etc.), but for sending email or sending these documents to print, we need to develop a custom solution.\nSolution Diagram: In the demo example, we want to generate some invitation letters for the given template.\nWe have the following data and keys in the template\n ContactName | ContactTitle | Address1 | CourseName\nKuldeep Singh | Professional | Address1 | Test\nUser 2 | Technical operator | Test | Ready\nUser 3 | Expert | Regular | Normal\n Here is the Java file which generates one document for each record (MailMerge.java):\npackage com.aspose.words.demos; import java.io.File; import java.net.URI; import java.text.MessageFormat; import com.aspose.words.Document; /** * Demo Class. */ public class MailMerge { \tprivate static String [] fields = {\u0026#34;ContactName\u0026#34;, \t\u0026#34;ContactTitle\u0026#34;, \t\u0026#34;Address1\u0026#34;, \t\u0026#34;CourseName\u0026#34;}; \tprivate static Object[][] valueRows = \t{{\u0026#34;Kuldeep Singh\u0026#34;, \u0026#34;Professional\u0026#34;, \u0026#34;Address1\u0026#34;, \u0026#34;Test\u0026#34;}, \t{\u0026#34;User 2\u0026#34;, \u0026#34;Technical operator\u0026#34;, \u0026#34;Test\u0026#34;, \u0026#34;Read\u0026#34;}, \t{\u0026#34;User 3\u0026#34;, \u0026#34;Expert\u0026#34;, \u0026#34;Regular\u0026#34;, \u0026#34;Normal\u0026#34;}}; \t public static void main(String[] args) throws Exception { //Sample infrastructure. URI exeDir = MailMerge.class.getResource(\u0026#34;\u0026#34;).toURI(); String dataDir = new File(exeDir.resolve(\u0026#34;data\u0026#34;)) + File.separator; produceMultipleDocuments(dataDir, \u0026#34;CourseInvitationDemo.doc\u0026#34;); } public static void produceMultipleDocuments(String dataDir, String srcDoc) throws Exception { // Open the template document. Document doc = new Document(dataDir + srcDoc); // Loop through all records in the data source.
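// Each iteration clones the template, merges one row of values into the fields, and saves a separate output document.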
for (int i = 0; i \u0026lt; valueRows.length; i++) { // Clone the template instead of loading it from disk (for speed). Document dstDoc = (Document)doc.deepClone(true); // Execute mail merge. dstDoc.getMailMerge().execute(fields, valueRows[i]); dstDoc.save(MessageFormat.format(dataDir + \u0026#34;TestFile Out {0}.doc\u0026#34;, i+1)); System.out.println(\u0026#34;Done \u0026#34; + i); } } } It will generate\nTestFile Out 1.doc TestFile Out 2.doc TestFile Out 3.doc\nand so on. “Aspose.Words” can generate these output documents in various formats and streams. We can develop logic to direct the data streams to email or print.\nAspose.Words Aspose.Words for Java is an advanced class library for Java that enables you to perform a great range of document processing tasks directly within your Java applications. Aspose.Words for Java supports the DOC, OOXML, RTF, HTML and OpenDocument formats. With Aspose.Words you can generate, modify, and convert documents without using Microsoft Word.\nReferences http://www.aspose.com/categories/java-components/aspose.words-for-java/default.aspx http://www.javaworld.com/javaworld/jw-10-2000/jw-1020-print.html http://java.sun.com/developer/onlineTraining/JavaMail/contents.html ","id":206,"tag":"microsoft ; java ; mail ; integration ; Aspose ; world ; technology","title":"Mail Merge in Java","type":"post","url":"/post/java-mail-merge/"},{"content":"Last year we migrated one of our applications from Java 5 to Java 6. The Java 6 (OpenJDK) installation was done by the client on a fresh system. After installing our application (under JBoss), we found that the fonts on JFreeChart charts and on the images generated from AWT were not correct.\nWe solved the issue by configuring the fonts at the JVM level. Let’s understand it better.\nHow AWT / JFreeChart renders the fonts AWT relies on native fonts; it uses a default Java font (whichever fits best) if no native font mapping is found. JFreeChart also uses the AWT library to generate chart images.\nJVM Font Configuration The JVM has a font configuration where the mapping to native fonts can be defined.\nThe font configuration exists in [JRE Installation]/lib/fontconfig.properties.src; look for the similar file fontconfig.properties\nIn our application, we noticed an error in the logs saying that the font mapping was defined in the fontconfig.properties.src but the font files did not exist at the mentioned paths.\nWe then installed the missing font library and the font issue was fixed.\n","id":207,"tag":"awt ; java ; jvm ; jfreechart ; fonts ; customizations ; technology","title":"Font configuration in JVM","type":"post","url":"/post/java-font-mapping/"}]