[{"content":"Recently, I was invited to share my three-year Thoughtworks journey at the “Impact Series” by Social Change Group TW, India.\n \u0026ldquo;Striving for positive social change is at the heart of our purpose, culture and work\u0026rdquo; — /thoughtworks\n I believe an impactful change is one that changes you, and any social change starts from within, with our own journey. I shared my change journey and received a number of comments and much feedback. Fellow Thoughtworkers shared their views on how my stories made an impact on them, and I thought of sharing the same with a wider audience. This is a continuation of last year’s journey.\nMy Theory of Change Change is the only constant; whether we like it or not, it takes place. The first step of a change journey is to accept the change. For acceptance, we need to analyse the impact of the change and identify the anchors (things that block your path) and engines (things that move you forward) of the change. Some of the baggage you carry may block your path; read about success bias in my earlier article, change to become more successful.\nThe next step after acceptance is to observe: it helps you navigate through the change and learn on the go. Don’t react to situations too soon; give them some time and observe more. Then start absorbing the things that are aligned: you need to believe in the change, and then become the change. This theory has helped me navigate through my Thoughtworks journey, so continue reading…\nCheck the ground - Believe in the change In my initial days at Thoughtworks (TW), I was asked to meet a lot of people in the organization, and I just observed what people were saying and why they were saying it. You get a lot of advice and opinions: do this, do that. I was checking the ground of TW, and TW was checking mine. It was a huge change for me to absorb and believe in. You will come across a lot of new terms, and a strong focus on social capital.
To believe in TW more, I asked operations for a project where I could start as a developer and see things on the ground. To my surprise, I was allowed to, and was appreciated for my decision. Luckily, I was able to contribute well there (I think so :) — based on feedback, and that\u0026rsquo;s a very important part here).\nWhile observing things on the ground, I absorbed TW practices and culture, and became a TWer. I learned that we own our own journey; we need to define it and travel it the way we want. You can only lead with your influence here, not with your title or designation. Democratic decisions at different levels, autonomy on the ground, leaders working more as enablers, and much more… you may find more on this in my earlier article.\nOpinions Matter — Build your own brand Opinions flow freely here; we hear and talk about them a lot. Everyone has opinions, and you are asked to share yours too. On the floor, you will find people who speak at conferences, share about books, write books themselves, and share their thoughts and more. To match up, I started reading books and got addicted to writing too. I realized that I had mentored and trained so many people in the past but had never shared things to improve my own knowledge and my social brand value. I started writing on Medium, and then set up my personal blog site\n Kuldeep’s Space — the world of learning, sharing, and caring — thinkuldeep.com\n It is now home to 100+ articles on technology, motivation, takeaways and events. I started writing for myself, without worrying too much about the audience and how they would feel, and started getting a very good response from people. Just like social media, reading, writing and sharing can become an addiction too, and a good one to go for.
My articles started getting good traction and have so far been published on Thoughtworks Insights, DZone, LinkedIn Pulse, XR Practices Publication, the German magazine TechTag.de, AnalyticsVidhya Magazine, and DataQuest India.\nSharing thoughts and opinions with others has not only helped me create my brand value at Thoughtworks and outside, but also helped me navigate through another important change, coming next…\nWhat is your WHY? — Bring the purpose ‘Why’ is a very commonly used word here, be it when discussing business with a client, starting a new project, or scaling the organization. People ask why. We even ask, why does Thoughtworks exist? So it has become part of the culture to focus on the purpose of everything we do. This helped me to think: what is my why? Why do I need to share? Whom should I be sharing with?\nI started collaborating more with students and enthusiasts who need guidance, by judging and mentoring them in hackathons and conducting guest lectures and webinars for universities and communities. I built a community of 500 Thoughtworkers and socially connected with thousands of enthusiasts outside Thoughtworks by contributing at Nasscom, The VRAR Association, TechGig, the Great International Developer Summit (GIDS), Google DevFests and more. People ask me how I did it; the answer is that it is the result of the change I described in the last section.\nOne more thing that changed my view of life was getting the chance to mentor Julie at PeriFerry. Thoughtworks does talk about diversity, inclusion and social justice, but experiencing it on the ground is life changing. Social contributions at IndeedFoundation and dreamentor.in also give a purpose to my work. Bring purpose and balance into life; it helps us get through the difficult years that demand change.\nAgility in leadership — Use your strength Recently I read a Forbes article that tells how important it is to train your leaders to drive agile transformation. Thoughtworks understands this well. I am participating in a leadership development program.
It has not only connected me with global leaders in the organization but also helped me change my perspective. I learned about stakeholder mapping, prioritization, delegation, time management, difficult conversations, effective communication and more.\nThe topic of navigating organizational politics was very interesting; it changed my perspective on politics. I have shared a few learnings about coaching — coaching is not what you think. I have learned about my strengths and how I can use them well to expand my leadership skills. Before this, I never thought of the things that come so naturally to me as actual strengths, e.g. adaptability, action orientation, connectedness. It also helped me understand the side effects of my strengths, the blind spots where I need to be a little conscious when applying them.\nAnother highlight was the Personal Brand exercise, which helped me define “Me Now” and “Me Next” and plan around them.\nWith this, I would like to conclude this reflection on a three-year journey with a statement that I saw recently at a hospital while getting vaccinated.\n We can’t change the time, so it’s time to change ourselves.\n This article is originally published at Medium\n","id":0,"tag":"general ; thoughtworks ; impact ; motivational ; life ; change ; reflection ; takeaway","title":"My impactful change stories","type":"post","url":"https://thinkuldeep.com/post/my-impactful-change-stories/"},{"content":"A good consultant has to be a good storyteller. I feel you become a storyteller when you start making your own stories, so that you can explain a point of view in a better way. I started writing stories while practicing storytelling and found that story writing is so powerful; it helps bring our imagination into reality.
Here I am sharing the story of the BlackBox, a character that inspires me the most, and will inspire you too.\nLet’s start the story with my friend Sreedhar, the youngest and a pampered kid from childhood, who grew up in a small town in Uttarakhand (India). He was considered the brightest child in his school, and managed to get into a premier engineering institute of India for graduation.\nSreedhar and I used to enjoy attending English lectures standing, when not thrown out of the class. Picture Source\nThis brought us closer, and he always gave me the pride that my English was better than at least one person’s. The brightest kid of school time was somehow lost in college. He generally kept quiet, and year on year accumulated courses to be cleared. He used to bunk most of the classes but was famous for two things. One was smoking: he had wrapped the walls of his hostel room with stacks of cigarette boxes. The second was black-screen shell programming in Unix.\nHe painted the room black, and most of the time we saw his lips sucking cigarettes and his fingers dancing on the keyboard. Everyone else from the computer branch was called Box (“Dabba” in Hindi), but he got the title BlackBox, always in search of solving the mystery of the college computer networks, especially when our hostel network crashed, just like the black box in an aircraft that keeps a record of all the flight data useful in troubleshooting emergencies.\nThe final year came, and most of us were busy preparing for campus interviews or celebrating selections, while a few still had the courage to go for higher studies. He was in none of these groups.\n He used to tell me, “I do things that add value to me”.\n I was not sure of the value, but I was sure that the number of subjects he had accumulated to clear would keep him busy for the next few years.
The final year passed so quickly; we landed in the corporate world and left the BlackBox in his self-created lost world, a black room full of cigarettes, cables and a PC.\nTwo years later, we got together at convocation; the BlackBox was still there, playing with the cables and the cameras placed in the convocation hall. It was nothing less than magic to see him controlling all the cameras from his room at that time.\n He smiled after looking at my degree and said I would not find him there. He was indicating that he wasn’t there for this piece of paper.\n I did not give it much attention then, and left college as a proud engineer. Years passed, and we all settled into our daily lives, trying to face every problem of life in the name of agility and transformation.\nPicture Source\nI was at one of the biggest conferences in the Java world, in the US, to present how we can automate Java migrations (a hot topic in those days). I was excited for my first talk on such a big platform, but 5 minutes before the start I noticed some hiccups in the network between my standard laptop and an advanced audio/visual system from some BeeBee company.\n Unknowingly I said, “Where is the BlackBox?”, and the fellow next to me gave me a surprised look. I was thinking at that moment about the BlackBox, who used to help us in such situations.\n A minute later, a person wearing a BeeBee T-shirt came to me asking for my consent to apply a patch on my PC from their handheld device, and I had no reason to say no. It was magic: the issue was fixed before I could even raise it. The BeeBee person handed me a card. I noticed the BeeBee logo was everywhere, from the entry, ID cards, stalls and showcases to the exits, and it was all automated. I turned the card over and saw that BeeBee means “BlackBox”.\nDisclaimer: The story is not linked to the Picture Source\nQuickly I finished my talk and started searching on yahoo!
(a popular search engine at that time) about BeeBee, and came to know that it is a company that produces and manages advanced A/V systems for large-scale events. I noticed a person calling me from behind, holding a BeeBee tracker. I turned, and guess what! I found the BlackBox again: the founder of BeeBee, Mr. Sridhar Kodian, holder of 50 patents in the area of network and wireless communication.\n And I understood what he meant when he used to say that he does the things that add value.\n This story is originally published at My Medium Profile\n","id":1,"tag":"story ; motivational ; consulting ; storywriting ; storytelling","title":"The BlackBox!","type":"post","url":"https://thinkuldeep.com/post/story-blackbox/"},{"content":"eXtended Reality, or XR, is an umbrella term for scenarios where AR, VR, and MR devices interact with peripheral devices such as ambient sensors, actuators, 3D printers and internet-enabled smart devices such as home appliances, industrial connected machines, connected vehicles and more. Find more details here\nAt Thoughtworks, we have been talking about the Rise of XR for enterprises, and about evolving human interactions through our lens of looking glasses.\n Evolving interactions - Consumers aren’t just demanding availability and accessibility — they expect interactions to be seamless and richer. Enterprises can deliver this experience through evolving interfaces that blend speech, touch and visuals.\n Recently our marketing specialist Shalini Jagadish interviewed me, and we had an interesting conversation on how evolving human behaviour is adopting XR for enterprises, and on the need for ethical XR. Key excerpts are here.\nWhat exciting technological developments are we seeing in the XR space? How are they going to change the world around us?
It is really exciting to talk about this technology. With advancements in hardware and internet speed, and especially in computer vision technologies, I feel digital transformation is reaching a completely new level.\n “Smartphones are getting smarter, glasses are getting smarter and empowered with computer vision tech that can sense the environment”\n Things like lidar sensors are coming to mobile devices; they can 3D-scan the physical world and replicate it in the digital world in real time. These capable devices can now track your hands, face, lip movements, eyes and whole body, and all this brings new ways of interacting. For example, your hands might be busy working, and you can still interact hands-free with smart devices through head movements, gestures or just by talking to the device.\nXR brings the tech right into the context where it is needed the most. If I need to see a help manual while working in the field, it just brings the digital manual in front of my eyes with almost no interruption to my work. If I need to take help from an expert sitting remotely, I can share what I see in the field with the expert live, and the expert can guide me right in the field using markers and annotations. We can record and replay situations from the physical world at almost no time and cost, and troubleshoot them better.\nWe will soon start believing in the digital virtual world as we do in the real physical world.\nHow are enterprises adopting XR technology? I call traditional digital business unextended digital business, where companies lose around 20–40% of productivity due to old technologies. Enterprises are now focusing on building XR platforms that can smooth the adoption of enterprise XR (e.g. the ThinkReality platform). Enterprises are also working on enabling their existing products and services with XR (e.g. Flipkart’s virtual product trial).
The article below explains the top five things to consider when business leaders embark on their enterprise eXtended Reality journeys.\n Making pandemic-proof workplaces a reality with XR - Enterprise XR is gaining traction by making pandemic-proof workplaces a reality. Here we describe the key considerations of the enterprise XR journey.\n XR is already a proven tech for the entertainment segment, and now it is going beyond entertainment and being adopted faster. For example, in industrial, manufacturing and automation settings, it is being explored for use cases like training, remote collaboration, task-based workflows, compliance, and service and maintenance.\nHealthcare is also using it for training staff, and experimenting with VR therapy, where a patient is introduced to a virtual environment that can help treat anxiety or other psychological problems. Try-before-you-buy and fitment checks are popular XR use cases in the retail and real estate domains. Indoor positioning is another big area where industry is exploring opportunities using XR tech. The online/digital education segment is also growing fast at this time, bringing more realistic interactions with real-looking 3D objects compared to traditional paper pictures; it is helping students understand and memorize concepts faster.\nShould businesses consider investing in this technology in the post-COVID world, where people are still sceptical about going out? We are entering a new normal, where human behaviour toward many things will change. Read more here about the top 5 behaviour changes —\n The future of shopping - eXtended Reality (XR), which includes AR and VR technologies, has been silently disrupting the gaming industry for more…\n We will start believing in the digital world more than before; if technology can ease life, we tend to accept it with its pros and cons. For example, today carpooling with strangers in a private taxi like Uber/OLA is quite common; it wasn’t the case earlier.
Online schools have been running for more than a year now, and we have accepted them as a normal thing. Working from home, or from anywhere, is becoming common in enterprises, and that wasn’t the case two years ago.\nToday, to buy furniture for my home, I don’t feel the need to go to a store, take measurements and check the fitment; rather, I would prefer to try the furniture directly at home with an XR app and trust it as if I were placing the real furniture. This is a much safer option today. The same goes for repairing a home appliance: if I can fix it with the help of a remote expert, I would probably go with that and pay the expert online. If I can teleport a person in 3D in front of me and collaborate in comfort, then I don’t really feel a need to travel. If I can work from home in comfort, then I don’t really feel the need for a permanent physical office. So things are changing, and XR tech is evolving at a fast pace; enterprises need to stay relevant, and it is even more important to include such tech in our digital transformation journeys.\nWhat is Thoughtworks doing in XR? We understand that XR is moving from nicety to necessity; it is a tech that is no longer a hype, and it has moved off Gartner’s hype cycle. We have observed serious use cases among our clients, where they are benefiting from this tech.\nWe see XR adoption as a journey rather than just a few projects. We help our clients define an XR journey, identify the right use cases to make their products and services XR-enabled, integrate them with their existing enterprise services, and bring our clients onto a roadmap for an XR immersive ecosystem.\nLooking at the data we unconsciously provide to these technologies, should we be worried about a robot takeover or mind control with XR? Yes, I understand your concern. It is a very valid concern. XR tech becomes more powerful when it gets integrated with other tech such as IoT, AI, 3D printing, robotics, wearable solutions, 5G, and brain-computer interfaces.
Advancements in other technologies are making XR more advanced. Today’s smart glasses will be tomorrow’s smart contact lenses, and today’s smartphone will be tomorrow’s wearable compute unit charged with body energy. We will be living the tech.\nAs discussed earlier, XR is all about changing belief; XR solutions are designed in such a way that we believe a virtual replica of the physical world to be a real object, and get immersed in it. Just like any other tech, this tech also comes with some side effects. It can produce fake virtual environments and fake situations very easily, and it is mind-changing too. This can easily be misused to brainwash people and lead them to follow unwanted motives. As XR gets more powerful, we need technology to identify what is fake and what is real. In a way, the industry needs to invest in producing standards and practices that keep XR devices aligned with ethical XR, and stop this technology from being misused.\nIt is originally published at XR Practices Medium Publication\n","id":2,"tag":"xr ; ar ; vr ; mr ; enterprise ; covid19 ; thoughtworks ; practice","title":"Enterprise XR to Ethical XR","type":"post","url":"https://thinkuldeep.com/post/enterprise-to-ethical-xr/"},{"content":"This article is originally published at Thoughtworks Insights, co-authored with Aileen Pistorius, Julie Woods-Moss and Anna Bäckström.\nTime was when virtual and augmented reality were regarded as simply gaming technology — the preserve of fun consumer applications, with little serious consideration for the enterprise. But that’s in the past: in the post-COVID world, remote working will be ubiquitous, forcing enterprise leaders to find innovative ways to improve collaboration.
This, it transpires, is the sweet spot for what’s termed enterprise XR (extended reality) — a group of related technologies that enable us to blend the physical and digital worlds.\nThis urgent need to enrich remote working practices has been accompanied by the rapid evolution of XR-enabled devices. The once ugly, heavy and impractical smart glasses and heads-up displays are being superseded by a new generation of sleek devices, which have dissolved the barrier to adoption.\nRemote fatigue COVID-19 has forced us into new behaviours, one of which is working from home, leading to long hours on Zoom or other video conferencing platforms and little to no personal interaction with our colleagues.\nMeetings, workshops, conferences, and webinars are happening in the same virtual — and often monotonous — ways. We’re stuck with a screen in almost the same pose and posture, most of the time sitting down, which may well have longer-term health consequences. “According to the American Psychological Association (APA), remote, and in many parts of the world isolated, online working also has psychological effects: feelings of apathy, anxiety and even depression have been reported since the onset of COVID-19.\nMore specifically, why do we experience virtual meetings as more energy-consuming than meeting our colleagues and clients face-to-face? Psychologists emphasise that this is because of the lack of social and nonverbal cues, which can form as much as 80% of human communication. Combined with social isolation, it is all the more important to focus on mental health alongside physical health”, says Anna Bäckström, Principal of Consumer Science at Fourkind, part of ThoughtWorks, and Adjunct Professor of Social Psychology at the University of Helsinki.\nEven now, with the re-opening of offices around the world, it will be difficult to go back to business as usual while maintaining social distancing norms.
With constraints here to stay for the foreseeable future, it’s time to rethink and shape new ways of working and interacting.\nCan XR help here? XR provides a strong opportunity to extend our world from 2D to 3D and, by adding wearables to the experience, to simulate a ‘real life’ experience. But could this be used in a valuable and productive way in a work environment?\nAt ThoughtWorks, we built an XR incubation center that lets us conduct XR experiments while collaborating on agile development practices like TDD (test-driven development) and CI/CD (continuous integration/continuous delivery) for XR. We have also open-sourced a test automation framework named Arium to fast-track XR application development. All this work with XR tech has helped us understand the space more deeply. Recently our CMO sponsored the incubator to explore how XR could improve teamwork in a virtual environment. This resulted in us building ThoughtArena, an augmented reality-based mobile app that allows users to create a multimedia space with virtual boards in which to collaborate. With this experiment, we wanted to test our hypothesis that XR can improve engagement, enjoyment, creativity and productivity while working remotely.\nWe engaged just under 60 users to test their experience of, and reactions to, the ThoughtArena tool. The testing was split into two parts, a quantitative test and a qualitative one. The quantitative test focused on usability and the experience itself, and the qualitative one focused on the behaviour and expressions of the users while they were using the app.\nUsers found collaborating on a whiteboard in an AR (augmented reality) environment more interactive, enjoying the freedom of being able to move around their space and do this wherever they were: in their living room, office, kitchen or even outside.
It’s common knowledge that changing your environment and moving around improves creativity and productivity, and AR’s flexible usage makes this possible.\nThis improved interaction is driven by the fact that XR tech’s spatial solutions allow users to fully immerse themselves in the virtual environment and interact in a more natural way. In comparison to a video call, or more traditional methods of online training or meetings, which mostly require people to be in a physical environment, the AR equivalent is limited only by the individual\u0026rsquo;s imagination.\nThe user testing showed that the augmented board gave people the sense of a real physical board. Furthermore, users reported an increase in creativity and productivity due to the opportunity to share ideas via images, videos or text notes, and not just sticky notes on a physical whiteboard. Some users videoed their experience while testing the app, as you can see here.\nChallenges of an evolving industry While XR spatial apps may provide a remedy for Zoom fatigue and lead to a healthier and more interactive work style, there are some challenges to be aware of too.\nWe need to be careful while designing such interactions, for several reasons. One is that too much movement while holding or wearing the devices that bring the user into the extended reality may cause physical fatigue, and this could negatively affect the user’s creativity and concentration levels.\nWe must also be aware of the physical space available, in order to minimise the risk that users might fall if they are immersed in a virtual environment and not conscious of the real environment while interacting with virtual objects. With advanced phones with lidar sensors, you can also detect objects in the environment and warn users of obstacles, or guide them forward safely.\nAnd last, but not least, phone-based XR applications, where the user needs to hold a phone in an awkward position for a long duration, could cause strain on hands and arms.
This can be addressed by focusing on HMDs (head-mounted displays) rather than purely phone-based applications. However, these HMDs are still a long way from becoming mass-market ready. We see a very good first attempt in the Oculus Quest; however, it is just the first of its new generation, and older versions still lack precision. We look forward to seeing more progress in this space.\nXR tech is evolving at such a rate that the untapped potential boggles the mind. Standards and practices are growing from the learnings and experiments the industry is undertaking, but many standards and practices still need to be defined for real immersion.\nWhat\u0026rsquo;s next and where to start? Hybrid and home working is likely to be one of the structural changes that persist well beyond 2020 (Forrester, Predictions 2021: Remote Work, Automation, And HR Tech Will Flourish). Technology will have a role in enabling this mode of work beyond the video conferencing experiences of today. From our study, whilst some challenges were uncovered, AR technologies will no doubt have a value-creating role going forward. If you or your company decides to experiment with XR as a potential solution to the changing customer experience and expectations you face now, start small.\nStart by creating awareness. Experience the devices, and set up studio environments in the organization to give people the opportunity to learn via incubated practices or hackathons. By doing this, you will drive innovation and generate ideas and MVPs (minimum viable products) for products and solutions that are customer-first; after all, XR is all about the user experience. From there, invest in capability building and in up-skilling your people in the sales/pre-sales area.
In an evolving digital landscape, XR technology will play an ever-increasing role, so the time to begin your exploration is now.\nThis article is originally published at Thoughtworks Insights, co-authored with Aileen Pistorius, Julie Woods-Moss and Anna Bäckström.\n","id":3,"tag":"xr ; ar ; vr ; mr ; thoughtworks ; technology ; covid19 ; learnings ; futureofwork","title":"Navigating the new normal with augmented reality","type":"post","url":"https://thinkuldeep.com/post/navigating-new-normal-xr/"},{"content":"The IMF called the COVID-19 crisis ‘unlike any other,’ estimating the global growth contraction for 2020 at -3.5 percent. Image source: Using Lenovo ThinkReality A6 Glass at Workplace\nHere is a quick overview of the biggest challenges that CXOs (and their businesses) have faced over the recent past, to better illustrate how the pandemic is affecting businesses:\n Businesses are grappling with slowing global supply chains Sales and production growth is further impacted, with a subsequent dive in opportunities for partnerships and growth Flattened or negative market growth results in less manpower and fewer resources to work with, further affecting productivity The five biggest challenges that CXOs are facing today\nOn the other hand, consumer behaviour has irrevocably changed — the way people shop, consume education, travel etc.
In trying to meet these challenges, leaders are crafting comprehensive business continuity plans that include what’s known as ‘pandemic-proof infrastructure.’\nPandemic- or crisis-proof infrastructure is characterized by contactless, hygienic, location-agnostic, remote and/or virtual working environments.\nSuch a setup will help business leaders:\n Adapt to and upgrade work environments to better suit the new normal Onboard new and old employees into a new work environment Better understand and work with quickly evolving business and customer priorities Given the scenario described above, we clearly see the potential that emerging tech like eXtended Reality (XR) has to help enterprises crisis-proof their workspaces.\nEnterprise XR (EXR) goes mainstream XR tech has progressed beyond Gartner’s hype cycle and the confines of the consumer (gaming) industry to become a mature technology ready for business consumption.\n According to Digi-Capital - “Enterprise VR active users are beginning to reach critical mass in the training vertical\u0026hellip;Enterprise VR training company, Strivr partnered with Walmart to roll out 17,000 Oculus Go headsets loaded with its software to 4,700 stores and 1 million employees, as well as Verizon using it to train 22,000 employees across the US.”\n And when we look at reports like IDC’s prediction of global AR/VR spending reaching $136.9 billion by 2024, or news of Microsoft winning a U.S. Army contract for augmented reality headsets worth up to $21.9 billion, it’s evident that (public and/or private) enterprise-wide adoption of XR is on the rise.\nSo, can XR really crisis-proof workplaces? XR tech can simulate physical environments in immersive, virtual ones, thus creating a life-like experience. This helps entire workforces collaborate across remote locations, building on the ‘work from anywhere’ norm.
Those principles also allow XR to be an efficient medium for the remote upskilling of a large workforce.\n A recent HBR Analytic Services study stated, “[XR is] a key part of enterprise digital transformation initiatives. It is emerging as an important tool to improve employee productivity, training, and customer service.”\n Five considerations for EXR journeys ThoughtWorks has put together the top five things to consider when business leaders embark on their enterprise eXtended Reality journeys:\n Enterprise integration: EXR solutions have to integrate with various enterprise systems - from Identity and Access Management (IAM) to Enterprise Risk Management (ERM) to Customer Relationship Management (CRM) and other proprietary services.\nThey must comply with organizational standards, processes and policies by integrating with Mobile Device Management (MDM) solutions. Additionally, EXR solutions should allow for a consistent user experience (and design) across an organization’s systems (e.g. a single sign-on across websites, mobile or desktop apps).\n XR content management: Conversations around XR content (like what’s used in gaming) are largely focused on the consumption stage, i.e. the visualization and immersive quality of the content. However, the journey begins earlier, at the generation and publication stages, where EXR is yet to make significant progress.\nIdeally, enterprise XR should support entire content management pipelines – generating multi-format content leveraging CAD, CAM, 3D models, 360° videos etc. This ensures proactive and seamless integration across media platforms/channels. It also encourages efficient accessibility, publishing and management of the content.\n Customizations: EXR adoption will grow when solutions are software- and hardware-agnostic and customizable to the needs of enterprises.
For instance, EXR must be compatible with software apps and web services running on cloud, on-premise, or hybrid setups, and with evolving infrastructure.\nEnterprise XR adoption benefits from a vibrant partner ecosystem, as that guarantees integration with a variety of devices, development tools and frameworks like Unity, Unreal, Vuforia, and others, alongside industry-leading components like Google ARCore, Apple ARKit, MRTK etc. EXR solutions are also expected to be interoperable, XR-enabling existing mobile and web apps and thereby leveraging existing investments.\n Monetizing solutions: EXR solutions can address critical use cases with measurable ROI. Organizations can implement turnkey XR solutions for remote assistance, guided workflow and training; remote data visualization, design collaboration and compliance. These solutions can be monetized by offering them for wider industrial usage.\n Standardization: The Enterprise XR space should implement software development best practices like Agile principles, open source, etc. Standardization matters because a multitude of tools and tech stacks are used for EXR solutions.\n While XR has the potential to reorient the enterprise environment, enterprises face multiple challenges with the XR development lifecycle. One gap is the automation of testing XR applications.
ThoughtWorks developed Arium – an open-source, lightweight, extensible, platform-agnostic framework that helps write automation tests built specifically for XR applications.\nUse case: smart XR technology for the enterprise Earlier this year, Lenovo introduced the ThinkReality A3 - “The Most Versatile Smart Glasses Ever Designed for the Enterprise.” One of the market’s most advanced and versatile enterprise smart glasses, the ThinkReality A3 is part of a comprehensive digital solution offering to deliver intelligent transformation in business and bring smarter technology to more people.\nImage Source - Lenovo ThinkReality A3\nVikram Sharma, Director Software Development at Lenovo, explains how their ThinkReality Platform is designed to build custom XR solutions for a diverse range of hardware and software stacks that can easily integrate with existing enterprise infrastructure. It comes with a powerful Software Development Kit (SDK) coupled with a management portal that bakes in cloud and device services.\nThinkReality headsets are loaded with enterprise-ready services where users can log in, work in tandem with the assigned task workflow and log out upon task completion. The device is compatible with dynamic enterprise device usage policies that can be enforced via integrated MDM or from Lenovo’s cloud services.\nVikram elaborates on how the ThinkReality platform works well with holo|one for workflow management, and with Uptale for onboarding and re-skilling the workforce. There are multiple usage opportunities with such a tool. For example, enterprises can use it to train personnel on new products, or support troubleshooting and repairs from anywhere in the world.\n Lenovo’s ThinkReality Platform is a great example of EXR gaining traction, investment and subsequent advancement and maturity.
Lenovo’s use case is a clear signifier that business recovery, distributed workforces and hybrid work models are pushing small and large businesses to adopt new technologies like XR for smart collaboration, increased efficiency, and lower downtimes.\nI would like to conclude this article with a quotation from ThoughtWorker Margaret Plumley, Product Manager and Researcher -\n “Most people, when they think of AR, VR, they think of games, they think of consumers, but they’ve actually been used in enterprises with really positive returns. You’re seeing actual cases where time on task is reduced by anywhere from eight to 30% in industry. So, there are some very real enterprise-level applications out there with very real results.”\n This article is co-authored by Raju Kandaswamy and originally published on ThoughtWorks Insights\n","id":4,"tag":"xr ; ar ; vr ; mr ; usecases ; thoughtworks ; technology ; lenovo ; enterprise ; thinkreality","title":"Making pandemic-proof workplaces a reality with XR","type":"post","url":"https://thinkuldeep.com/post/making-pandemic-proof-workplaces-a-reality-xr/"},{"content":"When the word “Coaching” comes to my mind, one name always pops up: the cricket legend Sachin Tendulkar and his coach Mr. Ramakant Achrekar, who gave him cricket. This example also helps us break the myth that coaching is only needed when there are performance issues, or that under-performing people need it the most.\nSachin Tendulkar with his coach — Source\nIf Sachin Tendulkar needs a coach, then we all need a coach in our life to get going. The corporate world is also realizing the importance of coaching. Organizations like ThoughtWorks focus on coaching as one of the key parts of their cultivation culture. Being a ThoughtWorker, I am expected to coach people and I also have a coach, to get going.
We do have mentors, trainers and consultants, but in this article, I am restricting myself to coaching only.\nAs a part of our cultivation programs, we also go through workshops and training to understand what effective coaching means to the corporate world.\n Is it about advising people? Is it about sharing our own experience and letting them follow? Is it about providing solutions to the problems of a coachee? Is it the same as what a sports coach does?\n I recently attended a coaching skills workshop as part of our talent growth programs; we discussed a number of such questions in the workshop, and it was hosted by Priya Kher from Collective Quest. Here are my key takeaways from the workshop.\nThe word “Coaching” does not come from Sports Yes, it may surprise you, but coaching in the corporate world or at ThoughtWorks is not the same as coaching in sports. The word “coach” comes from a coach in a train, that lets you move and takes you to the destination that you have defined. So as a coach, we are not supposed to define where the coachee should go; rather, we focus on letting them go in the direction they want to go.\nCoaching is a “no advice” zone Yes, coaching is purely a no-advice zone. As humans, this could be the difficult part, as we tend to advise people based on our rich experience, especially when someone comes across a similar situation that we have already gone through. We try to provide solutions to the problems and issues that the coachee comes with. But this is not coaching; it may come under the mentoring, training, or consulting side of cultivation, based on what we do.\nAs a coach, we don’t need to have all the answers, and we should let go of our habit of providing fixes based on our own assumptions.\nOne size doesn’t fit all, or our glasses may not fix the eyesight of others just because they have fixed our eyesight.\nGet insights It is important to first get insights into the ask, before really getting into the actions.
It requires building a good rapport with the coachee so that they open up; the coach has to listen actively and empathetically. Sometimes the most undervalued tool, “paraphrasing”, helps a lot. It boosts the confidence of the coachee that they are doing fine, and that they can do it. Body language also plays a great role here.\nWe also discussed some good and bad coaching examples.\n Remember, or believe, that the coachee already has all the resources and skills to handle the situation. We just need to ask powerful questions that provoke the coachee to find more ways to solve their problems on their own. Let them take time to understand the questions well and come up with answers.\nCoaching framework and tools There are some good coaching frameworks such as “GROW” — Goal, Reality, Options, and Will (or Way forward). It gives more structure to the conversations and may make the coaching more effective.\nGROW Model — Source\nWe can also use a spider chart or a scale (0–10) to give weightage to different options or facts.\n Roleplay for GROW model Not all situations can be solved by Coaching Remember, we cannot solve all situations with our coaching skills; we may need to switch to the other side (mentoring, training, or a managerial conversation) based on the situation. For example, you may not want to address promotion or salary review concerns with your coaching skills. So, it is important to set the right expectations before the conversation, from both sides.\nConclusion In conclusion, I would like to say that everyone needs a good coach who can provoke us to find ways, and choose the optimal way out. A coach also needs to practice effective coaching; there are some good tools that can help structure the coaching conversations.
It is in the best interest of the coachee and coach to set the right expectations for the conversations.\nHappy coaching!\nThis article was originally published on My Medium Profile\n","id":5,"tag":"thoughtworks ; motivational ; workshop ; takeaways ; coaching ; mentoring","title":"Coaching is not what you think!","type":"post","url":"https://thinkuldeep.com/post/coaching-workshop-key-takaways/"},{"content":"Self-driving cars are a good example of what artificial intelligence can accomplish today. Such tech-centered advances are built on the back of highly trained Machine Learning (ML) algorithms.\nThese algorithms require exhaustive data that helps the AI effectively model all possible scenarios. Accessing and pooling all that data needs complex and very expensive real-world hardware setups, aside from the extensive time it takes to gather the necessary training data in the cloud (to train the ML). Additionally, during the learning process, hardware is prone to accidents and wear and tear, which raises the cost and lead time of training the AI.\nTechnology leaders like GE have sidestepped this hurdle by simulating real-world scenarios using controlled-environment setups. These controlled-environment simulations coined the term ‘Digital Twins’. A digital twin can model a physical object, process, or person with the abstractions necessary to bring real-world behaviour into digital simulation. This is useful to data scientists who model and generate the data needed to train complex AI in a cost-effective way.\nAn example is GE’s Asset Digital Twin – the software representation of a physical asset (like turbines). In fact, GE was able to demonstrate savings of $1.6 billion by using digital twins for remote monitoring.
Research predicts the digital twin market will hit $26 billion by 2025, with the automotive and manufacturing sectors being major consumers of this technology.\nTools at hand for digital twins Through experimentation, Extended Reality (XR) and the tools used to create XR have been found best suited to leverage digital twins with ease. One of the most popular tools used to create digital twins is [Unity3D](https://unity.com/), also mentioned in the ThoughtWorks Technology Radar.\nUnity3D enables easier production of AI for prototyping in robotics, autonomous vehicles and other industrial applications.\nUse case: autonomous vehicle parking Self-driving cars have a wide domain of variables to consider — lane detection, lane following, signal detection, obstacle detection etc. In our experiment at ThoughtWorks, we attempted to solve one of these problems: train a typical sedan-like car to autonomously park itself in a parking lot through XR simulation.\nThe outcome of this experiment only produced signals such as the extent of throttle, brake and steering needed to successfully park the car in an empty lot. Other parameters like the clutch and gear shift were not calculated as they were not needed in the simulation. The outcome was applied in the simulation environment by a car controller script.\nThis simulation was made up of two parts: one was the car or the agent, and the second was the environment within which we were training the car. In our project, the car was prefabricated (a CAD model of a car) and attached to a Rigidbody physics element (that simulates mass and gravity during execution).\nThe simulated car was given the following values:\n- A total body mass of about 1500 kg\n- A car controller with front-wheel drive\n- Brakes on all 4 wheels\n- Motor torque of 400\n- Brake torque of 200\n- Max. steering angle of 30 degrees\n- Ray cast sensors (rays projecting out from the car body for obstacle detection) in all four directions of the car.
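To give a feel for what the ray cast sensors described above compute, here is a small geometric sketch. It is purely illustrative Python, not the project's actual code (the real simulation used Unity's built-in physics raycasting); the function name, the circular-obstacle simplification, and the 20 m range are assumptions.

```python
import math

def raycast(origin, direction, obstacle_center, obstacle_radius, max_dist=20.0):
    """Return the distance to a circular obstacle along a 2D ray, or None if missed.

    Illustrative stand-in for an engine's raycast call; the real sensors
    project rays out from the car body in all four directions.
    """
    # Normalize the ray direction
    dx, dy = direction
    norm = math.hypot(dx, dy)
    dx, dy = dx / norm, dy / norm
    # Vector from the ray origin to the obstacle center
    ox = obstacle_center[0] - origin[0]
    oy = obstacle_center[1] - origin[1]
    # Project the obstacle center onto the ray
    t = ox * dx + oy * dy
    if t < 0:
        return None  # obstacle is behind the sensor
    # Squared perpendicular distance from obstacle center to the ray
    perp2 = (ox * ox + oy * oy) - t * t
    if perp2 > obstacle_radius ** 2:
        return None  # ray passes beside the obstacle
    hit = t - math.sqrt(obstacle_radius ** 2 - perp2)
    return hit if 0 <= hit <= max_dist else None
```

The distances such rays return are what the agent observes, alongside its own speed and orientation, when deciding how to steer, accelerate or brake.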
In the real world, ray cast sensors would be LIDAR-like vision sensors. The picture below displays the ray cast sensors attached to the car:\nRaycast sensors on all sides of the car to detect obstacles\nRay cast sensors in action during simulation\nThe car’s digital twin was set up in the learning environment. A neural network configuration of four hidden layers and 512 neurons per hidden layer was configured. The reinforcement training was configured to run 10 million cycles.\nThe Reinforcement Learning (RL) was set up to run in episodes that would end or re-start based on the following conditions:\n- The agent has successfully parked in the parking lot\n- The agent has met with an accident (hitting a wall, tree or another car)\n- The episode length has exceeded the given number of steps\nFor our use case, an episode length of 5000 was chosen.\nThe rewarding scheme for RL was selected as follows:\n- The agent receives a penalty if it meets with any accident, and the episode re-starts\n- The agent receives a reward if it parks itself in an empty slot\n- The ‘amount of reward’ for successful parking was based on how aligned the car is to the demarcations in the parking lot: if parked facing the wall, the agent would receive a marginal reward; if parked facing the road (so that leaving the lot is easier), the agent would receive a large reward\nA training farm was set up with 16 agents to speed up the learning process. A simulation manager randomized the training environment with parking lots having varying lot occupancy, and the agent was placed at random spots in the parking area between episodes - to ensure the learning covered a wide array of scenarios.\nReinforcement learning with multiple agents in parallel\nThe tensor graph after 5 million cycles looked like this: not only had the agent increased its reward, it had also optimized the number of steps to reach the optimal solution (right-hand side of graph).
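The episode termination and rewarding scheme described above can be sketched roughly as follows. This is an illustrative Python rendition, not the actual Unity/ML-Agents code from the project; the function name, state keys, and reward magnitudes are assumptions.

```python
# Illustrative sketch of the episode/reward logic described above.
# Names and reward values are assumptions, not the project's actual code.

MAX_STEPS = 5000  # episode length chosen for this use case

def step_reward(state):
    """Return (reward, episode_done) for the current simulation state."""
    if state["collided"]:            # hit a wall, tree or another car
        return -1.0, True            # penalty, and the episode re-starts
    if state["parked"]:
        # Reward scales with how well the car aligns to the lot demarcations
        reward = 0.5 * state["alignment"]   # alignment in [0, 1]
        if state["facing_road"]:
            reward += 0.5            # large reward: leaving the lot is easier
        else:
            reward += 0.1            # marginal reward: parked facing the wall
        return reward, True
    if state["steps"] >= MAX_STEPS:  # episode length exceeded
        return 0.0, True
    return 0.0, False                # keep driving, no reward yet
```

Because the per-step reward is otherwise zero, the agent is implicitly pushed to park in fewer steps, which matches the observed optimization in the training graph.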
On average, it took 40 signals (like accelerate, steer, brake) to take the agent from the entrance to the empty parking lot.\nHere are some interesting captures taken during training: The agent learnt to park on both sides of the parking lot\nThe agent learnt to take elegant maneuvers by reversing\nThe agent learnt to park from random locations in the lot\nWay forward We’ve just looked at how digital twins can effectively fast-track the production of artificial intelligence for use in autonomous vehicles at a prototyping stage. This applies to the industrial robotics space as well.\nIn robotics, specifically, the digital twin approach can help train better path planning and optimization for moving robots. If we model the rewarding points around the Markov decision process, we can save energy, reduce load and improve cycle time for repetitive jobs - because the deep learning mechanisms maximize rewards in fewer steps where possible.\nIn conclusion, XR simulation using digital twins to train ML models can be used to fast-track prototyping AI for industrial applications.\nThis article is originally published at ThoughtWorks Insights and derived from XR Practices Medium Publication. It is co-authored by Raju K and edited by the Thoughtworks content team.\n","id":6,"tag":"xr ; ar ; vr ; simulation ; autonomous ; vehicle ; thoughtworks ; enterprise","title":"Training autonomous vehicles with XR simulation","type":"post","url":"https://thinkuldeep.com/post/training-autonomous-vehicles-in-xr/"},{"content":"The year 2020 was a year of change. It may have been painful for some, but at the same time, it has taught us lessons in life. I have written earlier that we have to accept change; it will be remembered and will have an impact on our whole life and beyond, no matter what we do. So it is important that we live the present fully.\n Change is the only constant; the earlier we accept it, the earlier we get ready for the future.
2020 is a year of change, accept it, change our lifestyle and get ready for the future. - 2020 - The year of change\n I am ending this year by reading the book “Power of Will” by Frank Channing Haddock. It explains the power of Will:\n The strong will is master of the body. We can achieve above and beyond if our actions are directed by a strong and right will.\n The author talks about educating the Will by cultivating memory, imagination, perception, and self-control. There are a few exercises mentioned in the book that may help train the Will. I can very well relate to a few instances from the year 2020 where my Will played a bigger role, and unknowingly I was training my Will through observations, imaginations, reasoning, physical senses, and actions. In this article, I reflect on them, and in 2021, I will focus on consciously training my Will. I believe we all have such instances; list them down, and educate the Will. With a well-trained Will, we can achieve much more than we think.\nA Better Start Always plan for a better start; it sets good motives to boost the Will. We started with a quickly planned trip to the Ganga at Rishikesh, a holy place, with my cousins. They traveled 500 km to join me, and from there we drove to Rishikesh overnight. We had been observing the situation at hand, and a few motives of being together, thoughtful conversations and the imagination of driving near the riverbed boosted our Will so strongly that we decided on the trip in just a few minutes, and the rest of the planning was done on the go.\nThis trip definitely gave us positive vibes that helped us strongly face the challenges of 2020 and come out stronger. We were very clear about the flexibility needed for such a barely planned trip. This gives us the capability to handle the unplanned. The positivity of the trip remained throughout the year (Ashok Ostwal and Gourav Ostwal in the picture).
Books also help in building points of view and strengthening thoughts; read here my take, Success Mantra from Amazon, and change to become more successful from “What Got You Here Won’t Get You There” by Marshall Goldsmith.\nChange of Workstyle Who does not know Covid19! The next big thing, nationwide lockdown, and restrictions forced all of us to think differently. Things that were occasional or non-preferable became the norm, and maybe the preferable choice of work for many — The Work From Home. This change happened due to a strong Will to survive and defeat Covid19. We are still educating the Will to deal with it, but we are now much stronger than before.\nSee how my day has changed! From kids’ school to whole-day video calls, to energizers and more…\n Covid19 has challenged the way we eat, the way we commute, the way we socialize, and the way we work. In short, it… - XR development from home\n HumansOfRemoteWork at ThoughtWorks also covered my story.\n Kudos to the whole team for their strong Will power! Now we as a team are well placed, and continuously cultivating our Will to deal with the future.\nFinding the ways Work from home and Covid19 have not just changed the way we work but also the way we live at home. Our ways of celebration have also changed. I have written about this change in an earlier article.\n Such changes have given me more points of view, and the fact that there are always alternatives; life does not stop at the current situation, it must go forward.
I consider all such instances as boosters to the mind that grow my willpower.\nThere is always a lot to uncover in human potential; read about it in the article below, and see how family members have uncovered their potential in 2020.\n Covid19 has challenged the way we work. I have penned down an earlier article to cover my workday and our shift toward new… - Uncovering the human potential\n Since the first change starts from the self, I recorded my observations in the above article. Now all these are good material to boost my willpower. The Will keeps getting educated naturally; however, if we consciously put effort into training the Will, the results would be more fruitful.\nSharing and Caring Another important thing that helped me stay highly motivated throughout 2020 is a newly developed habit of sharing and caring. I have written more than 20 articles on thinkuldeep.com, ThoughtWorks Insights, XR Practices, TechTag.de and Dreamentor around technology and motivation. I participated in 10 events as a speaker, guest lecturer, mentor, and juror at platforms like Google DSC, HackInIndia, Smart India Hackathon 2020, Techgig, Geeknights, The VRARA, GIDS, and Nasscom.\nApart from this, I have been a mentor at PeriFerry; this has been a life-changing experience. Actually, we learn by sharing more and more. Here is my learning equation - The Learning Equation\n The learning equation — Learning is proportional to Sharing; Learning ∝ Sharing\n These instances give me good motives and strengthen the Will to keep sharing. It is a continuous process.\nWork from Native Home By now, work from home has become the norm. I started full work from home at Gurgaon where we have an office, but later I shifted to my native place, which is 500 km from the office. I got to spend this much time with my parents and natives after almost 20 years; it’s now an all-new experience, with the kids also enjoying non-metro city life that is closer to nature.
This change enabled me to change my lifestyle and focus on healthy living.\nI became a cyclist with a 1000 km ride so far; the Will is already trained to the next level, and I hope to continue the same. Keep moving; only then can you balance life. Watch the video with Ashok Ostwal, the mega cyclist.\n Home Renovation Last but not least, after moving to my native place, I observed that the home, which is 25 years old, required repair and also some car parking space. It definitely required real willpower, not just from me but from each family member here, as renovation means a lot of disturbance to daily life. I followed the principles of educating the Will and designed the home in 3D; it not only helped me observe each change well but also imagine the future. The next step was to train the will of the family by letting them imagine the future.\nRead my article here to know how technology helped me achieve this. XR enabled me to renovate my home!\nOther motives that got added to strengthen the Will were employment for the needy during these Covid19 times and keeping the family busy. It was challenging to follow Covid19 safety protocols while continuing construction work alongside office work; however, as I said, with a strong Will, we can achieve much more than we think. I must thank everyone who has supported this directly and indirectly and borne with me.\nThis is definitely a life-changing moment for me that has trained my Will. Once things settle, I may move back to work from the office or from a home near the office, but this learning will remain with me and keep motivating me.\nThese were the few instances I consider as contributors to my willpower. They have helped me gain better perspective, imagination, observation, reasoning, and physical senses, and finally provoked actions. Most of these contributed to my Will unconsciously, but in the future, I will consciously exercise Will cultivation, to be better at it.
I suggest reading the book “Power of Will” by Frank Channing Haddock, and do share your thoughts.\nThank you, everyone, and I wish you strong Will Power in the year 2021.\nUpdated in 2021\nStart of 2021 with anniversary celebration This article is originally published at Medium\n","id":7,"tag":"thoughtworks ; motivational ; book-review ; work from home ; covid19 ; change","title":"Stories of Will Cultivation from 2020","type":"post","url":"https://thinkuldeep.com/post/will-cultivation-2020/"},{"content":"Covid19 is forcing us into a ‘new normal’. A new normal that makes most of us work from home, and that leads to spending long hours on Zoom or other video conferencing platforms. Meetings, inceptions, discovery workshops, conferences, and webinars are happening in the same monotonous ways. We are stuck with a screen in almost the same pose and posture. Some might use their mobile phone or tablet, but that limits us to a small screen. We do miss our office days and face-to-face meetings; the brainstorming, retrospectives, daily stand-ups and other important workshop exercises. Using physical boards and whole office walls, and seeing our colleagues’ and clients’ body language, was so normal pre-Covid. Now, even as offices are re-opening, it will be difficult to go back to business as usual while maintaining the social distancing norms. We have to face it; there is no other option than shaping what the ‘new normal’ means for our ways of working and interactions. We need touchless and contactless experiences that lead to the same results, and great digital products as well as customer experiences.\nCan XR help here? XR is moving from nicety to necessity. At ThoughtWorks, we brainstormed on what new ways of working, that may be more engaging, interesting and productive, could look like. One of those experiments is ThoughtArena, where users can collaborate in 3D space.
It is an augmented reality-based mobile app that allows users to create a space with virtual boards in it.\nThoughtArena allows its users to extend traditional 2D workspaces to real-world environments. Users can place virtual whiteboards in the 3D environment and use them like physical boards. They can place those boards at their comfort and collaborate in real time with other users. In addition, users can add audio and video notes and may vote on the notes.\nThis article explains our learnings around building spatial applications and also presents the results from user experience and behavior analysis using XR applications. We have done a couple of experiments and project deliveries in XR; learnings from them can also be found here.\nBest Practices for Product Definition The XR tech evolution is happening at great speed, and there are a number of products being built to meet different business needs. XR-based products are still evolving, and there are not enough standards and practices to validate product maturity. Here are our learnings while defining the product features for ThoughtArena.\n Go for Lean Inception - Inception is typically the phase where we define the product and its roadmap, as well as align on the big picture. However, after the initial meetings with stakeholders, we realized the need for a lean inception approach, which gives us a quick start and iteratively discovers the MVP. We planned a few remote meetings with stakeholders from business, marketing and technology groups, and came up with an initial feature list. We bet on Bare Minimum Viable Product (BMVP) features and a product vision in the first couple of days.\n Evolutionary product definition - We knew that we had signed up for a product which would evolve with time, and we would need a plan which enables evolution.
In ThoughtArena, we have included many new features since inception, and deprioritized others based on user expectations and needs.\n Recruit a diverse user group - One of the enablers for product evolution is a user group with diverse skills and experience. We recruited a user group diverse in ethnic as well as professional background, including members from IT, Operations, Delivery, Experience Design, Marketing and Support. The group helped us strengthen the product definition and vision. They helped us think about newer ways of interacting with the 3D boards, such as adding multimedia, which is typically not available on a physical or digital wallboard.\n State the Risky Assumptions - Defining a set of hypotheses for risks and assumptions and then validating them is valuable for a product, especially in an emerging technology space. We stated the following risky assumptions for ThoughtArena:\n- 3D experiences may be more enjoyable and better than 2D experiences\n- Spatial (3D) experiences may improve creativity and productivity\n- XR tech is ready for prime time; it may be a remedy for Zoom fatigue\n- XR products may leave a long-lasting impact on the users\nPlan to Validate the Risky Assumptions - Once we know the risky assumptions, we should plan to validate them quantitatively and qualitatively.\n- Is the product a 3D/spatial fit?\n- Is the spatial experience enjoyable?\n- Does the product really solve the user’s problem?
Map the user’s mental model of a physical collaboration space - Users should be able to relate the experience of the product to the flexibility and intuitiveness of a physical collaboration space.\n Focus on lifecycle and diversity of content (creation, reorganization, consumption) - The tool should assist users in the different stages of the content lifecycle: Creation (editing, duplication, look and feel of content), Reorganization (grouping, sorting, moving), and Consumption (updates, viewing real-time participation, tagging, commenting etc.). The users should also be allowed to choose from a diverse range of content like freehand drawing, shapes, flowcharts, tables etc.\nUnderstand the user journeys for different use cases and focus on mapping their end-to-end functional flows. The product should aim to provide more transparency to view and manage collaborators and activity across the spaces and boards.\n Adapt the experience to the user’s context - Identify the relevant use cases to allow users to engage using their mobile devices. The tech should be stable enough to adapt to the user’s need to move around or stay stationary without impacting the experience.\n Market analysis and coverage - Market analysis of similar products is important to understand what is already there. It also helps prioritize the product features. We analyzed XR collaboration products such as Spatial.io and Big Screen, as well as popular products like Jamboard and Mural. This analysis enabled us to prioritize features such as audio, voice-to-text and video notes, and targeting Android and iOS/iPadOS — which in turn would help cover a larger user base.\n Best practices of User Experience Design The “Reality” is what we believe, and we believe what we see, hear and sense. So “Extended Reality (XR)” is about extending or changing that belief.
Once the user gets immersed in the new digital environment, they also expect to interact with the digital objects presented to them in the same or similar ways as they interact with real objects in the physical world.\n A minor glitch in AR applications can break the immersive experience, and users may stop believing in the virtual objects. It is all the more important to keep the user engaged all the time.\n AR applications need great onboarding — As AR is a new technology, most first-time users don’t know what to expect. They are only familiar with 2D interactions, and they will interact with the 3D environment in the same way while using a mobile device. This often results in interactions with the digital content in AR not being intuitive.\nE.g. in the ThoughtArena app, users did not realise that they needed to physically move around, or move their phone or tablet, to be able to work on the virtual board. We needed to design a good onboarding experience that properly communicated and educated the users about these interactions, to help them get familiar with the new experience.\n We need to build hybrid user interactions from both the digital and physical worlds; keep the best of both. E.g. during the user testing phase of the ThoughtArena app, we saw that many users would like to use the gestures and patterns that are familiar to them. Users want to zoom in and out on the board or board items; however, there is nothing like zooming in and out in the real world. In the real world, in order to see things more clearly, we just move closer to the objects. Similarly, on a real board, editing is done by removing an item and adding it again; however, on a virtual board, users expect an edit functionality on the added objects.
For the best experience, we should include the best features of both 2D and 3D.\n Understanding the environment — As AR technology lets you add virtual content to the real world, it’s important to understand the user needs and their real-world environment. While designing an AR solution, consider what use cases, lighting conditions or environments your users will be using the product in. Is it a small apartment, a vast field, a public space? Give users a clear understanding of the amount of space they’ll need for your app and the ideal conditions for using it.\n Be careful before introducing a new interaction language. XR interactions themselves are new to a lot of users. We realized that at this stage it would be too much for the users to introduce a completely new language of interaction. We followed Jakob’s Law of UX and continued the 2D board-and-sticky paradigm in the 3D environment, keeping interactions such as pasting notes, moving them from one place to another, removing them, etc., similar to what we do in 2D.\n We also researched whether a 3D arrangement of content is always better than a 2D one, and found that spatial cognitive memory comes into play when the objects are distinct in look and feel and fewer in number; with a large number of similar objects arranged in 3D, the user interface looks cluttered. Ref: 2D vs 3D\n Text input is a challenge in XR applications. Taking text input in the spatial environment requires showing the user a 3D on-screen keyboard, which is hard to type on. The other way is to take text input with a 2D keyboard on the device, but that breaks the immersive experience. The user interface needs to be designed to provide pseudo-immersion.\n Define product aesthetics — Define the color, theme and application logo. We conducted a survey to gather opinions on different color themes. 
Based on the environment, the color of UI components may need to be adjusted for better visibility.\n XR spatial apps may allow users to roam around in the space. This may be a remedy for zoom fatigue and may break the monotonous style of working. It may also make people more creative or attentive, or at least lead to a healthier work style. At the same time, we need to be careful while designing interactions, for the following reasons:\n Too much movement may cause physical fatigue. It may even be dangerous if the user is in an immersive environment and not conscious of the real environment while interacting with virtual objects. Phone-based XR applications, where the user needs to hold the phone in an awkward position for long durations, can strain the hand. Understand human anatomy while designing user experiences in XR.\n Field of view (FOV) — The field of view is all that a user can see while looking straight ahead, both in reality and in XR content. The average human field of view is approximately 200 degrees with comfortable neck movement. Devices with a smaller FOV provide a less natural experience. Field of regard (FOR) — This is the space a user can see from a given position, including when moving the eyes, head, and neck. Content positioning needs to be carefully defined, with a provision to change the depth as per the user’s preference. The human eye is comfortable focusing on objects half a meter to 20 meters in front of us. Anything too close will make us cross-eyed, and anything further away will tend to blur in our vision. Beyond 10 meters, the sense of 3D stereoscopic depth perception diminishes rapidly until it is almost unnoticeable beyond 20 meters. So the comfortable viewing distance, where we can place relevant content, is 0.5 meters to 10 meters. Neck/hand movement — We need to position the primary UI elements in the direct FOV, reachable without neck/hand movement, while secondary UI elements can be placed such that they require neck/hand movement. 
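The viewing-distance guidance above can be captured in a small placement helper. This is a minimal sketch in Java (the ThoughtArena app itself is built in Unity/C#); the class and method names are illustrative assumptions, not from the actual codebase.

```java
// Sketch: keep XR content inside the comfortable viewing band described
// above: 0.5 m (closer makes us cross-eyed) to 10 m (stereo depth fades).
public class ContentPlacement {
    static final double MIN_COMFORT_M = 0.5;
    static final double MAX_COMFORT_M = 10.0;

    // Clamp a requested placement distance (in meters) into the comfortable band.
    public static double comfortableDistance(double requestedMeters) {
        return Math.max(MIN_COMFORT_M, Math.min(MAX_COMFORT_M, requestedMeters));
    }

    public static void main(String[] args) {
        System.out.println(comfortableDistance(0.2));  // too close, clamped to 0.5
        System.out.println(comfortableDistance(3.0));  // already comfortable
        System.out.println(comfortableDistance(25.0)); // too far, clamped to 10.0
    }
}
```

Primary UI elements would then sit at the near end of this band within the direct FOV, with secondary elements placed further out.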
UI elements placed beyond this range require physical movement. Arm length is another vital factor; it is essential to consider arm reach while designing the UI, and to place fundamental interactions within this distance. Best Practices of XR Development XR development introduces us to a new set of technologies and tools. Standard development practices such as Test-Driven Development, eXtreme Programming and Agile development help build a better product. XR is heavily influenced by gaming, and many standard practices of enterprise software development may not carry over in a straightforward way.\n Plan for spikes - XR is still fairly new, so it is better to plan spikes for uncovering the unknowns.\n CI/CD first - Having a CI/CD setup makes the development cycle smoother and faster. Investing some time in the beginning to set up CI, so that it is able to run the test suite and deploy to a particular environment with minimal effort, saves a lot of time later on. We used CircleCI for building both the Unity app and the server-side app. For the server side, we integrated with GCP. We maintained different environments on GCP itself, and CircleCI handled the deployment to those environments.\n Quick feedback cycles - We followed a two-week iteration model and had regular showcases at the end of each iteration. The showcases involved a diverse user group (it is important to have a user group for better inputs; see the previous articles of this series), which helped us a lot in covering all aspects of the app: the features, the learnings, the process. All the feedback helped a lot in making our app better. Having regular catch-ups and IPMs made sure we had the stories detailed and ready to be picked up for development.\n Learn the art of refactoring - A lot of times refactoring takes a backseat while delivering POCs or during tight deadlines. We did not let that happen here. The team made sure to keep refactoring the code as we moved along. 
Refactoring needs a test suite to support it, and we made sure we had good code coverage, with scenarios covered through our unit and integration tests. There will always be scope for more refactoring; the trick is to keep doing it continuously and in small bits and pieces, so we do not end up with a huge amount of tech debt.\n Decision analysis and resolution - Collect facts and figures and maintain decision records.\nFor example, here are some of the decisions we had to take for the ThoughtArena app.\n Unity for the mobile apps — With Unity and its AR Foundation API, we were able to develop our app for both Android and iOS devices with minimal extra effort. The built-in support for handling audio and video also helped speed up the development process. Java and Micronaut for the server — We decided to work in the Java ecosystem based on the tight schedule we had and the team’s familiarity with it. We chose Micronaut for its faster startup times and smaller footprint. Also, we were focusing mainly on APIs and deploying on the cloud, so we felt it was the right framework for us. WebSockets for real-time communication — There are multiple ways to go about the real-time communication aspect. The first thing that comes to mind is a WebSocket: an open TCP connection between server and client for exchanging messages and events quickly. There are other options as well to achieve real time, like queues, some databases, etc. We went ahead with WebSockets as they do not require any additional infrastructure to get up and running, and they fit our use case: both client and server send messages to each other in real time. All our data is structured and straightforward, so it was easy for us to choose a relational database for transactional data. Tech evaluation - Make sure there is a plan for tech choice evaluations, to check whether the tech is ready for prime time. 
We evaluated multiple tech choices, for example cloud anchors, environment scanning, video playback, real-time sync, session management, maintaining user profile state locally, etc.\n Extendable design — Separate the input system from the business logic, so that multiple input mechanisms can be integrated, such as click, touch, tap, double tap and gaze. We used Unity’s event system as the single source of event triggers.\n XR tech considerations - AR takes a bit of time to initialize and learn the environment. The speed of this initialization depends on the hardware and OS capabilities.\n Tracking of the environment goes haywire when the device is shaking a lot. The AR system is hard to control, as it is a platform-specific capability. Longer use of AR may make the hardware hot and also drains the battery, as AR algorithms are really CPU-intensive. AR tech is getting more mature year over year, but good hardware is still a dependency for this technology. AR cloud anchors are at an early stage but maturing rapidly. They need proper calibration; in simple terms, the user needs to do a little more scanning to achieve good results (to locate the points). We had to de-prioritize this feature due to its instability. Real-time collaboration considerations\n Define how long a session should be. Handle disconnections gracefully. Define an approach for scaling sessions; we had to introduce a distributed cache for managing sessions. Cloud considerations — We used Google App Engine (Standard environment) for our app because of its quick setup, Java 11 support and auto-scaling features. Also, GCP handles the infrastructure part for GAE apps. However, the Standard App Engine does not support WebSockets, so we had to switch to the Flexible Environment, which does not support Java 11 out of the box; to keep things on Java 11, we had to provide a custom Docker configuration in the pipeline. 
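The extendable-design point above (separating the input system from the business logic) can be sketched as a tiny event dispatcher. This is a hedged illustration in plain Java rather than Unity’s C# event system; all names here are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch: business logic subscribes to a neutral event stream, so input
// sources (click, touch, tap, double tap, gaze) can be added or swapped
// without touching the logic. Names are illustrative only.
public class InputDispatcher {
    public enum InputEvent { SELECT, MOVE, RELEASE }

    private final List<Consumer<InputEvent>> handlers = new ArrayList<>();

    // Business logic registers here, unaware of the physical input source.
    public void subscribe(Consumer<InputEvent> handler) {
        handlers.add(handler);
    }

    // Each input source maps its raw input to an InputEvent and publishes it
    // through this single entry point — the "single source of event triggers"
    // role that Unity's event system played for us.
    public void publish(InputEvent event) {
        handlers.forEach(h -> h.accept(event));
    }
}
```

With this shape, a gaze source and a touch source both end up calling `publish(InputEvent.SELECT)`, so the board logic never needs to know which input mechanism fired.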
Best Practices of XR Testing Our learnings from developer-level testing for XR applications:\n We learned that most devices have software emulators that can be integrated into XR development tools such as Unity, which helps developers test the logic without a physical device. We realized that around 70% of the ThoughtArena app’s functionality didn’t require a physical environment or movement — adding/moving stickies, playing videos, making API calls to fetch data, displaying the list of boards or members in a space, adding/deleting boards. All of these could be tested in the editor’s instant preview. Cases where physical movement was required, like the pinning and unpinning of boards, were the ones that could not be tested directly in the editor without manually rotating the user’s POV from the inspector. Features that relied on platform-specific plugins, like uploading an image or saving a snapshot of a board to local storage, could not be tested in the editor. Unit testing - The Unity engine comes with a great unit testing framework, and by plugging in a mock testing framework, we can test units very well.\n We covered: features dependent on API calls — we had an internal mock server set up early in development to reduce dependency on the APIs; once the API contracts were settled, we could write tests for them and continue development even while the APIs were still being built. Board/notes interactions, by mocking user inputs such as click/touch. Movement-based interactions, such as pinning/unpinning boards and moving stickies, simulated via mocking. 2D UI interactions and user flows. Automation testing - We extended the Unity Editor’s unit testing framework and built a functional test suite that can run integration tests and end-to-end tests for some scenarios on the plugged-in device. We have open-sourced an automation testing framework — Arium, have a look here.\n Integration testing - We have an integration test suite for the backend and for testing the APIs. 
Integration testing also helped us verify that the WebSockets were working in the intended way. These tests run on CI after every push we make to the backend repo. Since our code was also using GCP storage, we needed to make sure we could replicate that behaviour. We found that the GCP storage APIs provide a test storage, which mimics the storage locally, and we used that in our integration tests.\n Functional testing - Since XR depends on the environment, testers need to test the functionality in different environments: different lighting conditions, noise levels, indoors, outdoors, and while moving within the environment, to test the stability of the functionality.\n Acceptance criteria for XR stories are quite different from what we see in general software development stories; a lot of factors need to be considered.\n Keep the user’s immersion intact while working. Ensure acceptable performance under different environmental conditions. A feature that works very well may not work as well when the environmental conditions change; the user may then lose immersion, and the XR app does not meet expectations. Testers must consider spikes to define the correct acceptance criteria. The impacting factors may not be known in advance; for example, what would be the impact of rain on a business story that requires outdoor testing? Acceptance criteria also need to evolve as the product evolves. Make sure the test team has supported devices with the required resolutions, and plan for it from the start.\n User Experience Testing Plan for user experience testing to get user insights and validate the hypothesis with users. It helped us better understand the impact, benefits, limitations and flexibility of our spatial solution. The user experience testing also focused on the aspect of collaboration and on assessing the effectiveness of the developed concept in terms of usefulness and usability. 
It can be divided into two parts: user research and usability testing.\n User research covers an as-is analysis of the users, their interactions and the tools they use that are relevant to the problem statement of the solution, and also notes down user expectations from the changing environment. Usability testing covers how well the solution engages the user, and focuses on the usability and usefulness of the solution. The next section describes the results of user experience testing. User experience testing should cover quantitative and qualitative analysis. Here are our key learnings:\n Recruit users for dedicated testing sessions with the UX analysis team. Recruit a minimum of 9 users. Plan user interviews before and after the testing sessions, and observe user behaviour during the testing. Recruit independent test users, and ask them to share their observations in a survey for quantitative analysis. Recruit users with diverse backgrounds, experiences and familiarity with XR. Define a test execution plan with a set of scenarios for the dedicated user testing sessions, and try out mock sessions to see whether the plan is working and we are getting the expected inputs. Define detailed questionnaires for the independent testers. Collect observations only after a few trials of the app; otherwise we might not get accurate feedback on app features if the app onboarding/user training is not that great. Conclusion XR tech is evolving at such a rate that the untapped potential boggles the mind. XR business use cases are growing with convergence with AI and ML. XR devices are now able to scan and detect environments, and can adjust to the environment. The standards and practices are growing from the learnings and experiments the industry is doing. We have shared our experience from building XR applications.\nAuthors and contributors: Kuldeep Singh, Aileen Pistorius, Sumedha Verma and Team XR Practices at ThoughtWorks. 
XRPractices\nThis article is originally published at Medium\n","id":8,"tag":"xr ; thoughtworks ; technology ; ar ; vr ; mr ; best practices ; experience ; learnings ; covid19","title":"Best Practices for Building Spatial Solutions","type":"post","url":"https://thinkuldeep.com/post/xr-best-practices/"},{"content":"This article is originally published at TechTag.de; you may find the German version here too.\nThe interaction between humans and technology is evolving radically. We need to develop our thinking and skills beyond the keyboard and screen. But what does this new technological reality look like? Here it is worth taking a closer look at the different forms of extended reality and how they can change our everyday lives.\nThe way we interact with technology today isn't too far from what was predicted by futuristic sci-fi pop culture. We live in a world where sophisticated touch- and gesture-based interactions with wearables, sensors, augmented reality (AR), virtual reality (VR) and mixed reality (MR) work together. This creates an extremely immersive user experience.\nGartner predicts that the implementation of 5G and AR/VR will transform not only how customers interact, but also the entire product management cycle of brands. 
In fact, according to Deloitte, it is expected that in 2024, VR hardware and applications will generate sales of 530 million euros in Germany, in the best case even 820 million euros if new applications or gadgets conquer the market.\nIn order to better understand what influence augmented forms of reality can have on our everyday lives, and what opportunities companies and users have to interact with the world around them, it is first necessary to clarify the various terms used in the discussion.\nVR, AR, MR - the variety of realities Forms of Extended Reality\nVirtual Reality (VR) - the best-known term - is an immersive experience that opens up completely new worlds by allowing users to immerse themselves in a virtual, computer-generated world with innovative forms of interaction. The technology can be used with Head Mounted Display (HMD) devices. Smartphone-based VR is the most economical choice to test the technology. Google Cardboard and similar variants are quite popular in this segment. All you need is a smartphone that fits in a pre-made box. Many media players and content providers such as YouTube now offer stereoscopic 360-degree content.\nUsers testing VR devices\nAugmented Reality (AR), in turn, overlays digital content on a real environment. Smart glasses project digital content directly in front of the user's eyes. In the past five years, such devices have evolved from bulky gadgets to lightweight glasses with more computing power, longer battery life, a larger Field of View (FoV) and consistent connectivity. This projection-based technology makes it possible to display digital content on a glass surface. One example of this is the head-up display (HUD), which is used by many automobile manufacturers. It projects content onto the vehicle's windshield. 
Drivers can see contextually relevant information such as maps or speed limits.\nIf you combine AR and VR, you move into the area of mixed reality (MR), in which the physical world and digital content interact. Users can interact with the 3D content in the real world. In addition, MR devices recognize and store environments while simultaneously tracking the specific position of the device within an environment.\nUser testing a Microsoft MR device\nMixed reality devices such as Microsoft HoloLens enable object recognition, scene storage and gesture-controlled operations.\nThe trend towards Extended Reality (XR) can be evaluated on the basis of the components mentioned above. This collective term describes use cases in which AR, VR and MR devices interact with peripheral devices such as environmental sensors, 3D printers and internet-enabled smart home devices such as household appliances, industrial machines, vehicles and much more. Due to the synergies of AR, VR and MR technologies for the interactive display of 3D content, as well as the modern development tools and platforms, the term XR is increasingly becoming common vocabulary. In the industry, numerous use cases and opportunities for XR platforms are currently being evaluated, especially those that enable services across devices, development platforms and company systems.\nExtended reality in its various forms is not just for gamers and tech enthusiasts. There is also a plethora of implementation options that businesses can benefit from.\nA technology with versatile potential Extended Reality can be used in a variety of areas in companies, starting with training your employees or maintaining your equipment. VR technology can create and simulate environments that are either difficult to reproduce or involve enormous costs and/or risks. Examples of such scenarios are factory training courses or training courses on fire protection or controlling an aircraft. 
AR, on the other hand, can offer technicians contextual support in a work environment. For example, a technical specialist wearing AR-enabled smart glasses can view digital content that is displayed in their work environment. This could take the form of step-by-step instructions for a task, or looking through relevant manuals and videos while on the assembly line or on site performing a task. The AR device is also able to take a snapshot of the work environment for compliance checks. In addition, the tech specialist can stream their activity to a remote specialist who can annotate the live environment and review the work.\nBut companies can also take advantage of the technology that allows autonomous vehicles to create virtual maps and locate objects in an environment. Similar technology enables XR-enabled devices to recognize unique environments and track the location of the devices within them. Virtual positioning systems for large facilities such as airports, train stations or factories can be created. Such a system also enables users to mark, locate and navigate to specific target points.\nXR technology can also be used in the retail sector to improve the customer experience and make it more unique. In an XR environment, customers have the opportunity to try out 3D models of products before buying them. For example, users can customize a virtual model of a piece of furniture in terms of color, interior design and fabrics. They can also look at the piece of furniture from all sides, experience the personalization, and assess how it looks in the surroundings of their home.\nExtended Reality: Hardware and software have to go hand in hand XR technology is evolving at such a rate that the untapped potential is overwhelming. In order to be able to fully exploit it, it is recommended that the development of the software keep pace with the development of the hardware. 
The latter needs to be user-friendly and lightweight, with a wide field of view, immense processing power and long battery life.\nThe software, on the other hand, needs to make better use of the advantages of artificial intelligence and machine learning in order to better assess different environments and make them accessible for every possible scenario. Organizations should also proactively create as many use cases as possible, fix deficiencies, and contribute to XR development practice. VR applications are a reality that prepares us for the pronounced effects of AR and MR technology in the future.\nThis article is derived from ThoughtWorks Insights and originally published at TechTag.de\n","id":9,"tag":"xr ; ar ; vr ; mr ; fundamentals ; thoughtworks ; technology","title":"eXtended Reality - the new norm?","type":"post","url":"https://thinkuldeep.com/post/extending_reality_new_norm/"},{"content":"Covid19 is still immeasurable, and we are more or less getting adjusted to this new normal. We are eagerly waiting to say goodbye to the year 2020 and welcome the coming year of hope, 2021. Read more here to know why I call this year 2020 the year of change.\nIn April this year, I explained XR development from home, and that was still close to the office; I was less than an hour away. The year 2020 wanted more changes, and two months ago we decided to take a break from the hustle-bustle of metro life and move to my native place, Raisinghnagar, a small peaceful town in Rajasthan, India. Here, covid19 cases are almost nil and we feel safer. Above all, we got a chance to stay with family after more than 2 decades (since my schooling days).\nThe changes continued. I took this opportunity to bring in other positive changes and started following small-town life: early rising, walking, cycling near the canal, yoga, spending time in the farms, and celebrating with family. The kids are enjoying it the most. 
All this I am doing while regularly working from home, keeping in touch with fellow ThoughtWorkers and my mentees from PeriFerry. Surprisingly, I am getting very good internet speed here, and have never faced an issue due to the internet. I have also hosted a few webinars from here.\n Good, but how is XR helping…\n Ok, let me come to the point — we moved to this town around 25 years ago, and my parents have been keeping up the house we built at that time. Now the building is asking for a good makeover. Any change is not easy to accept; it required me to convince my parents about the change and to bear the inconvenience caused by renovation. XR tech made my life easy: when they saw the to-be house in XR, I got a “Let’s do it”. Once we accept a change, it is easy to travel further on the path. As I explained in my earlier article, this year 2020 will be remembered for life no matter what, so why not remember it for a positive change?\n(See the video demo at the end of this article)\nThe Need By now, you may have understood the need for the renovation; it is not just because of the old construction, but also because changes were required for car parking, a separate passage for tenants on the upper floor, etc. I took charge of designing the home on weekends, continuing to uncover more of my potential; read about uncovering human potential in Covid19 times. I realized the following challenges with my initial paper/picture-based designs.\n Actual scale design — We measured the house, I borrowed a pencil and a paper sheet from my son, and drew a few designs using my engineering skills in orthographic and isometric views; I also created some 3D designs on paper. I love all this.\n Hard to change the drawing — This is what I hate: re-drawing for every change, and the eraser also spoils the drawings. When I added color to a drawing, it added another complexity, and changes became impossible on the same drawing. 
It is hard to show the same drawing in different colors.\n Sharable multiple options — We need multiple options with color, style, and measurements that can be shared easily. Digital options are good for this; for example, sharing pictures on WhatsApp or by other means is quite popular.\n Visualize beyond imagination — We need a medium to visualize the To-Be state easily, with pictures of the house from multiple angles and multiple locations. Ideally, I want to let viewers go into the To-Be state and feel it, rather than see things only after completion.\n Safety and security — Renovation and structural changes are risky; any change that causes an imbalance in the structure could be fatal immediately, in the near term, or in the long term. It needs a close review from experts, and a few options got rejected for involving too many changes, being unsafe, or being too costly.\n After designing on paper and using up all my kid’s drawing sheets, I realized the issue with the approach I took: every time a change was expected, I needed to tell the mason and my parents to assume things, to avoid making changes in the design. I was losing the flexibility to change, and not reaching satisfaction. See the next section on how XR helped me come out of this dilemma.\nXR solution I utilized XR tech to solve some of the above challenges; here are the steps I have taken so far.\n3D Modeling Paper drawings were good for keeping measurements at real scale; the next step I took was to design the house in 3D. 3D modeling is a basic step for an XR solution. There are a number of free/commercial tools available for 3D design, like Blender, 3ds Max, Maya, CAD tools, etc. There are even some tools that can directly convert designs into 3D models. I was not comfortable with any of these, so I went ahead with my own skills and used plain Unity 3D objects to create a 3D model of the To-Be house at real scale. It wasn't as difficult as I thought. 
We can also get a few good models for buildings, such as doors, windows, vehicles and kitchens, from the Unity Asset Store.\nIf you haven’t yet gone through Unity 3D, find this easy article by my colleague Neelarghya.\nJust the 3D model alone solves many things: we can try different structures without completely rebuilding, and try different options with colors, tiles, etc. Take as many snapshots as you want from different angles and share them. Not just that, we can try different lighting conditions, day and night, and add some lights to see the impact. The 3D design tool allows the designer/developer to get inside the building, see it from inside, and take and share snapshots.\n You may say it is just 3D modeling, where is the XR here?\n The 3D model is more of a static design with a few animations, and the only easy way to share these designs is via pictures or videos. A 3D model walk-through needs knowledge of the editor, and I had to show it on my laptop every time anyone wanted to see it; that's not a scalable solution. Also, it is not immersive and doesn't give the real feel. That's where the next part comes in to fill the gap.\nBuilding an XR App to Visualize the 3D Model Once we have a 3D model, we can visualize it on AR or VR devices and can roam around and inside it. 
You can watch the AR-based car customization demo in the following article.\n eXtending reality with AR and VR - Part 2\n Such a concept can be built using ARCore + Unity or similar alternatives.\nFollow the steps below to build an easy visualization using Unity for AR-capable smartphones/tablets.\n[Note — This was done in Unity 2018, but it is recommended to use Unity AR Foundation with the latest version; please go ahead and explore that]\n Set up the project for ARCore — follow the Google guidelines, and import the ARCore SDK package into Unity.\n Remove the main camera and light, and add the prefabs “ARCore Device” and “Environment Light” available in the ARCore SDK.\n Make a prefab out of the 3D model of the home created in the last step.\n Create an InputManager class that instantiates the home prefab on a touch on the phone screen, and attach it to an empty game object.

public class InputManager : MonoBehaviour
{
    [SerializeField] private Camera camera;
    [SerializeField] private GameObject homePrefab;

    private GameObject home;
    private Vector3 offset = new Vector3(0, -10, 10);
    private bool initialized = false;

    void Update()
    {
        Touch touch;
        if (!initialized)
        {
            if (Input.touchCount > 0 && (touch = Input.GetTouch(0)).phase == TouchPhase.Began)
            {
                CreateHome(touch.position);
            }
            else if (Input.GetMouseButtonDown(0))
            {
                CreateHome(Input.mousePosition);
            }
        }
    }

    void CreateHome(Vector2 position)
    {
        Vector3 positionToPlace = offset;
        home = Instantiate(homePrefab, positionToPlace, Quaternion.identity);
        initialized = true;
    }
}

You can test it directly on an ARCore-compatible Android phone. Connect the phone via USB and test it with ARCore Instant Preview; this needs some extra services running on the device. Alternatively, you can just build and run the project for Android; it will install the app on the device, and you can play with your XR app.\n I have not included plane detection in it, but you may include it to make it more real. 
The house may appear on the detected plane.\n Since we need to run the app outdoors, and ARCore doesn't work perfectly outdoors, we need to do more optimization in the code to make it work; for now, I have added manual controls to move the 3D model at our convenience.\nAdd the following code to the InputManager:

[SerializeField] private Button leftButton;
[SerializeField] private Button rightButton;
[SerializeField] private Button upButton;
[SerializeField] private Button downButton;
[SerializeField] private Button forwardButton;
[SerializeField] private Button backButton;
[SerializeField] private Button resetButton;
[SerializeField] private Button zoomInButton;
[SerializeField] private Button zoomOutButton;

private float step = 1;
private float zoomStep = 0.1f;
private float minZoom = 0.3f;

private void Start()
{
    leftButton.onClick.AddListener(() => OnClickedButton(new Vector3(-step, 0, 0)));
    rightButton.onClick.AddListener(() => OnClickedButton(new Vector3(step, 0, 0)));
    upButton.onClick.AddListener(() => OnClickedButton(new Vector3(0, step, 0)));
    downButton.onClick.AddListener(() => OnClickedButton(new Vector3(0, -step, 0)));
    forwardButton.onClick.AddListener(() => OnClickedButton(new Vector3(0, 0, step)));
    backButton.onClick.AddListener(() => OnClickedButton(new Vector3(0, 0, -step)));
    resetButton.onClick.AddListener(OnClickedReset);
    zoomInButton.onClick.AddListener(() => OnClickedZoom(zoomStep));
    zoomOutButton.onClick.AddListener(() => OnClickedZoom(-zoomStep));
}

private void OnClickedButton(Vector3 position)
{
    if (initialized)
    {
        home.transform.position = home.transform.position + position;
    }
}

private void OnClickedReset()
{
    if (initialized)
    {
        home.transform.position = offset;
    }
}

private void OnClickedZoom(float scale)
{
    if (initialized)
    {
        Vector3 currentValue = home.transform.localScale;
        if (currentValue.x + scale > minZoom && currentValue.y + scale > minZoom && currentValue.z + scale > minZoom)
        {
            home.transform.localScale = new Vector3(currentValue.x + scale, currentValue.y + scale, currentValue.z + scale);
        }
    }
}

This completes the code. You may find the source code here.\nExperience the To-Be home in XR Run the app on your phone. When trying it outdoors, some of the effects do not work well; use the given controls to manually move the model left, right, up, down, near, and far. We can use the zoom-in and zoom-out controls to scale it to real size. You can get inside the model either by physically moving toward it while holding the phone, or by using the controls to bring the home closer so that you end up inside it.\nHere is a video demo.\n I showcased it to my parents, wife, and kids; having experienced the To-Be home, they are now far more interested in “Let's do it” than in the earlier confusion.\nConclusion I have shared my quick XR experiment, which could save a lot of paper, time, and money, and still achieve better agility in designing the home. This experiment is just a concept; it needs a lot of optimization to take it to production quality, and alternative interactions to deal with tech limitations.\nThe following could enhance the use case much further:\n Structural change simulation — Design each 3D part with the right weight as a rigid body, and load the BIM (Building Information Model) with gravity. Then try changing the structure and see the impact, like pressure changes on beams and the remaining structure. This would give a clear assessment of safety after the changes. Show the actual size scale in 3D itself, as the current version still does not eliminate the need for paper designs. Automatically scan the older structure and create a 3D model that can be changed later. Add tools for style customizations — color, lights, tiles, etc. Build a platform to include any 3D model. 
More suggestions are welcome.\nThis article is originally published at Medium\n","id":10,"tag":"xr ; thoughtworks ; motivational ; technology ; ar ; vr ; mr ; work from home ; covid19 ; change","title":"XR enabled me to renovate my home!","type":"post","url":"https://thinkuldeep.com/post/xr-enabled-home-renovation/"},{"content":"Today, the world is celebrating Teacher’s Day. I wish a Happy Teacher\u0026rsquo;s Day to all of us! We are all teachers and learners, as every one of us has something to learn from others and something to teach others. I am thinking of an equation of learning that defines my way of learning, and I hope it will be helpful for you too. The learning equation — Learning is proportional to Sharing; Learning ∝ Sharing\n The more we share, the more we learn. The better we are at sharing, the better we become at learning. When we stop sharing, we stop learning too.\n We learn every day in all situations of life, good or bad. The learning matures only by sharing; you explore more of yourself by giving it away to others. Writing helps me discover what I know, and that I can say confidently. — https://thinkuldeep.com/\n I echo my colleague Jagbir’s view on why he shares his takeaways with the wider audience. He is an influential writer and he starts with “IT’S NOT YOU, IT’S ME”.\n Why Does he Spam us with Book Reviews!? \u0026ldquo;A lot of folks read books and quite a few book reviews already exist, then why should I care about your post?\u0026quot;. Truth…\n You may find some of my takeaways here.\nYesterday, I was discussing my mentoring experience with the PeriFerry team, and some lifelong lessons I learned there. This mentoring initiative is supported by PeriFerry and ThoughtWorks.\nIt’s been over 2 months since @thoughtworks partnered with @periferrydotcom for launching Prajña. Meet @KuldeepSingh \u0026amp; Julie. We wish all of PeriFerry\u0026#39;s trainers, teachers, mentors a very Happy Teacher’s Day! 
🌱\nRead more - https://t.co/odWuMBQM8z#Teachers #education#TeachersDay\n\u0026mdash; PeriFerry (@periferrydotcom) September 5, 2020 Here’s something from one of our mentors. Kuldeep says, “I had goosebumps on the very first call when Julie said, ‘’I am unique, and I am proud of myself’’. It was just so powerful to hear that. Mentoring Julie has given me a new perspective on life; that each one of us has unique things to be proud of. We are all constantly learning and evolving, and the learning only matures by sharing. I’ve explored more of myself only by giving it away to others; to teach is to reaffirm our own lessons. I am Julie’s mentor, yet I have learnt a lot from her too… Thanks to ThoughtWorks and PeriFerry for providing us this platform!”\n There was a question: when should we mentor others? I relate mentoring with sharing. As per my equation above, we should start sharing when we want to learn. See, learning is a natural process from birth; we keep learning, and so is the process of sharing. We never stop learning and we never stop sharing. Unknowingly we keep sharing, whether we want to or not. Mentoring is all about sharing what we want to share and becoming better at it.\n If we share anger with others, we become better at it,\nif we share jealousy with others, we become better at it,\nif we share knowledge with others, we become better at it,\nif we share our values with others, we become better at it,\nif we share hate with others, we become better at it,\nif we share love with others, we become better at it…\n There is a lot to learn and a lot to share; even this year 2020 is teaching us.\nHappy Teacher\u0026rsquo;s Day to all my teachers, for teaching me the values I carry today! 
Keep sharing!\nThis article is originally published at Medium\n","id":11,"tag":"general ; thoughtworks ; motivational ; takeaways ; mentor ; life ; experience ; learnings","title":"The Learning Equation","type":"post","url":"https://thinkuldeep.com/post/the-learning-equation/"},{"content":"It’s now two years since I accepted a challenging change: leaving my previous organization after 13 long years and joining ThoughtWorks. Any change requires sacrifices and dedication before it becomes a positive change, but the journey becomes easy when we believe in the change and accept it. A change introduces us to a new environment and enables new thoughts and new energy. The two-year journey at ThoughtWorks is no different.\nA ThoughtWorks interview conversation leaves a long-lasting impact on people, and I too was influenced by the interview process. By the time I joined ThoughtWorks, I had read and heard multiple things about it: the interesting culture, different ways of working, a people-first organization where connection matters most, and also how hard it would be for a lateral hire to get settled. Turning down other offers wasn\u0026rsquo;t easy, but I was prepared to accept the challenge and joined ThoughtWorks with mixed feelings.\nHere I am writing down the changes that happened around me, how I learned to sail in the sea of thoughts, and why I call these two years thoughtful years.\nInduction in ThoughtWorks In the initial few days, I got the cultural dump: history, vision, who is who, office working structures, why us, etc. This is no different from other organizations. However, the interesting part started when I was asked to meet a number of people who were a little settled in the organization or had gone through the journey I had just started.\nSee what I got: a lot of inputs from different people, and most were talking about how ThoughtWorks is different and maybe difficult too. 
Some suggested doing away with the baggage of titles/experience I was carrying; some suggestions were a little lighter, asking me to just go with the flow and take it easy. As a result, I got more confused.\nA few things I learned initially were hard to absorb, like democratic decision making in delivery, too much transparency, combined ownership of each and everything (we even have two MDs for India), people signing up on their own, and almost no hard structures in the team and organization. There are just 4 grades; I joined at the highest grade here, but that is nothing to be proud of, and no one talks about what grade you are in.\nNow you are in a sea of people: build your network, create your influence, and influence the influencers. Choose your mentors, cultivators, and so on. In short, define your own journey!\nWelcome to ThoughtWorks, your journey starts now! If all of the above bothers you, then yes, it\u0026rsquo;s hard, but I followed a golden rule: observe more and react less. I liked the culture in theory but was more curious to know how things were on the ground, and that\u0026rsquo;s where the next section starts.\nCheck the ground to build a strong foundation I knew I had to change and learn new ways of working, but unless I believed in the to-be state, it wouldn\u0026rsquo;t be possible. Enough of theory; I wanted to see things on the ground. What does the word “Agile” mean here? How do the teams collaborate? How strong are the processes? What is so special about TW and TWers?\nI signed up to start my TW journey from the ground and joined a super techie team as a developer; the team was working on a commodity trading solution in Blockchain and Microservices. Blockchain was new, but I was well equipped in microservices-based development. I started taking functional stories and writing code, but was it that easy? It was time to test myself, TW, and TWers, and to get myself tested by TWers. 
Here are my observations and learnings from the ground.\n Sign-up — Yes, you already got a hint in the last paragraph: people do sign up here. I learned to move from the paradigm of “task assignment” to “sign-up”. Stand-up — Everyone shares updates on the tasks they have signed up for, and any issues or bottlenecks. The interesting part of stand-up is that it is run by anyone in the team; you don’t need a scrum-master/TL/PM role or special experience to do that. In fact, the TL/PM remains silent here unless there is an update to share. Every other member carries the responsibility to maintain the discipline of stand-up; they raise a flag on any discussion going beyond the limit, and then you are supposed to pass on the discussion to a later time. Team members sign up for the next tasks, and inform the team if there is any call-out. eXtreme Programming (XP) in practice — I saw XP in practice for the first time. To my surprise, I was productive from day 1: no special induction, and I started pairing with other developers after a project brief from a Business Analyst. Every developer (irrespective of experience level) is confident in talking about unit tests and their value, and knows the art of refactoring. The team is focused on maintaining the test pyramid, and reducing the feedback cycle with pair programming, unit tests, DevBox, and automation testing. All these make TWers super agile. Thanks to the Dev BootCamp where I learned these XP concepts; I have also shared my learnings on test-first-pair development here. Huddles help take better decisions — Team members call for a huddle to discuss any tech issue or bottleneck, and take input from the rest of the team before proceeding. It\u0026rsquo;s a collective decision by the team to go for a particular option, and it leads to a more confident decision and less burden on individuals. Even the estimation of stories/tasks is done collectively. 
Overlapping boundaries across roles — A Business Analyst here does not just gather requirements but also manages the scope of the project, iteration management, and prioritization with Product Owners. Project Managers (if any) are more focused on enabling the team and providing autonomy to the team; no wonder if one sees a PM pairing with people. QAs review the unit tests during DevBox, do active programming for automation test suites, and help BAs in grooming stories with the right acceptance criteria. I learned blockchain technology and tried my hand at streamlining architecture cross-cutting concerns. This became a great platform for me to experience ThoughtWorks practices and build a network inside and outside the team. It has definitely built a strong foundation. I learned that the “Agile” process means “what makes sense” here; the project teams decide how they want to run the project, with no organization-level enforcement.\nFound new love in Reading Books I was not a habitual book reader; this has changed now. ThoughtWorks gives a set of books as part of the joining kit, and there is also a yearly allowance to buy books. I must give credit to the books and their authors who helped me understand the ThoughtWorks analogy better and believe in it. It started with Extreme Programming Explained by Kent Beck. Another book, Agile IT Organization Design — For digital transformation and continuous delivery by Sriram Narayan, explains software development as a design process; this book exactly represents ThoughtWorks as an organization, and I have experienced most of its concepts on the ground.\nAnother book that influenced me a lot is What Got You Here Won’t Get You There by Marshall Goldsmith. This book is my change enabler and motivates me to change for good, change to become better; I literally followed this book to work on some of the improvement areas I received in feedback from colleagues. 
Yes, you get feedback upfront, no matter what role you are in; ThoughtWorks has a great feedback culture imbibed in its project delivery practices.\nWriting is an addiction Just like reading, writing is also an addiction. We just need a good platform and motivation to do it. Just after joining ThoughtWorks, I realized that people write a lot; there are a number of books written by ThoughtWorkers. It motivated me to set up my personal blog site thinkuldeep.com — the world of learning, sharing, and caring.\n Welcome to Kuldeep\u0026rsquo;s Space\n The world of learning, sharing, and caring\n In the last two years, I have published more than 30 articles on ThoughtWorks Insights, Medium, DZone, and LinkedIn Pulse. I have also collected all my past authored content that was never published; my personal blog now has around 70 articles, 13 events, and more about me and my social profiles, all in one place. It covers a wide range of topics: motivational, take-aways, covid19, technology, blockchain, ARVR, IoT, micro-services, enterprise solutions, practices, and more.\nI have been sharing my writing experience with my team, and now my writing is spreading; my teams have written more than 70 articles on the following publications: XR Practices, Android-Core, and ThoughtWorks Insights.\nDoers are welcome By now, you get the sign-up culture here. I signed up for setting up capabilities/practice for ARVR. We got some good assignments, and have a number of upcoming prospects. We started from scratch, got sign-ups from interested people, organized some in-house training, and did a number of experiments around eXtended Reality. We have shared our learnings on ARVR practices internally and externally in the form of training, webinars, and blogs. 
We have found ways of working in the time of Covid19, working from home without any impact on productivity and delivery.\nFear of failure is a major bottleneck for signing up for such an initiative, but at ThoughtWorks, we have overcome the fear of failure; we celebrate the failures too. We have events like FailConf at ThoughtWorks.\n My colleagues discussing successful failures We discussed what went well, what we learned from those failures, and how we made them successful failures.\nGrow your social capital I realized the real importance of building social capital after joining ThoughtWorks, and tried multiple ways to strengthen connections and communities inside and outside ThoughtWorks. I participated in or organized more than 10 events, covering workshops, webinars, and other events.\n Mentor and Judge at Hackathons (Smart India Hackathon, Hack In India) AwayDay and XR Days in ThoughtWorks were really helpful for me to know different offices and cultures. The Inception Workshop was an eye-opener for me. Spoke to Student and Developer Communities (Google DevFest 2019, Google Developer Student Club) Joined The VRAR Association, and am co-leading the Product Design and Development community. I never bothered about social connections, but they matter; my LinkedIn followers have also grown from 200 to more than 2000 in the last two years.\nPurposeful work I wrote about imbibing purpose and balance in life, inspired by the book Life’s Amazing Secrets. ThoughtWorks also believes in balancing 3 pillars: sustainability, excellence, and social justice. Working for social justice helps bring purpose to our work. I was able to revive a few hobbies which had got buried under work before ThoughtWorks, and uncovered new potential. I, along with other ThoughtWorkers, have started mentoring at PeriFerry, and also supported the Indeed Foundation. I have also signed up for mentorship at Dreamentor. 
It gives real pleasure when we guide someone and see them become successful.\nThoughtWorks is a more than 25-year-old organization, and has an immense culture built over these years; understanding everything in just 2 years is not enough, and I am still learning. Thank you all for making the journey fruitful.\nStay tuned and keep in touch!\nThis article is originally published at Medium\n","id":12,"tag":"general ; thoughtworks ; nagarro ; life ; change ; reflection ; takeaway ; motivational","title":"Two Thoughtful Years!","type":"post","url":"https://thinkuldeep.com/post/two-thoughtful-years/"},{"content":"eXtended Reality (XR), which includes AR and VR technologies, has been silently disrupting the gaming industry for more than a decade. In the recent past, the tech has gained some traction in the education and training space.\nToday, however, COVID-19 with its mandates of social distancing and less to no physical interaction has accelerated the adoption of XR. The tech is virtual, contactless, and delivers outstanding brand experiences - perfect for the new normal that multiple industries like retail will benefit from. To do that, the sector will first have to assess the gaps that the new normal brings, and that XR could service -\nFive ways COVID-19 is changing human behavior Trust and confidence: People are low on trust these days. The fear-mongering mixed amid COVID-19 news has become a daily challenge to deal with. According to Outlook India, a C-Voter nationwide survey indicates that the trust levels of Indians in regard to social media are very low.\n Self-policing: Studies have shown that one\u0026rsquo;s behaviour in emergency situations depends on the individual’s perceived sense of security. And during the current pandemic, we are seeing unreasonable or unusual behaviour in response to what was once considered regular conduct.\n Virtual-friendly: 2020 has been the start of everything virtual. 
People indulge in virtual workout sessions, meetings, conferences, coffee breaks, dating, gaming, ceremonies, and more.\n Companionship: While social distancing might stay, our social nature will require us to be more intentional about seeking and responding to companionship and emotional support.\n Freedom: The lockdown has morphed the meaning of freedom. People have to reimagine what freedom means within the framework of acceptable behaviour during and post the pandemic.\n How does ‘virtual vs. physical’ impact retail consumers and enterprises When talking about consumers, let’s use the example of a store visit; XR is changing online shopping and making it an immersive and virtual experience. 360-degree store layouts allow consumers to explore and discover products at leisure, exactly as they would in the real world - if not enjoy a more empathetic and gamified experience.\n Source : https://www.groovejones.com/shopping_aisle_of_the_future_ar_portal/ Imagine a shopping experience where one could tag family and friends and go on a communal and virtual shopping trip. While online shopping allows people to share their wishlists, immersive virtual shopping experiences definitely up the ante.\nAdd to this the option of virtually bumping into social contacts who are shopping at the same place as you. Social alerts on their buys and a native chat application will bring us as close to the real deal as possible.\nCustomers can also choose to interact with store assistants. Such experiences can be designed as invested conversations with the brand. Consumers are also encouraged to make intelligent buying decisions with contextual product information like offers, reviews, sizes, colors, etc. 
ThoughtWorks’ TellMeMore app tries to do just that - provide a better in-store product experience by overlaying information like book reviews, number of pages, and comparative prices beside the product (in this case, a book) view.\nVirtual positioning systems help build indoor positioning applications for large premises like shopping centers, airports, train stations, warehouses, factories, and more. In the case of a retail store, such a system will allow users to mark, locate, and navigate to specific items on virtual shelves.\nVirtual trials could easily become the norm, where customers try products from fashion to make-up to cars and even houses and hotel rooms with ease and in as much detail as they’d like. And, when this is a collective experience for customers’ friends and family or even their favorite influencers - the resulting emotional connection with the retail brand is long-lasting.\nFinally, adding tech like Haptics goes beyond the usual visual and hearing senses and introduces the sense of touch to customers’ shopping experiences.\nFor retail enterprises, one of the most obvious advantages of XR is stores being open 24/7 and 365 days a year. Enterprises also have the opportunity to personalize their VR store layouts to suit exactly what each of their customers wants to see and experience.\nIn-store customer journeys are difficult to track. With VR, enterprises will track who the customer is, where the individual spent the most time in the store, buying habits and history, favorite products, the influencers followed, and usual triggers to complete a purchase.\nThe collected data, thanks to AI tech, can be processed in real time for insights and used to influence the customer when in the store. Additionally, algorithms that determine ‘propensity to buy’ and ‘propensity for churn’ will be leveraged to help customers make quicker buying decisions. Retailers can give their customers proactive and personalized offers or discounts when they are in the VR store. 
As customers ‘walk’ through the store, such pop-ups will gain attention and push sales.\nIt’s common knowledge that customers sometimes defer buying decisions because of financial constraints. While most retail enterprises tie up with financial institutions to alleviate this hesitance, the failure lies in leveraging that partnership at the right moment. With XR, retailers can integrate such financial support into the shopping experience. A customer can simultaneously explore products and available EMI or other financial support options.\nA word of caution: most customers, except gamers, might not own Head Mounted Devices (HMD). This suggests that retail VR experiences should not limit themselves to engaging with only certain devices, but should enable immersive shopping experiences through a wide spectrum of mobile devices. This will aid customer adoption, further leading to explosive growth in the XR-powered retail space.\nConclusion COVID-19 is perhaps irrevocably changing human behaviour, and that includes the many retail-led transactions that people have on a regular basis. Physical touch and trials will be replaced with virtual experiences powered by immersive technology. People will make buying decisions based on a virtual, computer-generated replica of physical goods.\nWhat\u0026rsquo;s encouraging is the rate at which XR technology is accelerating. Retailers could embed touch and smell into customers’ virtual experiences quicker than earlier expected. We believe the combination of emerging tech like XR and AI is going to provide the world with an array of immersive and perhaps sci-fi-like shopping experiences to explore.\nThis article is originally published at ThoughtWorks Insights and derived from XR Practices Medium Publication. 
It is co-authored by Hari R and edited by the Thoughtworks content team.\n","id":13,"tag":"xr ; ar ; vr ; future ; retail ; technology ; change ; thoughtworks ; enterprise","title":"The future of shopping","type":"post","url":"https://thinkuldeep.com/post/future-of-shoping/"},{"content":"Change is the only constant; the earlier we accept it, the earlier we get ready for the future. 2020 is a year of change: accept it, change our lifestyle, and get ready for the future.\nI have been writing about change in the past, right from accepting the change, to changing for good, to having a purposeful and balanced life, and uncovering human potential in this Covid19 time.\nToday, June 20, is my younger son Lovyansh’s 3rd Birthday, and the month of June is really a time to rejuvenate and celebrate for the kids, when they are free from school. In India, this is the time when families plan to go to holiday destinations with kids, and generally the holiday destination is the maternal or paternal home, where kids spend time with their grandparents. Especially in families like mine, kids only get a chance to live with grandparents during this time. My wife Ritu, who has taken a break from work life, started enjoying the month of June along with the kids at her parents\u0026rsquo; place, and I too enjoyed a bachelor\u0026rsquo;s work-life here :) but this Covid19 has upset all of us. Neither my sisters and their kids could come to our place, nor could we go anywhere. That we could not celebrate this birthday with family was setting us back.\nAnd this morning, I came across a really nice note — What if 2020 isn\u0026rsquo;t canceled?\n View this post on Instagram A post shared by LESLIE DWIGHT (@lesliedwight) on Jun 3, 2020 at 12:38pm PDT\n We all know that it can not be canceled. We have to accept it. 
This is definitely the year of change, and it will be remembered for our whole life and after, no matter what we do.\nSo fine, then why not celebrate a birthday as usual? We probably need to make some changes as per our comfort, and can still go with it. This article is about the changes that helped us enjoy Lovyansh’s birthday. So don’t be set back; go ahead and find the change that makes you happy, and make this year the year to remember.\nI feel satisfied when I get the following four senses when we plan a get-together. Getting together is really important for mental health and keeping social balance in life.\nSense of Preparation The hosts generally plan a lot: invitations, return gifts, venue, decorations, and some events during the celebrations to entertain the guests, and then execute all of these. We try our best as per budget and satisfaction. Now all this may not be there, but there are so many things that we used to ignore, the small moments that we never thought could be enjoyable. Here are a few snapshots\n Decoration — become the home decorators; try all the creativity that you want, and involve family members. My son Lakshya decorated the home, and we observed his creativity. We handed over a hut that we prepared last week to Lovyansh. Cooking — Ritu prepared a cake, delicious pav bhaji, and a few items as per the kids\u0026rsquo; choices. Getting ready — I got the duty to bathe the kids, get them ready, get myself ready, and clean the area for the party. I was advised to stay in the party mood always, and warned to stay away from office-related talks :) Planning — Go with the flow, but still, it is better to plan a few things. We also did some planning for two types of events and some plays on songs. Entertainment — Get ready to be a DJ for playing songs; prepare laptops, PCs, and devices for the video calls. 
Sense of Celebration Getting a sense of celebration is important; it comes from multiple factors, such as dresses, make-up, decorations, food, and entertainment. When we are home, all these things look unnecessary. That\u0026rsquo;s where the change comes.\nGet dressed well, be in the party mood. It may be a good idea to ask your guests to be in the party mood and dressed well too.\nWait a moment, did I say guests? Yes. Virtual guests! Invite the guests to join a video call. Be as creative as possible here; all the guests may not be well equipped with video conferencing tools. We were not well prepared here, but the kids took full credit. Plan some virtual games if that entertains the audience. Make sure you include everyone in all the conversations. Do some videography, and record every moment. Plan some plays with the kids that you will want to remember for years to come.\nOf course, the cake cutting, with a full house with the family.\nSense of Togetherness When we host a function, we invite guests, and they happily join the function. It gives a great sense that we are not alone. We exchange gifts that symbolize a sense of giving and taking. But now, due to COVID, getting the physical presence of our loved ones is hard.\nThe change is to believe in the virtual presence: let the loved ones join on a video conference, where we can see their expressions and hear their voices, just like when they come physically. It is definitely not the same as a physical presence, but it will definitely boost the sense of togetherness. Try it out.\nYou may exchange gifts in many digital ways, but in my opinion that is also not essential; exchanging expressions and feelings may be treated as the give and take. 
Here our kids played well; they showed handmade cards to each other and then shared them digitally, and they also shared some email/WhatsApp messages.\nIn short, use technology to fulfill the sense of togetherness.\nWe followed the same on the next day, and wished my maternal grandfather and grandmother on their 59th anniversary, on a video call. For some time, the wishes had been getting limited to WhatsApp text wishes; this time we wanted to break the stereotype, and go for a video group call instead of audio or text wishes.\nKeep video calls as the first preference, then audio, and at last use text messages. A text message is an asynchronous mode of communication; it should be used only when we don\u0026rsquo;t expect a quick response, and let the receiver respond as per their availability. However, in the above case, we did not want to keep waiting to get the blessings from our grandparents over a text message, and chose to do a video call and feel blessed.\nSense of Remembrance Another aspect of hosting a get-together for friends and family is getting a sense of remembrance in the future. I am sure many of us have gone through flashbacks many times in this COVID19 time. By now, we might have looked at the videos and pictures of our past events, which we had not touched for years. We need to keep creating that sense of remembrance.\nSo record each and every moment; that’s what I am doing now, and even writing this article is an act of remembrance. I have edited family videos for the first time.\nI would like to remember this birthday for years to come; whenever 2020 is remembered, this birthday must also be. Thank you, all my loved ones, for joining this celebration.\nConclusion Don\u0026rsquo;t let Covid19 stop the celebrations in life. 
Let’s accept Covid19 in our life and change our ways, change our beliefs, and live a healthy, happy life.\nThis 2020 is an important year of our life, and we have to remember it for our whole life; let’s make it the year of change and a year to remember with good memories too.\nI wish you all the best! Stay healthy, stay happy.\nThis article is originally published at Medium\n","id":14,"tag":"general ; selfhelp ; covid19 ; motivational ; life ; change","title":"2020 — The year of Change","type":"post","url":"https://thinkuldeep.com/post/2020-year-of-change/"},{"content":"In my earlier article, I explained the fundamentals of XR, the types of XR, and the popular devices in those XR types. ARVR is not only about Head-Mounted Display (HMD) devices. There is an ever-growing list of Google ARCore supported smartphones/tablets, and ARKit is supported in recent Apple phones. Still, when it comes to AR and VR devices, the first thing that comes to mind is head-mounted display devices (HMD); however, the newer mobile phones/tablets are now coming with AR/VR support with a minimal set of tools attached.\nSimilarly, when it comes to ARVR headsets, only a few names come to mind, like Google Glass, Microsoft HoloLens, MagicLeap, Oculus, GearVR, but the list is really big and growing fast. CES 2020 has seen many new entrants in the segment.\nIn this article, I am trying to curate a list of XR device manufacturers and the devices they have produced so far, and are producing in the near future. I am not categorizing the list into AR, VR, or MR; it is just XR.\nKey highlights There are more than 10 manufacturers trying to make normal-looking XR glasses. There are a number of competitive offerings from manufacturers other than the top known ones (Google, Microsoft). While some big brands are in a catch-up game, a few new entrants are much ahead in the game. Near human-eye resolution is achieved, and it\u0026rsquo;s not by the big players. 
Near human-eye Field Of View has been achieved, and it\u0026rsquo;s not by the big players. 60% of the XR devices are Android (AOSP) based. The Detailed List: Please find the detailed list of 70+ manufacturers and their 150+ XR devices in my article at the XR Practices Medium Publication\nConclusion: So far the article mentions 70+ XR device manufacturers and 150+ devices. I have not included a list for Mobile VR; you can find many such vendors building VR boxes just by searching Amazon, Flipkart, or a similar eCommerce website. So the overall list is even bigger and growing day by day.\nThere are a number of peripherals also being manufactured for XR devices, such as haptic gloves, jackets, and hats; we have not taken them into account yet.\nLet\u0026rsquo;s now build more use cases using these devices and solve some real issues being faced by humanity. We are going through the COVID time, and we have seen the usage of XR in health; let\u0026rsquo;s try to solve some more and build more solutions which promote hygiene, touch-less interaction, remote working, etc. The more we use these devices, the sooner they will reach our price range.\nThanks for reading. Let me know if I have missed any; I will include them.\n","id":15,"tag":"xr ; ar ; vr ; devices ; analysis ; general","title":"The Growing List of XR Devices","type":"post","url":"https://thinkuldeep.com/post/the-growing-list-of-xr-devices/"},{"content":"Accessing native XR features from the device using a web page is another form of WebXR.\nIn this article, I want to share an experiment where I tried to control the Android device’s native features, such as controlling the volume and the camera torch, from a web page. This use case can be really useful in the XR world, where I can control the feature access in the device from cloud services; even the whole look and feel for showing the device controls can be managed from a hosted web page.\nI treat this use case as a new form of Web XR, where I try to use native features of devices from the web. 
It opens the web page in an Android WebView on the device, then intercepts the commands from the web page and handles them natively. This way, we can also create a 3D app in Unity using a 3D webview and show any 2D webpage in 3D. There could be many use cases for this trial; I will leave it up to you to think more, and let me know where you could use it.\nLet me start and show you what I have done.\nThe WebView Activity: Let’s create an activity which opens a webview; I have implemented it in one of my earlier articles. It intercepts the parameter “command” whenever there is a change in the URL. On successful interception, it loads the URL with a new parameter called “result”. We will discuss the use of “result” later.\nprotected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); final String baseURL = getIntent().getStringExtra(INPUT_URL); webView = new WebView(this); webView.setWebViewClient(new WebViewClient() { public boolean shouldOverrideUrlLoading(WebView view, String url) { Uri uri = Uri.parse(url); Log.d(TAG, \u0026#34;shouldOverrideUrlLoading\u0026#34; + url); String command = uri.getQueryParameter(\u0026#34;command\u0026#34;); if (command !=null) { ResultReceiver resultReceiver = getIntent().getParcelableExtra(RESPONSE_CALLBACK); if(resultReceiver!=null) { Bundle bundle = new Bundle(); bundle.putString(\u0026#34;command\u0026#34;, command); resultReceiver.send(RESULT_CODE, bundle); Log.d(TAG, \u0026#34;sent response: -\u0026gt; \u0026#34; + command); } view.loadUrl(baseURL + \u0026#34;?result=\u0026#34; + URLEncoder.encode(\u0026#34;Command \u0026#34; + command + \u0026#34; executed\u0026#34;)); return true; } view.loadUrl(url); return true; // return true so the default action does not handle it } }); WebSettings settings = webView.getSettings(); settings.setJavaScriptEnabled(true); setContentView(webView); //clear cache webView.loadUrl(baseURL); } Pass the command back to the originator via callbacks.\npublic static void 
launchActivity(Activity activity, String url, IResponseCallBack callBack){ Intent intent = new Intent(activity, WebViewActivity.class); intent.putExtra(WebViewActivity.INPUT_URL, url); intent.putExtra(WebViewActivity.RESPONSE_CALLBACK, new CustomReceiver(callBack)); activity.startActivity(intent); } private static class CustomReceiver extends ResultReceiver{ private IResponseCallBack callBack; CustomReceiver(IResponseCallBack responseCallBack){ super(null); callBack = responseCallBack; } @Override protected void onReceiveResult(int resultCode, Bundle resultData) { if(WebViewActivity.RESULT_CODE == resultCode){ String command = resultData.getString(WebViewActivity.RESULT_PARAM_NAME); callBack.OnSuccess(command); } else { callBack.OnFailure(\u0026#34;Not a valid code\u0026#34;); } } } The originator needs to pass the callback while launching the webview.\npublic interface IResponseCallBack { void OnSuccess(String result); void OnFailure(String result); } I have created the above activity in a separate library and defined it in the library\u0026rsquo;s manifest; however, that is not necessary, and you may keep everything in one project as well. 
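The round trip between the page and the native layer reduces to two string operations: pull the `command` query parameter out of the intercepted URL, and build a `result` URL to load back. A minimal plain-Java sketch of that logic, assuming `java.net.URI` and `URLEncoder` stand in for Android's `Uri` (the helper names `extractCommand` and `resultUrl` are mine, for illustration only):

```java
import java.net.URI;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class CommandRoundTrip {
    // Mimics uri.getQueryParameter("command") from the WebView interception above
    static String extractCommand(String url) {
        String query = URI.create(url).getQuery();
        if (query == null) return null;
        for (String pair : query.split("&")) {
            String[] kv = pair.split("=", 2);
            if (kv.length == 2 && kv[0].equals("command")) return kv[1];
        }
        return null;
    }

    // Builds the "result" URL that the WebView loads back after handling a command
    static String resultUrl(String baseUrl, String command) {
        return baseUrl + "?result="
                + URLEncoder.encode("Command " + command + " executed", StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String cmd = extractCommand("https://thinkuldeep.github.io/webxr-native/?command=volume-up");
        System.out.println(cmd); // volume-up
        System.out.println(resultUrl("https://thinkuldeep.github.io/webxr-native/", cmd));
        // https://thinkuldeep.github.io/webxr-native/?result=Command+volume-up+executed
    }
}
```

The real activity does exactly this, only with Android's `Uri.getQueryParameter` and the `ResultReceiver` callback carrying the command across.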
I kept it in the library so that, in the future, I may include it in Unity applications.\nThe Main Activity: In the main project, let\u0026rsquo;s now create the main activity which launches the WebView activity with a URL where our web page is hosted for controlling the Android device, along with the Command Handler callbacks.\nThe Command Handler: It can handle volume up, volume down, and torch on and off.\nclass CommandHandler implements IResponseCallBack { @Override public void OnSuccess(String command) { Log.d(\u0026#34;CommandHandler\u0026#34;, command); if(COMMAND_VOLUME_UP.equals(command)){ volumeControl(AudioManager.ADJUST_RAISE); } else if(COMMAND_VOLUME_DOWN.equals(command)){ volumeControl(AudioManager.ADJUST_LOWER); } else if(COMMAND_TORCH_ON.equals(command)){ switchFlashLight(true); } else if(COMMAND_TORCH_OFF.equals(command)){ switchFlashLight(false); } else { Log.w(\u0026#34;CommandHandler\u0026#34;, \u0026#34;No matching handler found\u0026#34;); } } @Override public void OnFailure(String result) { Log.e(\u0026#34;CommandHandler\u0026#34;, result); } void volumeControl (int type){ AudioManager audioManager = (AudioManager) getApplicationContext().getSystemService(Context.AUDIO_SERVICE); audioManager.adjustVolume(type, AudioManager.FLAG_PLAY_SOUND); } void switchFlashLight(boolean status) { try { boolean isFlashAvailable = getApplicationContext().getPackageManager() .hasSystemFeature(PackageManager.FEATURE_CAMERA_FLASH); if (!isFlashAvailable) { OnFailure(\u0026#34;No flash available in the device\u0026#34;); return; // stop here when no flash is present } CameraManager mCameraManager = (CameraManager) getSystemService(Context.CAMERA_SERVICE); String mCameraId = mCameraManager.getCameraIdList()[0]; mCameraManager.setTorchMode(mCameraId, status); } catch (CameraAccessException e) { OnFailure(e.getMessage()); } } } Load Web Page in WebView: It will just launch the webview activity with a URL.\nprotected void onCreate(@Nullable Bundle savedInstanceState) { super.onCreate(savedInstanceState); 
WebViewActivity.launchActivity(this, \u0026#34;https://thinkuldeep.github.io/webxr-native/\u0026#34;, new CommandHandler()); } Define the Main Activity in the Android Manifest, and that\u0026rsquo;s it; the Android-side work is done.\nThe Web XR Page: Now the main part: how the Web XR page passes the commands and reads the result back from Android. It is quite simple. Here is the index.html, which has 4 buttons on it to control the Android native functionalities.\n\u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html lang=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;meta charset=\u0026#34;UTF-8\u0026#34;\u0026gt; \u0026lt;title\u0026gt;New form of Web XR\u0026lt;/title\u0026gt; \u0026lt;link rel=\u0026#34;stylesheet\u0026#34; href=\u0026#34;style.css\u0026#34;\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;div id=\u0026#34;myDIV\u0026#34; class=\u0026#34;header\u0026#34;\u0026gt; \u0026lt;h2\u0026gt;New form of Web XR\u0026lt;/h2\u0026gt; \u0026lt;h6\u0026gt;Control a device from a web page without a native app\u0026lt;/h6\u0026gt; \u0026lt;div\u0026gt; \u0026lt;span onclick=\u0026#34;javascript: doAction(\u0026#39;volume-up\u0026#39;)\u0026#34; class=\u0026#34;btn\u0026#34;\u0026gt;Volume Up\u0026lt;/span\u0026gt; \u0026lt;span onclick=\u0026#34;javascript: doAction(\u0026#39;volume-down\u0026#39;)\u0026#34; class=\u0026#34;btn\u0026#34;\u0026gt;Volume Down\u0026lt;/span\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;div style=\u0026#34;margin: 25px\u0026#34;\u0026gt; \u0026lt;span onclick=\u0026#34;javascript: doAction(\u0026#39;torch-on\u0026#39;)\u0026#34; class=\u0026#34;btn\u0026#34;\u0026gt;Torch On\u0026lt;/span\u0026gt; \u0026lt;span onclick=\u0026#34;javascript: doAction(\u0026#39;torch-off\u0026#39;)\u0026#34; class=\u0026#34;btn\u0026#34;\u0026gt;Torch Off\u0026lt;/span\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;div id=\u0026#34;result\u0026#34; class=\u0026#34;result\u0026#34;\u0026gt; \u0026lt;/div\u0026gt; 
\u0026lt;script\u0026gt; function onLoad(){ const urlParams = new URLSearchParams(window.location.search); const result = urlParams.get(\u0026#39;result\u0026#39;); var resultElement = document.getElementById(\u0026#39;result\u0026#39;); resultElement.innerHTML = result; } onLoad(); function doAction(command){ var baseURL = window.location.protocol + \u0026#34;//\u0026#34; + window.location.host + window.location.pathname; window.location = baseURL + \u0026#34;?command=\u0026#34; + command; } \u0026lt;/script\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; Look at the onclick action listener: it just appends the command parameter and reloads the page. That\u0026rsquo;s where the webview will intercept the command and pass it to the Command Handler in the Android native layer.\nAlso, see the onLoad method: it looks for the “result” parameter, and if it is found, it shows the result on load. The webview appends a result parameter after intercepting a command.\nThe webpage is hosted in the same source code repo — https://thinkuldeep.github.io/webxr-native/\nTry all the buttons and see that they correctly add the command to the URL.\nIf you don’t want to get into publishing the webpage, you can package the HTML pages in the Android assets as follows\n put index.html and the css in /src/main/assets and change the URL in the Main Activity @Override protected void onCreate(@Nullable Bundle savedInstanceState) { super.onCreate(savedInstanceState); WebViewActivity.launchActivity(this, \u0026#34;file:///android_asset/index.html\u0026#34;, new CommandHandler()); } That\u0026rsquo;s it.\nDemo: So now you understand the whole flow; it\u0026rsquo;s time for a demo, right? Let\u0026rsquo;s go with it.\nTry it on your phone, and see it working.\nConclusion: In this article, I have tried passing a few basic commands from a web app, which Android can then handle natively. Now you get the idea.\nHere is the source code; I now leave it to you to extend it further. 
https://github.com/thinkuldeep/webxr-native\nTry to address the following things —\n What are the security implications of this approach? How to handle the offline scenario? Reload/sleep/cache scenarios. Build a 3D app on top of the webview activity, and build the app for, say, an Oculus device. It was originally published at XR Practices Medium Publication\n","id":16,"tag":"xr ; android ; webxr ; native app ; tutorial ; technology","title":"A new form of WebXR","type":"post","url":"https://thinkuldeep.com/post/a-new-form-of-webxr/"},{"content":"Covid19 has challenged the way we work; I penned down my earlier article to cover my workday and our shift toward new ways of working. Covid19 has impacted not just the way we work, but also our lives. All day, we keep hearing how hard it is hitting humanity, and it has brought so much negativity around us.\nEnough of negativity; let’s uncover the human potential to cope with it. In this article, I will cover how the ways of our regular life are shifting, uncovering new human potential to try out new things and perform our chores on our own. This shift is important for us, not just to fulfill the need, but also to keep our minds busy and active, and to stay positive toward life. The kind of human mind shift I am talking about here will also impact various related businesses, and they need to think through again to survive.\nShifting from… Let me start with the before-lockdown situation, where we were meeting and greeting physically. Starting from early Feb this year, we celebrated our 13th marriage anniversary, the 1st anniversary of cousins, and a few more family get-togethers.\nThe party was organized by my cousin Mukesh Verma, who works with Shri Arjun Ram Meghwal, Minister of State, Govt Of India. 
At that time, the schools were finishing up their annual days, and we joined those proud moments also.\nThen came the alarming situation of Covid19, putting the whole country into lockdown. We are still in lockdown; even though the govt has relaxed the lockdown restrictions, it looks like humanity will be in the lockdown of the Covid19 impact for years to come.\nStories of Uncovering Human Potential: I will cover stories of me and my family members who have uncovered unknown potentials in themselves or explored them more, and are still emerging.\nStory of Mukesh: Let me start with the story of the naughtiest child (then) of the family, my cousin Mukesh Ostwal, who recently got placed in the e-learning giant Byjus. He has some great people-connect skills and will do wonders at the job; nothing surprising there. He surprised us with his painting skills during the COVID situation. Wall painting, stone painting, paper painting: whatever comes, he can now paint; it looks like his brushes are unstoppable.\nI feel he is an addict of Snapchat, Instagram, and more, and loves photography. I wish him all the best for the future.\nStory of Mamta: The next story is about my sister Mamta, a teacher by profession at Kendriya Vidyalaya. She has surprised us with the different dishes she prepared at home, from Momos to Coconut Sweets to Panipuri. My brother-in-law Mr. Surender Kumar, who works in DMRC, is more available for the tasting and support (due to lockdown) and approved the dishes :). She is very good at painting, and this situation has given her a chance to explore more. Not just that, she has tried her tailoring skills and prepared some beautiful dresses for our angel Liesha (Kuku).\nWell done Mamta, keep it up! Eager to try the dishes.\nStory of Dr. Saroj Basniwal: This story is about my sister Dr. Saroj Basniwal, who is a medical officer and on active govt duty at this time. My brother-in-law Dr. Basant Basniwal runs a clinic and is also a great support at home. 
In addition to duty, they surprised us with all the different dishes and sweets they prepared at home, and the way the kids celebrated their anniversary, birthday, and other functions in-house.\nIt looks like they will not buy cakes, fries, chips, or sweets (like jalebis, gulab jamun, etc.) from outside in future.\nSaroj is also a great painter, and the same skills are showing in the kids. During this time, they became hairstylists for the kids, and also stitched and distributed masks.\nWell done doctors, keep serving the nation, stay safe.\nStory of Ritu: My story would be incomplete without the story of my wife Ritu; she has been a great support to me. She is an engineer by education and is comfortably assisting our kids with their schooling from home. Lakshya (8 years) needs less of her attention in video classes from school, but Lovyansh (3 years) has just started; she needs to do the handholding and is a teacher for all the subjects, dance, sports, art, and the overall development of the kids.\nShe prepares delicious and healthy food, but in this COVID time she has uncovered the potential by preparing many dishes and sweets for which we used to go out, such as Pizza, Pao Bhaji, Burger, Tikki, Dosa, Uttapam, Jalebi, Gulabjamun, and Rasgulla.\nI must say, after tasting these homemade sweets, I realized what pure desi ghee sweets are. Lakshya and Lovyansh have enjoyed them a lot.\nShe became a hairstylist for me, did a haircut, and gave me a totally different style; my son Lakshya is happy with my new hairstyle :)\nThanks, Ritu; it’s more work for you, but don’t worry, I am here to support you.\nStory of Me (Kuldeep): Let me cover my story now; you have already seen my work-from-home story. Now is the time to tell my non-working story. 
Being part of such a talented family motivates me to re-invent myself and keep trying new things.\nI used to be a painter in my college days and did wall paintings, stage backdrop painting, floor painting, and more, but could not keep up the habit since then. Last X-mas, I got a gift of a drawing board, paint, and brushes from my team to revive my habit, and I utilized the gift recently in this Covid19 time.\nWe had been waiting to get the sofa cover stitched for long, and you know what? I stitched the sofa cover; it was not as difficult as I was thinking. I also stitched a few washable masks for the family. I generally carry out most of the repair and service at home on my own, be it servicing a fan, chimney, bicycle, and so on.\nAnd like my sister, I became a hairstylist for my kids, and after getting feedback on the hairstyle, it looks like I will be the permanent hairstylist for the younger ones for long.\nIn this COVID time, I became a frequent writer, wrote around 10 articles, and implemented my own blogging site — Kuldeep’s Space (The world of learning, sharing and caring)\nStory of Kids: How can the kids stay behind? They follow the parents; it looks like we are bringing up a few great artists.\nStory of Family: There are other members of the family who have also uncovered and explored their potentials in this COVID time.\nMy cousin, Permila Verma, is serving the government as a teacher and is a great artist; her paintings are marvelous. Sunita Verma, my brother’s wife, has also joined hands with Permila. Sonu Ostwal, an engineer by education, and a great cook and artist, prepared a lovely cake for Mother’s Day.\nGourav Ostwal, our forensic expert, is supporting duty at the hospital and attending virtual international conferences on forensic science. My brother-in-law, Madan Lal Arya, has shown gratitude towards people and stitched and distributed masks.\nIn the above picture, you can also see my maternal grandfather, Shri. 
Bhagirath Verma, stitching masks.\nThis article would be incomplete if I didn\u0026rsquo;t include the story of my motivation: he is a retired Head Master, but still a teacher for us, teaching us lessons of life.\nI try to follow him closely, but there are many things only he can do at this age. Now you know the secret of my motivation level; whatever I do is nothing close to my grandfather.\nI spent my childhood with him and learned that there is always much more potential in us than we think.\n So always trying, always improving.\n With this note, I would like to complete the article and wish you all the best. Keep fighting, keep trying, keep improving; you can do much more than what you think…\nIt was originally published at Medium\n","id":17,"tag":"general ; selfhelp ; covid19 ; motivation ; life ; change ; motivational","title":"Uncovering the human potential","type":"post","url":"https://thinkuldeep.com/post/uncovering-the-human-potential/"},{"content":"There are a number of Enterprise XR use cases where we need to open existing 2D apps on XR devices in a wrapper app; this article covers a concept, have a look!\nAfter trying OS-level customization on an XR device, you may like to run your XR apps as a system app. 
Here is a simple article by Raju K\n Android System Apps development - Android Emulator Setup and guidelines to develop system applications\n Now, you may also want to open existing applications in an XR container, and that\u0026rsquo;s where the XR container app must be a system app; the above article will help you there. However, this article is about building the XR container itself, to open an existing Android Activity in an XR container and interact with it.\nThis article covers building an XR container app using Unity and ARCore, which renders an activity of another Android app in 3D space.\n3D Container Dev Environment Setup: Create a Unity Project: If you are new to Unity, then follow this simple article by Neelarghya\n Download and set up an XR project using ARCore — the Google guidelines.\n Import the ARCore SDK package into Unity.\n Remove the main camera and light, and add the prefabs “ARCore Device” and “Environment Light” available in the ARCore SDK.\n Create a Cube and scale it such that it looks like a panel. This is where we will see the Android activity.\n Create a material, add it to the Cube, and set the shader “Unlit/Texture”; you may choose some default image. Go to File \u0026gt; Build Settings\n Switch the build target to Android\n Change Player Settings \u0026gt; Other Settings — package name, minimum SDK version 21.\n Select ARCore Supported in XR Settings\n Check export and export the project. It will export the project as an Android project at a path, say EXPORT_PATH\n Create an Android Project: Open Android Studio and create an empty project. 
Create a module in it as a Library. Implement a 2D Wrapper: Copy the Unity-generated Activity: Go to Android Studio.\n Copy EXPORT_PATH/src/main/java/com/tk/threed/container/UnityPlayerActivity.java to AndroidLibrary/main/java/com/tk/activitywrapperlibrary/UnityPlayerActivity.java\n Copy EXPORT_PATH/libs/unity-classes.jar to Library/libs\n File \u0026gt; Sync Project and Gradle Files — this should fix all the errors in UnityPlayerActivity.\n Now let UnityPlayerActivity extend ActivityGroup instead of Activity. I know it is deprecated, and there are other ways in the latest Android API using fragments, etc., but let\u0026rsquo;s go with it until I try other ways. Don\u0026rsquo;t change anything else in this class.\npublic class UnityPlayerActivity extends ActivityGroup { protected UnityPlayer mUnityPlayer; // don\u0026#39;t change the name of this variable; referenced from native code // Setup activity layout @Override protected void onCreate(Bundle savedInstanceState) { requestWindowFeature(Window.FEATURE_NO_TITLE); super.onCreate(savedInstanceState); mUnityPlayer = new UnityPlayer(this); setContentView(mUnityPlayer); mUnityPlayer.requestFocus(); } ... } Implement an App Container: The app container loads the given activity as a view in a given ActivityGroup. It gets initialized with an instance of an ActivityGroup.\npublic class AppContainer { private static AppContainer instance; private ActivityGroup context; private AppContainer(ActivityGroup context){ this.context = context; } public static synchronized AppContainer createInstance(ActivityGroup context) { if(instance == null){ instance = new AppContainer(context); } return instance; } public static AppContainer getInstance() { return instance; } ... } Add a method to start a given activity via a LocalActivityManager and add it to the activity group. 
It runs in the UI thread and avoids certain exceptions (not listing them here).\npublic void addActivityView(final Class activityClass, final int viewportWidth, final int viewportHeight){ context.runOnUiThread(new Runnable() { @Override public void run() { Intent intent = new Intent(context, activityClass); LocalActivityManager mgr = context.getLocalActivityManager(); Window w = mgr.startActivity(\u0026#34;MyIntentID\u0026#34;, intent); activityView = w != null ? w.getDecorView() : null; if (activityView == null) { return; // nothing to show if the activity could not be started } activityView.requestFocus(); addActivityToLayout(context, viewportWidth, viewportHeight); } }); } Add the activity to the layout — this will add the activity at a lower layer so it is not visible in the foreground.\nprivate void addActivityToLayout(Activity context, int viewPortWidth, int viewPortHeight) { Rect rect = new Rect(0, 0, viewPortWidth, viewPortHeight); FrameLayout.LayoutParams layoutParams = new FrameLayout.LayoutParams(rect.width(), rect.height()); layoutParams.setMargins(rect.left, rect.top, 0, 0); activityView.setLayoutParams(layoutParams); activityView.measure(viewPortWidth, viewPortHeight); ViewGroup viewGroup = (ViewGroup) context.getWindow().getDecorView().getRootView(); viewGroup.setFocusable(true); viewGroup.setFocusableInTouchMode(true); initBitmapBuffer(viewPortWidth, viewPortHeight); viewGroup.addView(activityView, 0); //Add at lower layer } What? Are you serious? If this is not visible in the foreground, how would you see it? Well, we will render the bitmap of this activity view. 
Remember the screen we created in Unity? We will render this bitmap there; wait and watch!\n Let’s initialize a bitmap as follows\nprivate Bitmap bitmap; private Canvas canvas; private ByteBuffer buffer; private void initBitmapBuffer(int viewPortWidth, int viewPortHeight) { bitmap = Bitmap.createBitmap(viewPortWidth, viewPortHeight, Bitmap.Config.ARGB_8888); canvas = new Canvas(bitmap); buffer = ByteBuffer.allocate(bitmap.getByteCount()); } Now the buffer is ready to be consumed; let\u0026rsquo;s expose a consumption method.\n Render the texture — here comes the magic method which renders to a given native texture pointer using OpenGL (don’t ask me the details; some help from here). It draws the activity view on a canvas (which is backed by the bitmap), copies the bitmap’s pixels to the buffer, and then magically writes the raw pixel data against the native texture pointer.\npublic void renderTexture(final int texturePointer) { byte[] imageBuffer; imageBuffer = getOutputBuffer(); if (imageBuffer.length \u0026gt; 1) { Log.d(\u0026#34;d\u0026#34;, \u0026#34;render texture\u0026#34;); //activityView.requestFocus(); GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texturePointer); GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, activityView.getWidth(), activityView.getHeight(), GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, ByteBuffer.wrap(imageBuffer)); } } private byte[] getOutputBuffer() { if (canvas != null ) { try { canvas.drawColor(0, PorterDuff.Mode.CLEAR); activityView.draw(canvas); bitmap.copyPixelsToBuffer(buffer); buffer.rewind(); return buffer.array(); }catch (Exception e){ Log.w(\u0026#34;AppContainer\u0026#34;, \u0026#34;getOutputBuffer\u0026#34;, e); return new byte[0]; } } else { return new byte[0]; } } Using The App Container: Add just one line at the end of the UnityPlayerActivity.onCreate method to initialize the app container. No more changes here.\n@Override protected void onCreate(Bundle savedInstanceState) { ... 
AppContainer.createInstance(this); } Ok, enough of Java; let’s get back to Unity. A 3D screen is waiting to consume the texture.\nUsing the ActivityWrapper Library: Before we go, let\u0026rsquo;s build this library so that it can be included in the Unity project.\n Go to Build \u0026gt; Make activitywrapperlibrary\n Copy the library (AAR) from “activitywrapperlibrary/build/outputs/aar/activitywrapperlibrary-debug.aar” to “Unity Project/Assets/Plugins/Android”\n Add an Android manifest file, copying it from the earlier exported path EXPORT_PATH/src/main, and make the following changes.\n Change the name of the main Activity to the activity we defined as the wrapper.\n... \u0026lt;application tools:replace=\u0026#34;android:icon,android:theme,android:allowBackup\u0026#34; android:theme=\u0026#34;@style/UnityThemeSelector\u0026#34; ... \u0026lt;activity android:label=\u0026#34;@string/app_name\u0026#34; android:screenOrientation=\u0026#34;fullSensor\u0026#34; .... android:name=\u0026#34;com.tk.activitywrapperlibrary.UnityPlayerActivity\u0026#34; \u0026gt; \u0026lt;intent-filter\u0026gt; \u0026lt;action android:name=\u0026#34;android.intent.action.MAIN\u0026#34; /\u0026gt; \u0026lt;category android:name=\u0026#34;android.intent.category.LAUNCHER\u0026#34; /\u0026gt; \u0026lt;category android:name=\u0026#34;android.intent.category.LEANBACK_LAUNCHER\u0026#34; /\u0026gt; \u0026lt;/intent-filter\u0026gt; \u0026lt;meta-data android:name=\u0026#34;unityplayer.UnityActivity\u0026#34; android:value=\u0026#34;true\u0026#34; /\u0026gt; \u0026lt;/activity\u0026gt; \u0026lt;meta-data android:name=\u0026#34;unity.build-id\u0026#34; android:value=\u0026#34;d92db379-0eb8-4355-a322-0ba760976bb7\u0026#34; /\u0026gt; \u0026lt;meta-data android:name=\u0026#34;unity.splash-mode\u0026#34; android:value=\u0026#34;0\u0026#34; /\u0026gt; \u0026lt;meta-data android:name=\u0026#34;unity.splash-enable\u0026#34; android:value=\u0026#34;True\u0026#34; /\u0026gt; ... 
\u0026lt;/application\u0026gt; Now, this newly implemented activity will act as the Main Player Activity, and on the start of the Unity app, it will start the app container.\nBut when you try to run this, you will see multiple manifest merger conflicts; use “tools:replace” at the \u0026lt;application\u0026gt; as well as \u0026lt;activity\u0026gt; level wherever it is required. You can always see merge conflicts in the Android Manifest Viewer of Android Studio.\nOther issues will be related to unity-classes.jar, which somehow gets packaged in the generated library as well as supplied by Unity itself. You need only one copy.\nSomehow “provided” and “compileOnly” weren\u0026rsquo;t working for me while generating the library, so I did a quick workaround; please find some better solution for it. I had to exclude unity-classes.jar from the main Gradle file of the Unity project; here is a way to do that.\n Go to File \u0026gt; Build Settings \u0026gt; Player Settings \u0026gt; Publishing Settings \u0026gt; check Custom Gradle Template\n It should generate a file mainTemplate.gradle; edit the file and add an exclude in the dependencies section as follows\n... **APPLY_PLUGINS** dependencies { implementation fileTree(dir: \u0026#39;libs\u0026#39;, include: [\u0026#39;*.jar\u0026#39;], excludes: [\u0026#39;unity-classes.jar\u0026#39;]) **DEPS**} ... This should fix the build issues in Unity. Enough now; we haven\u0026rsquo;t yet built any 3D activity loader, let\u0026rsquo;s jump there.\n Implement XR Activity Loader: Let\u0026rsquo;s add more components to the scene: a label and a Load button in Unity. 
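One more note before building the loader: the bitmap pipeline described earlier draws the activity view into an ARGB_8888 bitmap and copies it into a byte buffer on every render call, which costs width × height × 4 bytes per frame. A small sketch of that budget (the class and method names here are mine, for illustration only; ARGB_8888 using 4 bytes per pixel is the documented Android behavior):

```java
public class BufferBudget {
    // ARGB_8888 stores 4 bytes per pixel, so this matches bitmap.getByteCount()
    // for a bitmap created with Bitmap.Config.ARGB_8888.
    static int bufferBytes(int width, int height) {
        return width * height * 4;
    }

    public static void main(String[] args) {
        // 1080x1920 is the default viewport used by the loader script.
        System.out.println(bufferBytes(1080, 1920)); // 8294400 bytes, roughly 8 MB per frame
    }
}
```

That per-frame copy is a big part of why the render coroutine later needs throttling or a smarter dirty-region approach.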
Render the Activity on a 3D Screen: Let\u0026rsquo;s create a script to render the activity on the screen.\n A script with the name of the activity class, and height and width as input\npublic class ActivityLoader : MonoBehaviour { public string activityClassName; public Button load; public Text response; public int width = 1080; public int height = 1920; private AndroidJavaObject m_AppContainer; private IntPtr m_NativeTexturePointer; private Renderer m_Renderer; private Texture2D m_ImageTexture2D; private void Start() { m_Renderer = GetComponent\u0026lt;Renderer\u0026gt;(); load.onClick.AddListener(OnLoad); m_ImageTexture2D = new Texture2D(width, height, TextureFormat.ARGB32, false) {filterMode = FilterMode.Point}; m_ImageTexture2D.Apply(); } ... } On clicking the load button, it should load the given activity class. By this time the AppContainer must be initialized; let\u0026rsquo;s call it to load the activity in a view.\nprivate void OnLoad() { if (activityClassName != null) { response.text = \u0026#34;Loading \u0026#34; + activityClassName; AndroidJavaClass activityClass = new AndroidJavaClass(activityClassName); AndroidJavaClass appContainerClass = new AndroidJavaClass(\u0026#34;com.tk.activitywrapperlibrary.AppContainer\u0026#34;); m_AppContainer = appContainerClass.CallStatic\u0026lt;AndroidJavaObject\u0026gt;(\u0026#34;getInstance\u0026#34;); m_AppContainer.Call(\u0026#34;addActivityView\u0026#34;, activityClass, width, height); m_Renderer.material.mainTexture = m_ImageTexture2D; Debug.LogWarning(\u0026#34;Rendering starts\u0026#34;); StartCoroutine(RenderTexture(m_ImageTexture2D.GetNativeTexturePtr())); } else { response.text = \u0026#34;No package found\u0026#34;; } } To load the texture, call the magic method now in a coroutine, and keep calling it. 
I know it\u0026rsquo;s not optimal, but let\u0026rsquo;s first make it work; then we will discuss multiple methods of optimization.\nprivate IEnumerator RenderTexture(IntPtr nativePointer) { yield return new WaitForEndOfFrame(); while (true) { m_AppContainer.Call(\u0026#34;renderTexture\u0026#34;, nativePointer.ToInt32()); yield return new WaitForEndOfFrame(); yield return new WaitForEndOfFrame(); yield return new WaitForEndOfFrame(); } } Add this script to the Screen GameObject, and assign the button and response fields.\n What should we give as the Activity Class Name? Oh! We haven\u0026rsquo;t yet created the app activity that we want to render here. Let\u0026rsquo;s do that first.\n The 2D App Let\u0026rsquo;s build a 2D app that we want to render in the Unity 3D container. Go back to Android Studio and choose Login Activity during project creation. It generates a full login app with input fields and so on.\nTry to run this app: just make it, deploy it to a phone or emulator, and see how it runs. Now, we could open this activity from another app if we build our container app as a system app (refer to the earlier article), or we can include this app in Unity as a library. 
Let\u0026rsquo;s go with this.\nBuild it as a Library To include this app in Unity, we need to build it as a library.\nFollow the steps.\n Open build.gradle and change apply plugin: ‘com.android.application’ to apply plugin: ‘com.android.library’\n Remove applicationId “com.tk.loginapp” from defaultConfig\n Update the Android Manifest as follows — remove the intent filters\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;utf-8\u0026#34;?\u0026gt; \u0026lt;manifest xmlns:android=\u0026#34;http://schemas.android.com/apk/res/android\u0026#34; package=\u0026#34;com.tk.loginapp\u0026#34;\u0026gt; \u0026lt;application android:allowBackup=\u0026#34;true\u0026#34; android:icon=\u0026#34;@mipmap/ic_launcher\u0026#34; android:label=\u0026#34;@string/app_name\u0026#34; android:roundIcon=\u0026#34;@mipmap/ic_launcher_round\u0026#34; android:supportsRtl=\u0026#34;true\u0026#34; android:theme=\u0026#34;@style/AppTheme\u0026#34;\u0026gt; \u0026lt;activity android:name=\u0026#34;.ui.login.LoginActivity\u0026#34; android:label=\u0026#34;@string/app_name\u0026#34;\u0026gt; \u0026lt;/activity\u0026gt; \u0026lt;/application\u0026gt; \u0026lt;/manifest\u0026gt; Click on Sync Now\n Go to Build \u0026gt; Make module ‘app’\n It will generate AAR in app/build/outputs/aar/app-debug.aar\n Using the 2D App library in Unity Follow the same steps as earlier, copy the AAR to Unity Project/Assets/Plugins/Android\n Open file mainTemplate.gradle, edit the file and copy all the dependencies entries from 2D apps Gradle file and paste in dependencies section as follows\n... 
**APPLY_PLUGINS** dependencies { implementation fileTree(dir: \u0026#39;libs\u0026#39;, include: [\u0026#39;*.jar\u0026#39;], excludes: [\u0026#39;unity-classes.jar\u0026#39;]) implementation \u0026#39;androidx.appcompat:appcompat:1.1.0\u0026#39; implementation \u0026#39;com.google.android.material:material:1.1.0\u0026#39; implementation \u0026#39;androidx.annotation:annotation:1.1.0\u0026#39; implementation \u0026#39;androidx.constraintlayout:constraintlayout:1.1.3\u0026#39; implementation \u0026#39;androidx.lifecycle:lifecycle-extensions:2.2.0\u0026#39; **DEPS**} ... Now, the 2D app is fully available in Unity.\n Let\u0026rsquo;s define the activity in /Assets/Plugins/Android/AndroidManifest.xml. It is better to redefine the activities we are going to render in 3D with suitable available themes.\n\u0026lt;activity tools:replace=\u0026#34;android:theme\u0026#34; android:name=\u0026#34;com.tk.loginapp.ui.login.LoginActivity\u0026#34; android:theme=\u0026#34;@style/Theme.AppCompat.Light\u0026#34;/\u0026gt; If you still run into InflateExceptions while running the app, then follow the article below by Raju K\n The InflateException of Death - AppCompat Library woes in Android\n Running a 2D Activity in XR Container So, the time has come to know the class name of the activity; here it is “com.tk.loginapp.ui.login.LoginActivity”.\n Go to Unity and give this name in the script of the Screen GameObject. Connect your Android phone via USB and go to File \u0026gt; Build and Run. Wait for the build to complete, and here you go. Wow, the login app’s activity is getting rendered in the 3D environment, but the texture is mirrored and the colors are not coming out as they were in 2D.\nI can see the cursor blinking in the text box, which means the texture is continuously getting rendered. See logcat; there is a continuous log of rendering. 
Great!\n Fixing Mirror Issue Let\u0026rsquo;s fix the mirroring issue. Luckily, Raju K has done good research here; see the article\n Unity Shader Graph — Part 1 - In one of the projects that I’m working on recently, there was a need to mirror the texture at runtime.\n Let\u0026rsquo;s create a simple Unlit shader that just does mirroring. Right-click and Create \u0026gt; Shader \u0026gt; Unlit Shader \u0026gt; give it a name and open it.\n The shader will open in the editor. You may directly modify the file (if you don’t want to follow the full steps); make the changes shown in bold.\nShader \u0026#34;Unlit/Mirror\u0026#34; { Properties { _MainTex (\u0026#34;Texture\u0026#34;, 2D) = \u0026#34;white\u0026#34; {} [Toggle(MIRROR_X)] _FlipX (\u0026#34;Mirror X\u0026#34;, Float) = 0 [Toggle(MIRROR_Y)] _FlipY (\u0026#34;Mirror Y\u0026#34;, Float) = 0 } SubShader { ... Pass { ... #pragma multi_compile_fog #pragma multi_compile ___ MIRROR_X #pragma multi_compile ___ MIRROR_Y ... v2f vert (appdata v) { v2f o; o.vertex = UnityObjectToClipPos(v.vertex); float2 untransformedUV = v.uv; #ifdef MIRROR_X untransformedUV.x = 1.0 - untransformedUV.x; #endif // MIRROR_X #ifdef MIRROR_Y untransformedUV.y = 1.0 - untransformedUV.y; #endif // MIRROR_Y o.uv = TRANSFORM_TEX(untransformedUV, _MainTex); UNITY_TRANSFER_FOG(o,o.vertex); return o; } } For the material of the Screen GameObject, change the shader to this newly created one, check Mirror Y, and set rotation to 180. XR Container in Action Run the app again, and here you go. Congratulations! You are able to run your 2D app in an XR container. You can go closer to it.\nThis is not complete yet; we have just rendered the texture. The next challenges are implementing all the interactions: gaze, touch, gestures, voice, keyboard inputs for UI controls, or working with controllers, etc. 
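Touch handling largely reduces to a coordinate transform from the hit point on the texture to the Android view's pixel space. The helper below is a hypothetical illustration, not code from this project: it assumes the collider reports a normalized UV in [0, 1] (as Unity's RaycastHit.textureCoord does) and that the shader mirrored the Y axis, so Y must be flipped back before an event is dispatched to the Android view.

```java
// Hypothetical helper: convert a normalized texture UV from a raycast hit
// into pixel coordinates inside the off-screen Android activity view.
public class TouchMapper {

    // u, v:     normalized hit coordinates in [0, 1] on the texture
    // width:    width of the activity view in pixels (e.g. 1080)
    // height:   height of the activity view in pixels (e.g. 1920)
    // mirrorY:  true if the shader flipped Y, so we flip it back here
    public static int[] uvToViewPixels(double u, double v,
                                       int width, int height, boolean mirrorY) {
        double y = mirrorY ? 1.0 - v : v;
        // Clamp so edge hits never fall outside the view bounds.
        int px = (int) Math.min(width - 1, Math.max(0, Math.round(u * (width - 1))));
        int py = (int) Math.min(height - 1, Math.max(0, Math.round(y * (height - 1))));
        return new int[]{px, py};
    }
}
```

On the Android side, the resulting (px, py) could then be fed into a synthesized MotionEvent (via MotionEvent.obtain with ACTION_DOWN/ACTION_UP) and dispatched to the root of the off-screen view hierarchy.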
However, interactions are also possible: we need to dispatch the events to the views that are rendered in the texture, and the x, y coordinates on the texture area can be fetched by adding a collider to the Screen GameObject. Capture the touch or gesture event and raycast to the object; the raycast hit coordinates can then be transformed to x, y coordinates on the screen, and a mouse click event dispatched at that position in the internal activity view. In similar ways we can implement drag, key presses, etc.\nConclusion Though we are able to run one activity of a 2D app in an XR app container, this approach has many flaws:\n ActivityGroup is deprecated and may soon be removed in future Android versions. Since the other activity runs in the background, there would be side effects on observers inside the activity. Different apps may need different customizations, and the approach is not scalable enough to meet all the requirements. Handling the full activity life-cycle is a challenge. We are controlling only one activity, and if that activity spawns a child activity then there would be issues. Building an AAR for an existing system app is challenging, and even if we build them, maintenance would be a nightmare. Invoking an already running activity from another app is not allowed for security reasons; we got over it by building the container as a system app, but still, not all permissions and access to shared storage would be possible. Multiple versions of Android would create conflicts and issues while inflating the layout into the texture. This approach is just a proof of concept; it may not be a production-ready approach.\nThere are other approaches you may try out, such as\n Customize the Window Manager of the OS, and place the 2D windows in spatial coordinates. Use VirtualDisplay and Presentation. Analyse the Oculus TV/VrShell app. Try these out and share your learning in this field, and do let me know if you have any other findings. 
Also let me know if this can be done in a better way.\nThis article was originally published on XR Practices Publication\n","id":18,"tag":"xr ; ar ; vr ; mr ; 3d ; enterprise ; android ; system ; customization ; Unity ; ARCore ; technology","title":"Run a 2D App in XR Container","type":"post","url":"https://thinkuldeep.com/post/run-a-2d-app-in-xr-container/"},{"content":"The operating system of XR (AR/VR) devices is mostly customized to the hardware of the device, and not just that: OS-level customizations are needed to meet enterprise needs such as integration with enterprise MDM (Mobile Device Management) solutions (such as Intune, MobileIron and AirWatch), a 3D lock screen, idle-timeout, a custom home screen, custom settings, security, and so on.\nMost organizations currently use traditional hardware and software assets such as mobile devices, laptops, and desktops, and the processes and policies for managing these devices are quite stable and straightforward by now. The expectation now is to follow the same or a similar process for managing the organization’s XR devices. Most enterprise MDMs support the standard OSes (Windows, Mac, Linux, and Android), but these are mostly not well aligned to a 3D user experience.\nThere are a number of XR devices based on Android, e.g. Oculus, Google Glass, Vuzix, and ARCore/VR-supported smartphones. This article is a first step toward building a custom Android OS with small customizations. We will use the Android Open Source Project (AOSP) to build a custom image as per our needs.\nTake a deep breath: this will be a lengthy task, and you will need to wait hours and hours for the steps to finish, so plan your time ahead.\nLet\u0026rsquo;s start …\nBefore you start Wait, before you start: it needs 230 GB plus free disk space, and a decent Linux-based environment. 
I was able to get a spare MacBook Pro with 230 GB of available space, and I thought of setting up the environment following the guide building-android-o-with-a-mac.\nBut later I found that after installing the basic build software and IDE, the available space was less than 220 GB. Luckily I was in touch with an Android expert, Ashish Pathak, who suggested not to take the risk and to find another PC.\nAnd here we go: I found a Windows 10 PC with 380 GB of free space.\nLet’s first set up a Linux environment from scratch. Remember, VirtualBox etc. are not an option here; let\u0026rsquo;s not complicate things.\nOk, let’s start now…\nSetup a Linux based Development Environment Follow the steps below to set up a Linux environment from scratch on a Windows PC.\n Free up disk space — Open Disk Management, shrink a disk or delete an unused drive, and leave the space unallocated; no need to format it, as the unallocated space will be formatted for the Linux OS during installation.\n Download the Ubuntu Desktop OS image — https://releases.ubuntu.com/16.04/\n Create a bootable image — Download Rufus from https://rufus.ie/, insert a USB pen drive (I used one of size 8 GB), select the downloaded Ubuntu ISO image, and start creating the bootable image. It will take some minutes; once you see “Ready”, eject the USB safely.\n Boot from the pen drive — On Windows, choose restart and press F9 while starting (that worked for me; check what your PC supports to reach the boot menu). It will show a boot menu; choose the pen drive option. This will start the Ubuntu installation. Go with the default options.\n Setup Ubuntu — During installation it will reboot; choose the Ubuntu option from the boot menu, then choose the timezone and language, and set up a user account (please remember the credentials for later login). You will see the Ubuntu home screen along with keyboard shortcuts. Remember these. Congratulations! 
it\u0026rsquo;s ready now.\nSetup Android Build Environment Follow the steps below to set up a build environment for AOSP. Detailed steps are here\n Install Git — https://git-scm.com/downloads — Linux version Install Repo Tool — https://gerrit.googlesource.com/git-repo/+/refs/heads/master/README.md $ sudo apt-get install repo Install Java 8 — $ sudo apt-get install openjdk-8-jdk Create a working directory arvr@arvr-pc:/$ cd ~ arvr@arvr-pc:~$ mkdir aosp arvr@arvr-pc:~$ cd aosp arvr@arvr-pc:~/aosp$ Download AOSP Source Code Initialize the AOSP repository source code (the default is master). I fetched the android-8.1.0_r18 branch to build for my old Android device; you may want to go for the latest. arvr@arvr-pc:~/aosp$ repo init -u https://android.googlesource.com/platform/manifest -b android-8.1.0_r18 --depth=1 You need to set the correct username and email in the git config to make the above succeed. git config --global user.name \u0026#34;FIRST_NAME LAST_NAME\u0026#34; git config --global user.email \u0026#34;MY_NAME@example.com\u0026#34; Sync the Repo arvr@arvr-pc:~/aosp$ repo sync -cdj16 --no-tags ... ... Checking out files: 100% (9633/9633), done. Checking out files: 100% (777/777), done.tform/system/coreChecking out files: 23% (181/777) Checking out files: 100% (34/34), done.latform/test/vts-testcase/performanceChecking out files: 32% (11/34) Checking out projects: 100% (592/592), done. repo sync has finished successfully. arvr@arvr-pc:~/aosp$ It takes a couple of hours to complete; I let it run overnight, with caffeine ON.\n In parallel, I also started downloading Android Studio, with the SDK, emulators, and so on. 
— Follow the steps\nGood night!\nBuild the source code Good morning, source code is downloaded, the Android Studio is also ready.\nLet’s do another big chunk, building the source code\n Set the environment\narvr@arvr-pc:~/aosp$ source build/envsetup.sh including device/asus/fugu/vendorsetup.sh including device/generic/car/vendorsetup.sh including device/generic/mini-emulator-arm64/vendorsetup.sh including device/generic/mini-emulator-armv7-a-neon/vendorsetup.sh including device/generic/mini-emulator-mips64/vendorsetup.sh including device/generic/mini-emulator-mips/vendorsetup.sh including device/generic/mini-emulator-x86_64/vendorsetup.sh including device/generic/mini-emulator-x86/vendorsetup.sh including device/generic/uml/vendorsetup.sh including device/google/dragon/vendorsetup.sh including device/google/marlin/vendorsetup.sh including device/google/muskie/vendorsetup.sh including device/google/taimen/vendorsetup.sh including device/huawei/angler/vendorsetup.sh including device/lge/bullhead/vendorsetup.sh including device/linaro/hikey/vendorsetup.sh including sdk/bash_completion/adb.bash Now lunch command is available for you, pass an x86 target to it, arm ones are quite slow. lunch command without any parameter lists the options, choose one of them.\narvr@arvr-pc:~/aosp$ lunch You\u0026#39;re building on Linux Lunch menu... pick a combo: 1. aosp_arm-eng 2. aosp_arm64-eng 3. aosp_mips-eng 4. aosp_mips64-eng 5. aosp_x86-eng 6. aosp_x86_64-eng 7. full_fugu-userdebug arvr@arvr-pc:~/aosp$ lunch 5 ============================================ PLATFORM_VERSION_CODENAME=REL PLATFORM_VERSION=8.1.0 TARGET_PRODUCT=aosp_x86 TARGET_BUILD_VARIANT=eng TARGET_BUILD_TYPE=release TARGET_PLATFORM_VERSION=OPM1 TARGET_BUILD_APPS= ... OUT_DIR=out AUX_OS_VARIANT_LIST= ============================================ Run make command — “make” or “m” with parallelization 16.\narvr@arvr-pc:~/aosp$ make -j16 ... REALLY setting name! Warning: The kernel is still using the old partition table. 
The new table will be used at the next reboot. The operation has completed successfully. #### build completed successfully (01:43:31 (hh:mm:ss)) #### This command took a good 2 hours, and your emulator is ready now. Good noon!\n Run the emulator\nNow that the emulator is ready, let\u0026rsquo;s run it.\narvr@arvr-pc:~/aosp$ emulator \u0026amp; [1] 11121 arvr@arvr-pc:~/aosp$ emulator: WARNING: system partition size adjusted to match image file (2562 MB \u0026gt; 200 MB) emulator: WARNING: cannot read adb public key file: /home/arvr/.android/adbkey.pub qemu-system-i386: -device goldfish_pstore,addr=0xff018000,size=0x10000,file=/home/arvr/aosp/out/target/product/generic_x86/data/misc/pstore/pstore.bin: Unable to open /home/arvr/aosp/out/target/product/generic_x86/data/misc/pstore/pstore.bin: No such file or directory Your emulator is out of date, please update by launching Android Studio: - Start Android Studio - Select menu \u0026#34;Tools \u0026gt; Android \u0026gt; SDK Manager\u0026#34; - Click \u0026#34;SDK Tools\u0026#34; tab - Check \u0026#34;Android Emulator\u0026#34; checkbox - Click \u0026#34;OK\u0026#34; Your fresh Android image is running. Congratulations! You should see it in the list of connected devices in ADB.\narvr@arvr-pc:~/aosp$ adb devices List of devices attached emulator-5554 device You can install applications on this emulator just like on any other.\n Customizing the Settings App Let\u0026rsquo;s assume we want to customize the Settings app for our XR needs. Let\u0026rsquo;s open Settings.\nOops! The Settings app is failing to load. 
Logcat I tried to look through the logs using logcat.\narvr@arvr-pc:~/aosp$ adb logcat 05–10 11:21:10.091 1640 1651 I ActivityManager: START u0 {act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10200000 cmp=com.android.settings/.Settings (has extras)} from uid 10015 05–10 11:21:10.092 1604 2569 W audio_hw_generic: Not supplying enough data to HAL, expected position 895044 , only wrote 894994 05–10 11:21:10.126 1383 1383 D gralloc_ranchu: gralloc_alloc: Creating ashmem region of size 1540096 05–10 11:21:10.130 2782 2782 W zygote : Unexpected CPU variant for X86 using defaults: x86 05–10 11:21:10.132 1640 1759 I ActivityManager: Start proc 2782:com.android.settings/1000 for activity com.android.settings/.Settings ... This issue may be related to the Android 8.1 branch that I checked out; I tried rebuilding with “m clean” and “m”, but no luck.\nGood evening!\nLuckily, I am not the only one facing this; I found this:\n Android 8 Settings app crashes on emulator with clean AOSP build stackoverflow.com\n Now I got two suggestions: change WifiDisplaySettings.java in the AOSP packages, or just do a clean build with the userdebug target. I will go with the first option.\nOpen the Settings App in Android Studio Locate the Settings app in /packages/apps/Settings and open it in Android Studio. Open the class /src/com/android/settings/wfd/WifiDisplaySettings.java and change the code snippet\npublic static boolean isAvailable(Context context) { return context.getSystemService(Context.DISPLAY_SERVICE) != null \u0026amp;\u0026amp; context.getSystemService(Context.WIFI_P2P_SERVICE) != null; } To\npublic static boolean isAvailable(Context context) { return false; } Let\u0026rsquo;s build again. Another few hours? No, don\u0026rsquo;t worry, this time it won’t take long; it will build just what you have changed. Run the make command again.\narvr@arvr-pc:~/aosp$ m -j16 The operation has completed successfully. 
[1]+ Done emulator (wd: ~/aosp) (wd now: ~/aosp/packages/apps/Settings/src/com/android/settings/wfd) #### build completed successfully (02:44 (mm:ss)) #### arvr@arvr-pc:~/aosp$ emulator \u0026amp; The image is now rebuilt with the changes; run the emulator again. And here you go, the Settings app is working! Customize the Settings App Hey! You know, you have already customized the Android image. Not excited enough? Let\u0026rsquo;s do one more exercise to customize the Settings menu options.\nOpen /res/values/strings.xml and change some menu titles, e.g. change the “Battery” title to “Kuldeep Power” and “Network \u0026amp; Internet” to “K Network \u0026amp; Internet”.\n\u0026lt;string name=\u0026#34;power_usage_summary_title\u0026#34;\u0026gt;Kuldeep Power\u0026lt;/string\u0026gt; \u0026lt;string name=\u0026#34;network_dashboard_title\u0026#34;\u0026gt;K Network \u0026amp;amp; Internet\u0026lt;/string\u0026gt; Build the image (remember the m command) and run the emulator as described in the last step; wait for a few minutes, and you have your customized Android menu. With this, your custom Settings app is ready and available in the Android image. Now I would like to close the article and wish you a good night.\nI will write about flashing this image to a device later; the spare phone for which I am building this image is not working now, and it looks like my son has experimented with that old phone.\nIn the future, we will also discuss how to customize native user management in Android-based XR devices, and how to handle device-specific options that are not supported on a regular phone.\nGood Night, keep learning! 
keep sharing!\nThis article was originally published on XR Practices Publication\n","id":19,"tag":"xr ; customization ; enterprise ; android ; aosp ; os ; technology","title":"OS Customization for XR Device","type":"post","url":"https://thinkuldeep.com/post/os-customization-for-xr-devices/"},{"content":"Change is the only constant, and it requires continuous effort from humankind to understand the change that leads to success and makes us even more successful. In my earlier article, I talked about the success mantra, and also wrote about accepting change when required. Sometimes the way we perceive our success becomes a blocker to becoming more successful. We accumulate some habits, knowingly or unknowingly, along the path to being successful, and mostly we never realize that those habits start blocking our way forward.\nToday, in the time of the Covid19 pandemic, even nature wants us to change the way we work, the way we eat, the way we use natural resources; in other words, the way we live. We will get over this pandemic with a lot of learning and change. We need to change for humankind and for ourselves; after all, what got you here won’t get you there.\nI penned down an article for successful people who want to become more successful: know the changes that are required to reach there. We might have achieved many goals and crossed milestones, but with all this we may still not reach there, as it requires constant monitoring of, and change in, ourselves.\nThe article is highly influenced by the book What Got You Here Won’t Get You There by Marshall Goldsmith. Marshall is one of the world’s preeminent executive coaches, and this book is his experience of coaching successful people to become more successful. I recommend this book to everyone who thinks they are successful, whether or not they think they need advice on becoming more successful. 
This book covers 20 habits that hold you back from the top; honestly, you may link many of them with your own habits.\nMy takeaways from the book are here -\nSuccess Myths The book starts nicely, directly hitting the critical point in the very first chapters with nice examples. Being successful people, we live in the delusion of success:\n We believe too much in the skills that made us successful. This gives us confidence that we can succeed in the same way again, which adds another myth: that we will succeed in the future. Why not us? With all this, we are committed to our goals and beliefs; in fact, we are obsessed with the goal. Successful people are the ones who want to become more successful, but the change required to become more successful is hard to find; it is hard for us to believe why we should change.\nAccumulated hold-back habits Successful people with the strong success myths described in the last section generally accumulate some of the following habits that hold them here, and don’t let them get there. The higher we go, the more prone we are to behavioral issues.\nSome of these hold-back habits are -\nToo much me We successful people accumulate some habits where we put too much value on being “me”. With this, winning becomes necessary for us in all situations, even when it is not important.\n We argue too much, going to the level of putting down other people, or sometimes we withhold important information to stay important and disclose it only when it is time for the limelight. We also try to add unnecessary value to others’ ideas, to remain in the light. When we listen to others’ ideas, even if they are of great importance, rather than saying “Thank you” to the person we may suggest something of no value. With such a state of mind it is hard to value others’ ideas, and the consequence might be that other people stop giving their opinions, which reinforces this habit. We gain the habit of expressing our opinion, no matter how hurtful or unhelpful it is, just to exercise our right of “too much me”. 
It blocks our way We can’t be masters of everything, but with the habit of “too much me” we keep telling the world how smart we are, and this can sometimes lead to embarrassment when we argue with people who actually have that expertise; we also tend not to value others’ expertise.\nDefend and Oppose With this hold-back habit, we tend to defend what we say and oppose each and every thing that others say. We tend to use words like “if”, “No”, “But” and “However” a lot in our conversations.\nWhen someone comes with an idea, our first reaction is “Let me tell you why it won’t work”. No matter what the idea is, our mind first starts finding negatives in it.\nSuccess Bias We are so biased by our success that we feel we can make fun of anyone, pass judgment, or even make destructive comments, and that these will be ignored out of respect for our success. It is not true. The damage done by our words is irreversible, no matter how hard we apologize later.\nSome have a bias for certain people, and they play favorites.\nFailing to recognize others, or eating up their credit Some successful people have the habit of not recognizing others well and not encouraging them for their work. Even worse, some leaders eat up the credit of others, and use their ideas as their own.\nGeneral interpersonal hold-back issues Always making excuses, and blaming others for the issues Failing to express regret or gratitude when it is needed most Speaking more and listening less Talking in anger Not being ready to listen to bad news Reacting more and responding less Know your hold-back habits The only way to know about the habits that hold you back is feedback. ThoughtWorks has a great culture of feedback: everyone gives feedback frequently, and it is advised to get feedback from all the diverse roles, and across teams.\nThe ‘self’ we think we are, and the ‘self’ the rest of the world sees in us, are mostly different. 
The only way to fill the gap is by getting the right feedback.\nSome organizations also have 360-degree feedback and encourage people to have one-on-one feedback with peers. We should know the art of taking feedback, and share a vision and agenda when asking for feedback. Consume the feedback peacefully. This is just one way of getting feedback; there are other ways to get feedback from others.\n Observe others: if we find some problem with them, try to see that problem in ourselves. It’s hard, but be honest, and we will get immediate feedback without really talking to anyone. Observe others, and see how they react when we talk to them or talk in general in a meeting. Note down their facial expressions, physical movements, signs of being ignored, and so on. Note down all of these, and you will see some patterns. The Change Begins The right time to start the change is now: identify the right thing you want to change and start it now. Make a plan, and tell the world that you are improving; yes, in fact, publicize to the world that you are changing, and ask for its support. Remember, and convey the message, that you can’t change the past but that from today you are trying to improve. Leave the past behind and move forward.\nGenerally, change begins with apologizing; the author calls it a magic move. Please mean it when you apologize to someone.\nImprove your listening skills. My friend Gary Butters says -\n You have two ears and one mouth\u0026hellip; use them in that proportion\u0026hellip;\nand in that order!\n If your statement is not adding any value, then don’t say anything other than ‘thank you’. And that is the next key thing to do: start thanking people in your life, and have gratitude towards them.\nAnd so on: identify the change that you want to make, and be clear why you want to make that change, e.g. complete the statement\n if you get rid of this holdback habit then ……………\n Measure the change The change becomes easy when we can measure it; if we can measure it, then we can achieve it. 
and the best technique to measure is the follow-up. Once we declare that we are changing, just set up follow-ups with your peers and the people who signed up to support you in your journey of change. Ask them how much we have changed, what they expect next, etc.\nYou are here; define your there, and let the journey begin.\nConclusion Let the change continue till the last breath; keep improving every day for humankind and yourself, and be a better you.\nIt was originally published at Medium\n","id":20,"tag":"general ; selfhelp ; success ; covid19 ; book-review ; takeaways ; change ; motivational","title":"Change to become more successful!","type":"post","url":"https://thinkuldeep.com/post/change_to_become_more_successful_/"},{"content":"In the previous article, we described the importance of interoperability while building enterprise XR solutions. In this article, we will discuss how to manage user accounts on an XR device, and implement single sign-on across XR apps. We will use Android’s AccountManager approach to log in to Active Directory, and integrate it into Unity. Please read my earlier article on setting up the app in Active Directory and logging in using an Android webview, considering that a web-based company login form should appear for login, and no custom login form is allowed to capture the credentials.\nImplement Android Library with custom AccountAuthenticator We will continue from the last section of the previous article, which has a WebViewActivity that launches a webview for login.\nWebview Login Activity Let’s add a method to launch a webview activity.\npublic class WebViewActivity extends AppCompatActivity { ... 
private WebView webView; public static void launchActivity(Activity activity, String url, String redirectURI, IResponseCallBack callBack){ Intent intent = new Intent(activity, WebViewActivity.class); intent.putExtra(INTENT_INPUT_INTERCEPT_SCHEME, redirectURI); intent.putExtra(INTENT_INPUT_URL, url); intent.putExtra(INTENT_RESPONSE_CALLBACK, new CustomReceiver(callBack)); activity.startActivity(intent); } private static class CustomReceiver extends ResultReceiver{ ... @Override protected void onReceiveResult(int resultCode, Bundle resultData) { String code = resultData.getString(INTENT_RESULT_PARAM_NAME); callBack.OnSuccess(code); } } Create binding service for custom Account Type Create a custom authenticator, extending AbstractAccountAuthenticator, implement all abstract methods as follows. public class AccountAuthenticator extends AbstractAccountAuthenticator { public AccountAuthenticator(Context context){ super(context); } @Override public Bundle addAccount(AccountAuthenticatorResponse accountAuthenticatorResponse, String s, String s1, String[] strings, Bundle bundle) throws NetworkErrorException { // do all the validations, call AccountAuthenticatorResponse for //any failures. return bundle; } ... @Override public Bundle getAuthToken(AccountAuthenticatorResponse accountAuthenticatorResponse, Account account, String s, Bundle bundle) throws NetworkErrorException { return bundle; } ... 
} Create a service to bind the custom authenticator public class MyAuthenticatorService extends Service { @Override public IBinder onBind(Intent intent) { AccountAuthenticator authenticator = new AccountAuthenticator(this); return authenticator.getIBinder(); } } Add the service in Android Manifest in the application tag \u0026lt;service android:name=\u0026#34;package.MyAuthenticatorService\u0026#34;\u0026gt; \u0026lt;intent-filter\u0026gt; \u0026lt;action android:name=\u0026#34;android.accounts.AccountAuthenticator\u0026#34; /\u0026gt; \u0026lt;/intent-filter\u0026gt; \u0026lt;meta-data android:name=\u0026#34;android.accounts.AccountAuthenticator\u0026#34; android:resource=\u0026#34;@xml/authenticator\u0026#34; /\u0026gt; \u0026lt;/service\u0026gt; It refers to an XML file that defines the type of account the authenticator service will bind. \u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;utf-8\u0026#34;?\u0026gt; \u0026lt;account-authenticator xmlns:android=\u0026#34;http://schemas.android.com/apk/res/android\u0026#34; android:accountType=\u0026#34;com.tw.user\u0026#34; android:label=\u0026#34;MyAccount\u0026#34; android:icon=\u0026#34;@drawable/ic_launcher\u0026#34; android:smallIcon=\u0026#34;@drawable/ic_launcher\u0026#34; /\u0026gt; This adds support for account type “com.tw.user”. The app which includes this library will intercept the request for this type.\nManage Accounts of Custom Type Let\u0026rsquo;s add class to manage user accounts.\nNow let\u0026rsquo;s add methods to this class, to manage account\n Add User Account — This method adds an account to android AccountManager. Since we are trying to add an account of type “com.tw.user”, binder service will call our AccountAuthenticator.addAccount. We may do data validation there, however, we are currently returning the same bundle there. After this, it will call AccountManagerCallback which we have provided in the addAccount call below. 
public static void addAccount(Context context, String bundleData, final IResponseCallBack callback) { final AccountManager accountManager = AccountManager.get(context); Bundle bundle = new Bundle(); try { JSONObject jsonObject = new JSONObject(bundleData); bundle.putString(KEY_ACCOUNT_TYPE, ACCOUNT_TYPE); bundle.putString(KEY_ACCOUNT_NAME, \u0026#34;MyAccount\u0026#34;); ... //prepare bundle accountManager.addAccount(ACCOUNT_TYPE, authTokenType, null, bundle, (Activity) context, new AccountManagerCallback\u0026lt;Bundle\u0026gt;() { @Override public void run(AccountManagerFuture\u0026lt;Bundle\u0026gt; future) { Bundle udata = null; try { udata = future != null ? future.getResult() : null; if (udata != null) { String accountName = udata.getString(KEY_ACCOUNT_NAME); String accountType = udata.getString(KEY_ACCOUNT_TYPE); Account account = new Account(accountName, accountType); boolean result = accountManager.addAccountExplicitly( account, null, udata); if(result){ callback.OnSuccess(\u0026#34;Account Added Successfully\u0026#34;); } else { callback.OnFailure(\u0026#34;Account cannot be added\u0026#34;); } } else { callback.OnFailure(\u0026#34;Error! No user added\u0026#34;); } } catch (Exception e) { callback.OnFailure(\u0026#34;Error! \u0026#34; + e.getMessage()); } } }, new Handler(Looper.getMainLooper())); } catch (Exception e) { callback.OnFailure(\u0026#34;Error! \u0026#34; + e.getMessage()); } }\n Remove User Account — This method removes an account from the Android AccountManager. public static void removeAccount(Context context, final IResponseCallBack callback) { final AccountManager accountManager = AccountManager.get(context); Account[] accounts = accountManager.getAccountsByType(ACCOUNT_TYPE); if(accounts.length \u0026gt; 0) { Account account = accounts[0]; try { boolean result = accountManager.removeAccountExplicitly( account); if(result){ callback.OnSuccess(\u0026#34;Account removed successfully!\u0026#34;); } else { callback.OnFailure(\u0026#34;Error! Could not remove account\u0026#34;); } } catch (Exception e) { callback.OnFailure(\u0026#34;Error! \u0026#34; + e.getMessage()); } } else { callback.OnFailure(\u0026#34;Error! No account found\u0026#34;); } } Get Logged-In User Account — This method fetches the logged-in user’s data. public static void getLoggedInUser(Context context, final IResponseCallBack responseCallBack) { final AccountManager accountManager = AccountManager.get(context); Account[] accounts = accountManager.getAccountsByType(ACCOUNT_TYPE); try { if(accounts.length \u0026gt; 0) { Account account = accounts[0]; final JSONObject jsonObject = new JSONObject(); jsonObject.put(KEY_USER_NAME, accountManager.getUserData(account, KEY_USER_NAME)); ... //prepare the response responseCallBack.OnSuccess(jsonObject.toString()); } else { responseCallBack.OnFailure(\u0026#34;Error! No logged-in user of the type \u0026#34; + ACCOUNT_TYPE); } } catch (Exception e) { responseCallBack.OnFailure(\u0026#34;Error! \u0026#34; + e); } } Source code for the Android library can be found at https://github.com/thinkuldeep/xr-user-authentication\nUnity Authenticator App Let’s customize the Unity app that we created in the last article as follows: build the Android library and include it in the Unity project.\nAdd methods to ButtonController.cs\nLogin — Launch WebView for Login On clicking the Login button, the following logic opens the WebViewActivity implemented in the Android library and loads the provided URL.\nprivate void OnLoginClicked() { AndroidJavaClass WebViewActivity = new AndroidJavaClass(\u0026#34;package.WebViewActivity\u0026#34;); _statusLabel = \u0026#34;\u0026#34;; AndroidJavaObject context = getUnityContext(); WebViewActivity.CallStatic(\u0026#34;launchActivity\u0026#34;, context, authUri, redirectUri, new LoginResponseCallback( this)); } Make sure the redirectURI is correct. 
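The exchange that follows is a standard OAuth 2.0 authorization-code grant. As a rough, framework-free sketch of what will be sent to the token endpoint, here is how the x-www-form-urlencoded request body can be assembled in plain Java (the sample values here are hypothetical, not from this project; in Unity, WWWForm does this encoding for us):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class TokenRequestBody {
    // Builds the x-www-form-urlencoded body for an OAuth 2.0
    // authorization_code grant; values are percent-encoded.
    static String build(String code, String clientId, String redirectUri) {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("code", code);
        fields.put("client_id", clientId);
        fields.put("grant_type", "authorization_code");
        fields.put("redirect_uri", redirectUri);
        StringBuilder body = new StringBuilder();
        for (Map.Entry<String, String> e : fields.entrySet()) {
            if (body.length() > 0) body.append('&');
            body.append(e.getKey()).append('=')
                .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
        }
        return body.toString();
    }

    public static void main(String[] args) {
        // Hypothetical sample values, for illustration only.
        System.out.println(build("abc123", "my-client", "myapp://auth"));
    }
}
```

This sketch only illustrates the wire format of the request; the actual endpoint, client_id, and redirectUri come from your identity provider's app registration.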
The WebView will intercept this redirectURI and return a code, which we use for getting the access_token.\nprivate class LoginResponseCallback : AndroidJavaProxy { private ButtonController _controller; public LoginResponseCallback(ButtonController controller) : base(\u0026#34;package.IResponseCallBack\u0026#34;) { _controller = controller; } public void OnSuccess(string result) { _statusLabel = \u0026#34;Code received\u0026#34;; _statusColor = Color.black; _controller.startCoroutineGetToken(result); } public void OnFailure(string errorMessage) { _statusLabel = errorMessage; _statusColor = Color.red; } } Login — Fetch the Access Token Once we receive the authorization code, we need to fetch the access_token from the HTTP token URL.\nvoid startCoroutineGetToken(string code) { StartCoroutine(getToken(code)); } IEnumerator getToken(string code) { WWWForm form = new WWWForm(); form.headers[\u0026#34;Content-Type\u0026#34;] = \u0026#34;application/x-www-form-urlencoded\u0026#34;; form.AddField(\u0026#34;code\u0026#34;, code); form.AddField(\u0026#34;client_id\u0026#34;, clientId); form.AddField(\u0026#34;grant_type\u0026#34;, \u0026#34;authorization_code\u0026#34;); form.AddField(\u0026#34;redirect_uri\u0026#34;, redirectUri); var tokenRequest = UnityWebRequest.Post(tokenUri, form); yield return tokenRequest.SendWebRequest(); if (tokenRequest.isNetworkError || tokenRequest.isHttpError) { _statusLabel = \u0026#34;Login failed - \u0026#34; + tokenRequest.error; _statusColor = Color.red; } else { var response = JsonUtility.FromJson\u0026lt;TokenResponse\u0026gt;(tokenRequest.downloadHandler.text); _statusLabel = \u0026#34;Login successful! 
\u0026#34;; _statusColor = Color.black; TokenUserData userData = decodeToken(response.access_token); AddAccount(userData, response); } } Login — Add Account to Android Accounts Once we get the access_token, we can decode the access_token to get some user details from it.\nprivate class TokenUserData { public string name; public string given_name; public string family_name; public string subscriptionId; } private TokenUserData decodeToken(String token) { AndroidJavaClass Base64 = new AndroidJavaClass(\u0026#34;java.util.Base64\u0026#34;); AndroidJavaObject decoder = Base64.CallStatic\u0026lt;AndroidJavaObject\u0026gt;(\u0026#34;getUrlDecoder\u0026#34;); var splitString = token.Split(\u0026#39;.\u0026#39;); var base64EncodedBody = splitString[1]; string body = System.Text.Encoding.UTF8.GetString(decoder.Call\u0026lt;byte[]\u0026gt;(\u0026#34;decode\u0026#34;, base64EncodedBody)); return JsonUtility.FromJson\u0026lt;TokenUserData\u0026gt;(body); } Once we have all the details for the account, we add the account to Android accounts using the library APIs.\nprivate void AddAccount(TokenUserData userData, TokenResponse tokenResponse) { AndroidJavaClass AccountManager = new AndroidJavaClass(\u0026#34;package.UserAccountManager\u0026#34;); UserAccountData accountData = new UserAccountData(); accountData.userName = userData.given_name + \u0026#34; \u0026#34; + userData.family_name; accountData.subscriptionId = userData.subscriptionId; accountData.tokenType = tokenResponse.token_type; accountData.authToken = tokenResponse.access_token; AccountManager.CallStatic(\u0026#34;addAccount\u0026#34;, getUnityContext(), JsonUtility.ToJson(accountData), new AddAccountResponseCallback( this)); } Get Logged-in User On Get LoggedIn User, call the respective method from the Android library.\nprivate void OnGetLoggedUserClicked() { AndroidJavaClass AccountManager = new AndroidJavaClass(\u0026#34;package.UserAccountManager\u0026#34;); 
AccountManager.CallStatic(\u0026#34;getLoggedInUser\u0026#34;, getUnityContext(), new LoginResponseCallback(false, this)); } Logout — Remove Account from Android Accounts On logout, remove the account.\nprivate void OnLogoutClicked() { AndroidJavaClass AccountManager = new AndroidJavaClass(\u0026#34;package.UserAccountManager\u0026#34;); AccountManager.CallStatic(\u0026#34;removeAccount\u0026#34;, getUnityContext(), new LogoutResponseCallback()); } Run and test Let’s now test all this. Export the Unity project and run it on the device. Try the login, get logged-in user, and logout APIs. In the next section, we will try to access the logged-in user outside of this authenticator app.\nSharing Account with other XR apps Once you have logged into the XR Authenticator App and added an account to Android accounts, it can be accessed in any other app using Android APIs. Let\u0026rsquo;s create another Unity project on similar lines: the AccountViewer XR App.\nThe cube is supposed to rotate only if a user is logged in to the Android system, and the app should show the logged-in user details.\nGet User Account from Android Accounts It does not require any native plugin to be included. 
You can get the shared account directly via the Android APIs - accountManager.getAccountsByType(“com.tw.user”)\nprivate UserAccountData getLoggedInAccount() { UserAccountData userAccountData = null; var AccountManager = new AndroidJavaClass(\u0026#34;android.accounts.AccountManager\u0026#34;); var accountManager = AccountManager.CallStatic\u0026lt;AndroidJavaObject\u0026gt; (\u0026#34;get\u0026#34;, getUnityContext()); var accounts = accountManager.Call\u0026lt;AndroidJavaObject\u0026gt; (\u0026#34;getAccountsByType\u0026#34;, \u0026#34;com.tw.user\u0026#34;); var accountArray = AndroidJNIHelper.ConvertFromJNIArray\u0026lt;AndroidJavaObject[]\u0026gt; (accounts.GetRawObject()); if (accountArray.Length \u0026gt; 0) { userAccountData = new UserAccountData(); userAccountData.userName = accountManager.Call\u0026lt;string\u0026gt; (\u0026#34;getUserData\u0026#34;, accountArray[0], \u0026#34;userName\u0026#34;); ... userAccountData.tokenType = accountManager.Call\u0026lt;string\u0026gt; (\u0026#34;getUserData\u0026#34;, accountArray[0], \u0026#34;tokenType\u0026#34;); userAccountData.authToken = accountManager.Call\u0026lt;string\u0026gt; 
(\u0026#34;getUserData\u0026#34;, accountArray[0], \u0026#34;authToken\u0026#34;); } return userAccountData; } Get profile using the logged-in user’s token Read the token and get the profile data from the HTTP URL https://graph.microsoft.com/beta/me/profile/names with the JWT token.\nIEnumerator getProfile(UserAccountData data) { UnityWebRequest request = UnityWebRequest.Get(getProfileUri); request.SetRequestHeader(\u0026#34;Content-Type\u0026#34;, \u0026#34;application/json\u0026#34;); request.SetRequestHeader(\u0026#34;Authorization\u0026#34;, data.tokenType + \u0026#34; \u0026#34; + data.authToken); yield return request.SendWebRequest(); if (request.isNetworkError || request.isHttpError) { _labels[indexGetProfile] = \u0026#34;Failed to get profile: \u0026#34; + request.error; _colors[indexGetProfile] = Color.red; } else { var userProfile = JsonUtility.FromJson\u0026lt;UserProfileData\u0026gt;(request.downloadHandler.text); _labels[indexGetProfile] = \u0026#34;DisplayName: \u0026#34; + userProfile.displayName + \u0026#34;\\nMail: \u0026#34; + userProfile.mail + \u0026#34;\\nDepartment: \u0026#34; + userProfile.department + \u0026#34;\\nCreated Date: \u0026#34; + userProfile.createdDateTime; _colors[indexGetProfile] = Color.black; } } Check if a user is logged in On every application focus, check if a user exists in Android accounts. 
If the user exists, then load the profile data.\nprivate void OnApplicationFocus(bool hasFocus) { if (hasFocus) { CheckIfUserIsLoggedIn(); } } private void CheckIfUserIsLoggedIn() { UserAccountData data = getLoggedInAccount(); if (data == null) { _isUserLoggedIn = false; _labels[indexWelcome] = \u0026#34;Welcome Guest!\u0026#34;; _labels[indexGetProfile] = \u0026#34;\u0026#34;; } else { _isUserLoggedIn = true; _labels[indexWelcome] = \u0026#34;Welcome \u0026#34; + data.userName + \u0026#34;!\u0026#34;; StartCoroutine(getProfile(data)); } } In Update, make sure you rotate the cube only if a user is logged in.\nprivate void Update() { if (_isUserLoggedIn) { cube.transform.Rotate(Time.deltaTime * 2 * _rotateAmount); } requestLoginButton.gameObject.SetActive(!_isUserLoggedIn); welcomeText.text = _labels[indexWelcome]; getProfileResponse.text = _labels[indexGetProfile]; getProfileResponse.color = _colors[indexGetProfile]; } Run and test Log in on the Account Authenticator XR App, as in the last step. Open the Account Viewer XR App, and you should see the cube rotating and the profile details of the logged-in user.\nRequest Login from external XR app Let’s implement the request login from the external Unity app.\nAdd Request Login Add a button for request login in the Account Viewer app. It will find an intent for the Account Authenticator app running in package com.tw.threed.authenticator. 
If the intent is available, it means the XR authenticator app is installed on the device, and it will start the intent with a string parameter external_request.\nprivate void OnRequestLoginClicked() { AndroidJavaObject currentActivity = getUnityContext(); AndroidJavaObject pm = currentActivity.Call\u0026lt;AndroidJavaObject\u0026gt;(\u0026#34;getPackageManager\u0026#34;); AndroidJavaObject intent = pm.Call\u0026lt;AndroidJavaObject\u0026gt;(\u0026#34;getLaunchIntentForPackage\u0026#34;, \u0026#34;com.tw.threed.authenticator\u0026#34;); if (intent != null) { intent.Call\u0026lt;AndroidJavaObject\u0026gt;(\u0026#34;putExtra\u0026#34;, \u0026#34;external_request\u0026#34;, \u0026#34;true\u0026#34;); currentActivity.Call(\u0026#34;startActivity\u0026#34;, intent); labels[requestlogin] = \u0026#34;Send Intent\u0026#34;; colors[requestlogin] = Color.black; } else { labels[requestlogin] = \u0026#34;No Intent found\u0026#34;; colors[requestlogin] = Color.red; } } Handle external login request in the Account Authenticator App There are multiple ways to receive an externally initiated intent in Unity, but most of them need a native plugin to extend the UnityPlayerActivity class. Here is another way: handle the focus event of any GameObject, access the current activity, and check if the intent has the external_request parameter. 
Call the OnLoginClicked action if the parameter is found.\nprivate bool inProcessExternalLogin = false; private void OnApplicationFocus(bool hasFocus) { if (hasFocus) { AndroidJavaObject ca = getUnityContext(); AndroidJavaObject intent = ca.Call\u0026lt;AndroidJavaObject\u0026gt;(\u0026#34;getIntent\u0026#34;); String text = intent.Call\u0026lt;String\u0026gt;(\u0026#34;getStringExtra\u0026#34;, \u0026#34;external_request\u0026#34;); if (text != null \u0026amp;\u0026amp; !inProcessExternalLogin) { _statusLabel = \u0026#34;Login action received externally\u0026#34;; _statusColor = Color.black; inProcessExternalLogin = true; OnLoginClicked(); } } } We need to quit the app after successfully adding the account, so it can go into the background.\nprivate class AddAccountResponseCallback : AndroidJavaProxy { ... void OnSuccess(string result) { _statusLabel = result; _statusColor = Color.black; if (_controller.inProcessExternalLogin) { Application.Quit(); } } ... } Run and test Open the Account Viewer app and request login (the button is visible only if the user is not logged in). It will open the Account Authenticator XR app and invoke the web login. Once you log in, it will add a user account to Android accounts, and close the Account Authenticator XR app. On returning to the Account Viewer app, you will see the cube rotating and the user profile details. Now go back to the Account Authenticator app, and log out (i.e., remove the account). On the Account Viewer app, the cube is no longer rotating, and no user profile data is shown. This completes the tests. You can find the complete source code here:\n Android Library: https://github.com/thinkuldeep/xr-user-authentication Account Authenticator App: https://github.com/thinkuldeep/3DUserAuthenticator Account Viewer App: https://github.com/thinkuldeep/3DAccountViewer Conclusion In this article, we have learned how to do single sign-on on an Android-based XR device, and share the user context with other XR applications. 
Single sign-on is one of the key requirements for enterprise-ready XR devices and applications.\nIt was originally published at XR Practices Medium Publication\n","id":21,"tag":"xr ; ar ; vr ; mr ; enterprise ; android ; unity ; tutorial ; technology","title":"User Account Management In XR","type":"post","url":"https://thinkuldeep.com/post/user-account-management-in-xr/"},{"content":"The web has grown from a read-only web (Web 1.0) to an interactive web (Web 2.0), and has then moved into the semantic web, which is more connected and more immersive. 3D content on the web has been there for quite some time; most web browsers support rendering 3D content natively. The rise of XR technologies and advancements in devices and the internet are driving the demand for web-based XR. It will be a norm for web-based development.\nWe are going through the Covid-19 time, and most of us are working from home. These days, one message is quite popular on social media, asking to keep your kids engaged, and let them bring Google 3D animals into your bedroom\nHowever, this feature has been there for some time now. Google search pages have a provision to show 3D models in your space; you need a compatible phone to make it work.\nThis is WebXR: experiencing XR directly on a website without any need for special apps or downloads. Generally, XR application development needs a good amount of effort and cost on platforms such as Unity 3D, Vuforia, Wikitude. The content is delivered in the form of an app for XR-enabled smartphones or head-mounted devices such as Hololens, Oculus, NReal, or similar. With WebXR technology, we can build applications directly with our favorite web technologies (HTML, JavaScript) and deliver the content on our website.\nA few WebXR libraries are popular these days, such as babylon.js, three.js, and A-Frame. WebVR is quite stable, but WebXR is experimental at this stage in most of the libraries. 
In addition to Google and Mozilla, organizations like 8th Wall and XR+ are also putting great effort into making WebXR stable. At ThoughtWorks, we have built some cool projects for our clients utilizing available WebXR libraries.\nIn this post, we will say hello to WebXR, and also take you through classic XR development in Unity 3D. The first part of the article is for people who are new to Unity 3D, so they can say hello to the Unity 3D platform; then we will go to the WebXR world, and understand how we can correlate both worlds.\nNote — We will discuss the WebAR part for Android here; you may relate it to WebVR and other devices, as the concept is more or less the same. Follow the respective libraries\u0026rsquo; documentation if you are more interested in the WebVR part or in other devices.\nHello XR World! If you haven’t yet gone through Unity 3D, please read the cool article below by my colleague Neelarghya Mondal -\n Well let me tell you this won’t be the end all be all… This is just the beginning of a series of Journal log styled… Let’s get started with Unity…\n Then set up the first XR project using ARCore following the Google guidelines, and import the ARCore SDK package into Unity.\n Remove the main camera and light, and add the prefabs “ARCore Device” and “Environment Light” available in the ARCore SDK. Create a Cube prefab with a rotating script public class RotateCube : MonoBehaviour { private Vector3 rotation = new Vector3(20, 30, 0); void Update() { transform.Rotate(rotation * Time.deltaTime); } } Create an InputManager class that instantiates the cube prefab at the touch position. 
public class InputManager : MonoBehaviour { public Camera camera; public GameObject cubePrefab; // Update is called once per frame void Update() { Touch touch; if (Input.touchCount \u0026gt; 0 \u0026amp;\u0026amp; (touch = Input.GetTouch(0)).phase == TouchPhase.Began) { CreateCube(touch.position); } else if (Input.GetMouseButtonDown(0)) { CreateCube(Input.mousePosition); } } void CreateCube(Vector2 position) { Ray ray = camera.ScreenPointToRay(position); Instantiate(cubePrefab, (ray.origin + ray.direction), Quaternion.identity); } } You can test it directly on an Android phone (ARCore-compatible). Connect the phone via USB, and test it with ARCore Instant Preview. It needs some extra services running on the device. Alternatively, you can just build and run the project for Android; it will install the app on the device, and you can play with your XR app. This completes the first part, and now we know how a typical AR app is built and delivered. Find the source code here Hello Web XR! Let’s say hello to WebXR. We will implement an example similar to the one in the last section, but this time without the Unity 3D platform and without building an app for the device. We will implement a simple web page that can do the same, using the WebXR-capable library three.js (https://threejs.org)\nCreate a web page Create a work folder and open it in the editor — I use IntelliJ, but you may use any text editor. Create an HTML file and host it on a web server — index.html. IntelliJ has a built-in web server which is just a click away, but you may use other easy options You may download the three.js dependencies on your own or just install them via a package manager — npm install three Follow the steps described by three.js in Creating-a-scene and update the index.html page. For an XR-enabled scene, keep renderer.xr.enabled = true; Now, index.html looks like the below, with on-touch logic which adds a cube on touching the screen. 
\u0026lt;script type=\u0026#34;module\u0026#34;\u0026gt; import * as THREE from \u0026#39;./node_modules/three/build/three.module.js\u0026#39;; import { ARButton } from \u0026#39;./node_modules/three/examples/jsm/webxr/ARButton.js\u0026#39;; var container; var camera, scene, renderer, lastmesh; var controller; init(); animate(); function init() { container = document.createElement( \u0026#39;div\u0026#39; ); document.body.appendChild( container ); scene = new THREE.Scene(); camera = new THREE.PerspectiveCamera( 70, window.innerWidth / window.innerHeight, 0.01, 20 ); var light = new THREE.HemisphereLight( 0xffffff, 0xbbbbff, 1 ); light.position.set( 0.5, 1, 0.25 ); scene.add( light ); renderer = new THREE.WebGLRenderer( { antialias: true, alpha: true } ); renderer.setPixelRatio( window.devicePixelRatio ); renderer.setSize( window.innerWidth, window.innerHeight ); renderer.xr.enabled = true; container.appendChild( renderer.domElement ); document.body.appendChild( ARButton.createButton( renderer ) ); var geometry = new THREE.BoxGeometry( 0.1, 0.1, 0.1 ); function onSelect() { var material = new THREE.MeshNormalMaterial(); var mesh = new THREE.Mesh( geometry, material ); mesh.position.set( 0, 0, - 0.3 ).applyMatrix4( controller.matrixWorld ); mesh.quaternion.setFromRotationMatrix( controller.matrixWorld ); scene.add(mesh); lastmesh = mesh; } controller = renderer.xr.getController( 0 ); controller.addEventListener( \u0026#39;select\u0026#39;, onSelect ); scene.add( controller ); window.addEventListener( \u0026#39;resize\u0026#39;, onWindowResize, false ); } function onWindowResize() { camera.aspect = window.innerWidth / window.innerHeight; camera.updateProjectionMatrix(); renderer.setSize( window.innerWidth, window.innerHeight ); } function animate() { renderer.setAnimationLoop( render ); } function render() { if(lastmesh){ lastmesh.rotation.x += 0.1; lastmesh.rotation.y += 0.2; } renderer.render(scene, camera ); } \u0026lt;/script\u0026gt; Host and test the web page 
Publish the page on the web server and test it. The page is now hosted on IntelliJ\u0026rsquo;s inbuilt web server; make sure external access is enabled. http://localhost:8001/hello-webxr/index.html. It is still not ready to be used in browsers; we need to do some customization in the browsers to make it work.\nOn PC, we need to install WebXR emulators in the browser to make it work. The VR emulator is great, but the AR emulator still doesn’t work well.\nOn mobile, open the Chrome browser and open the chrome://flags page. Search for XR and enable all the experimental features. Make sure the device has ARCore support. Now we want to test the page, which is running on the PC/laptop, in the Chrome browser on the Android phone. There are multiple ways to do that (port forwarding, hotspot, etc). The simplest is to join the same Wi-Fi network and access the page via IP instead of localhost. Type in http://192.168.0.6:8001/hello-webxr/index.html\nOnce the web page is loaded, it will ask to start the XR experience, and also ask for user consent for accessing the camera.\nOnce approved, it scans the area and you can start placing the cubes.\nSource code is available here\nTry this link on your Android phone right now, and experience WebXR. https://thinkuldeep.github.io/hello-webxr/\nI tried; a similar implementation can be done with Babylon.js, but the “immersive-ar” mode is not stable there when we host the web page; the “immersive-vr” mode, however, works great. So these web pages can be opened in any supported browser on XR devices; you get all the advantages of the web, and serve the XR content simply from your website. This completes our introduction to WebXR.\nConclusion We have got an introduction to XR as well as WebXR. There are some features of XR which may not be achievable with WebXR, but it is still a great option to take the technology to a wider audience. 
There are already great examples at these libraries\u0026rsquo; respective sites, and the community is also submitting some good sample projects and games. Try those, and stay tuned to WebXR; it is coming.\nIt was originally published at XR Practices Medium Publication\n","id":22,"tag":"xr ; ar ; vr ; unity ; webxr ; immersive-web ; spatial web ; general ; tutorial ; technology","title":"WebXR - the new Web","type":"post","url":"https://thinkuldeep.com/post/webxr-the-new-web/"},{"content":"In Part I of the article, we summarized our observations of eXtended reality’s foundational blocks - AR, VR and MR. We also discussed the popular and most relevant tools and implementations of the new technology. In Part II of the article, we will be looking at a few accessible and novel implementations of the tech.\nAt ThoughtWorks, we have delivered both XR solutions and components of XR SDKs (Software Development Kits) on the back of which XR solutions are built. We have explored Unity 3D technology beyond the gaming bucket. In fact, ThoughtWorks’ Technology Radar states,\n “\u0026hellip;Unity has become the platform of choice for VR and AR application development because it provides the abstractions and tooling of a mature platform\u0026hellip;Yet, we feel that many teams, especially those without deep experience in building games, will benefit from using an abstraction such as Unity\u0026hellip; it offers a solution for the huge number of devices.”\n We have also built an XR incubation center that lets us run XR experiments while collaborating on agile development practices like TDD and CI/CD for XR.\nAll this work with XR tech has helped us understand the space that much better. 
And, I have taken this opportunity to round up some really interesting use cases for the tech that will be ‘good-to-know’ for business leaders who want to explore the tech’s potential.\nUse case 1: training and maintenance XR tech has a direct impact on the improved efficiency and effectiveness of training, and maintenance and repair use cases.\nVR tech can generate and simulate environments that are either hard to reproduce or involve huge costs and/or risk. Examples of such scenarios are factory training sessions or those on fire safety or ‘piloting’ an aircraft.\nAR, on the other hand, is a great way to provide technicians with contextual support in a work environment. For instance, a technician wearing a pair of AR-enabled smart glasses can view digital content that’s superimposed on their work environment. This could be in the form of step-by-step instructions for a task or to sift through relevant manuals and videos while they are on the assembly line/or on field performing a task or it could be used to issue a voice command. The AR device is enabled to also take a snapshot of the work environment for compliance checks. Additional features include the technician being able to stream the experience to a remote expert, who can annotate the live environment and review the work.\nHere’s an interesting case study from ThoughtWorks of how XR tech is transforming airline operations.\nUse case 2: locate and map Autonomous vehicles leverage techniques like Simultaneous Localization and Mapping (SLAM) to build virtual maps and locate objects in an environment. Similar technology empowers XR-enabled devices to detect unique environments and track device positions within.\nThis Virtual Positioning System can help build indoor positioning applications for large premises like airports, train stations, warehouses, factories and more. 
Such a system also allows users to mark, locate and navigate to specific target points.\nAt ThoughtWorks, we recently implemented a concept for indoor positioning at our offices. It’s a mobile app called Office Explorer that, when used with an AR-enabled smartphone or tablet, presents the entire office on a 2D map that visitors can use to explore the space.\nThis use case was also quoted at Google DevFest\nWe have built office-explorer #DevFest2019 pic.twitter.com/0yRz7Euo8J\n\u0026mdash; Kuldeep Singh (@thinkuldeep) September 10, 2019 Visitors or new ThoughtWorkers can follow the augmented path on the app to reach meeting rooms and work stations. This app was built using Unity 3D, ARCore and ARKit. The backend services and web portal were built on React, Kotlin, MongoDB and Micronaut.\nUse case 3: product customization and demonstration Virtual, CAD models or paper-drawn prototypes usually precede final physical products. What’s more, the former also allows for affordable customizations in an XR environment. Consumers have an opportunity to try 3D models of products before they buy them.\nBelow is a quick concept that was created at ThoughtWorks, where one could customize a particular virtual model of a vehicle with respect to colour, interiors and fabrics. They could also step inside the virtual car and experience the personalizations, evaluate how their car looks and fits in their garage or in front of the house.\nThoughtWorkers experimenting with AR-enabled customization of a vehicle. Our app used Unity 3D and ARCore to detect the plane surface and place the product’s 3D model before commencing customizations. The same technology can be used to build an entire driving experience - a great sales tool! 
An additional benefit of this approach is one of the most popular features when demonstrating a product: the complete 360° visualization of it.\nHere is how ThoughtWorks helped Daimler build innovative tools to support the latter’s futuristic sales experience.\nUse case 4: contextual experiences Gartner states that 46% of global retailers planned to deploy either AR or VR solutions to meet customer service experience requirements. The global research and advisory firm also foretells that no less than 100 million consumers will use AR to shop both online and in-store in the future.\nBusinesses are being disrupted by both the sheer amount of data they have at their disposal and the way they engage with that data. This data is in the form of buying history, product reviews, product health, recommendations, usage statistics, feature-level analysis, comparative studies, social sentiment analysis for the product and more.\nA lot of customers look to such data as they make decisions they can trust and be confident in. This gives businesses the opportunity to present (hopefully unbiased) data within the right context, empowering consumers to make their decisions quickly.\nThoughtWorks’ TellMeMore app displays relevant data for a book An implementation called TellMeMore is an app that we designed at ThoughtWorks. It aims at providing a better in-store product experience by overlaying information like book reviews, number of pages and comparative prices on/beside the product. This app was created using Vuforia and Unity3D.\nThis is illustrative of opportunities to enhance customer experiences to the extent that without opening a product, customers have access to product insights and even experience detailed 3D models of the same. 
Add to this, the option of amplifying seemingly boring static objects such as event posters or product advertisements when viewed through an AR companion app.\nThoughtWorks XR day’s paper poster turned into an AR experience: Use case 5: customer engagement 2016’s fad, Pokémon GO, is perhaps AR’s most famous champion. Historically, gamification and VR have proven to better engage users. Today, shopping malls and business conferences leverage VR experience studios to engage kids and customers alike. We even have kids riding a roller coaster without sitting on an actual structure.\nThoughtWorkers have implemented a smartphone-based AR concept called LookARound that can engage visitors at expos and events. The flow of activity is this -\n Users scan a visual cue (like a QR code) within the event area Users have to find a 3D goodie that\u0026rsquo;s hidden close to the visual cue. It will be superimposed (Pokémon GO style) on the 3D environment around the user. The app apprises users of the distance between them and the 3D goodie Users can collect the goodie and compete with other players. LookARound was a huge hit at a two-day internal office-wide event, ‘India Away Day 2019.’ 1700 ThoughtWorkers explored different points of interest at the venue, such as break-out sessions, tech stalls and event areas while engaging with AR.\nSnapshot of the LookARound App at ThoughtWorks India Away Day 2019 At the Away Day, 500 ThoughtWorkers competed with each other at a time. They were encouraged by 3D leaderboards showcased on Microsoft Hololens and Oculus Quest.\nConclusion XR tech is evolving at such a rate that the untapped potential boggles the mind. And, we believe the software’s evolution should be matched by the hardware.
The latter needs to be user-friendly in terms of field of view, and lightweight yet with immense processing and battery power.\nThe software, on the other hand, needs to take more informed advantage of AI and ML to better estimate diverse environments, making it adaptable to any and every possible scenario. Businesses also need to proactively build as many use cases as possible, fix shortcomings and contribute to XR development practices. VR is a reality and has us primed for the pronounced impact of AR and MR tech in the future.\nThis article was originally published on ThoughtWorks Insights and XR Practices Publication\n","id":23,"tag":"xr ; ar ; vr ; mr ; usecases ; thoughtworks ; technology","title":"eXtending reality with AR and VR - Part 2","type":"post","url":"https://thinkuldeep.com/post/extending_reality_with_ar_and_vr-2/"},{"content":"The way we interact with technology today is not too far from what was predicted by futuristic, sci-fi pop culture. We are living in a world where sophisticated touch and gesture based interactions are collaborating with state-of-the-art wearables, sensors, Augmented Reality (AR), Virtual Reality (VR) and Mixed Reality (MR). And, this is eXtending user experiences to be extremely immersive.\nAdd to this, Gartner’s expectation that 5G and AR/VR implementation will transform not only customer engagement but brands’ entire product management cycles as well. In fact, according to IDC, spending on AR and VR is expected to reach US $18.8 billion this year, while enjoying more than a 75% compound annual growth rate through 2023.\nEven world famous tech conferences like CES 2020 are not immune to the allure of AR/VR products. In fact, CES recognized Human Capable’s AR Norm Glasses as the best of innovation at the event.
We are also seeing a rise in several new players and devices like NReal, RealMax and North.\nThe Seismic Shifts For over two decades, ThoughtWorks has been steering organizations through the dynamic world of tech innovation, right from AI and ML to the above-mentioned AR and VR. The Seismic Shifts report is the company’s broad round up of upcoming tech trends and a ‘how-to’ playbook.\nFor the purpose of this article, I’d like to draw your attention to one particular shift, Evolving Interactions. ThoughtWorks’ report on the Seismic Shifts states,\n “We must evolve our thinking—and our capabilities—beyond the keyboard and the screen. Mixed reality (MR) will become the norm.”\n If nothing else, this is indicative of the radical evolution of people’s interaction with technology.\nTypes of Reality To get a better handle on what ‘beyond the keyboard and the screen’ means, let’s look at the different terms used when discussing reality tech -\nAugmented reality AR overlays digital content on to a real environment. Here is a list of the multiple tools that enhance AR implementation in the market today -\nSmart glasses project digital content right in front of the user\u0026rsquo;s eye. The last 5 years have seen the progress of smart glasses from bulky gadgets to lightweight glasses packed with more processing power, better battery life, a wider field of view and consistent connectivity. Google Glass, Vuzix, Epson, NReal are some of the popular devices in this segment.\nAR capable smartphones and tablets that run AR applications are the most affordable link to augmented reality use cases. The ecosystem that supports AR application development for Android and iOS devices is rich with several tools and techniques like ARCore, ARKit, Unity AR Foundation. Smartphone manufacturers are also powering their more recent phones with AR-compatible hardware.
For example, here is the ever growing list of ARCore supported devices from Google.\nProjection-based display devices allow digital content to be projected onto a glass surface. For example, Mercedes’ Head-Up Display (HUD) casts content onto the vehicle’s windshield glass. Drivers can view augmented information like maps, acceptable speed limits and more.\nVirtual Reality VR is an immersive experience that shuts out the physical world and takes one into a virtual or computer-generated world.\nHead Mounted Display (HMD) devices on the market include Oculus Quest, Oculus Go, HTC Vive, Sony PlayStation, Samsung GearVR etc. Our bi-annual report that provides nuanced advice around technologies, tools, techniques, platforms and languages, also called the Technology Radar, puts a spotlight on Oculus Quest, stating,\n “\u0026hellip;changes the game, becoming one of the first consumer mass-market standalone VR headsets that requires no tethering or support outside a smartphone”. Furthermore, Oculus has recently come up with a hand-tracking solution on Oculus Quest which is a great market differentiator for the device.\n Smartphone-based VR is an economical choice to experience VR tech. Google Cardboard and similar variants are quite popular in this segment. All one needs is a smartphone that fits into the viewer box. Media players and media content providers like YouTube have adapted and introduced the capability of consuming content in a 360-degree VR environment.\nMixed Reality MR is a combination of both AR and VR, where the physical world and digital content interact. Users can interact with the content in 3D. Besides, MR devices recognize and remember environments for later, while also keeping track of the device’s specific location within an environment.\neXtended Reality, XR The familiarity with the foundational blocks allows us to move on to the bigger trend that businesses have an opportunity to explore - eXtended Reality.
It’s an umbrella term for scenarios where AR, VR, and MR devices interact with peripheral devices such as ambient sensors, actuators, 3D printers and internet-enabled smart devices such as home appliances, industrial connected machines, connected vehicles and more.\nThe term XR is fast becoming common parlance thanks to the synergy of AR, VR and MR tech to create 3D content, and cutting edge development tools and practices. The industry is examining investment opportunities in XR platforms that allow shared services across devices, development platforms and enterprise systems like ERP, CRM, IAM etc.\nFor instance, Lenovo ThinkReality Platform and HoloOne Sphere are providing device-independent services, an SDK (Software Development Kit) and a cloud-based central portal, which further integrate with enterprise systems. WebVR, a web browser-based XR experience, has also been mentioned in ThoughtWorks Technology Radar,\n “WebVR is an experimental JavaScript API that enables you to access VR devices through your browser. If you are looking to build VR experiences in your browser, then this is a great place to start.”\n Additionally, Immersive Web from Google and WebXR from Mozilla are other prominent examples.\nDevelopment platforms like Unity 3D have gone ahead and crafted cross-device features. This is evident in how Unity AR Foundation supports both platforms - Android devices that use Google ARCore and iOS devices that use ARKit to build AR experiences.\nConclusion An overview of the foundational blocks gives us a small insight into the myriad possibilities of how we might interact with the world around us.
In Part II of this series on eXtended reality, we will be deep-diving into interesting use cases of the tech and exploring realities that were once only possible in the movies.\nThis article was originally published on ThoughtWorks Insights and XR Practices Publication\n","id":24,"tag":"xr ; ar ; vr ; mr ; fundamentals ; thoughtworks ; technology","title":"eXtending reality with AR and VR - Part 1","type":"post","url":"https://thinkuldeep.com/post/extending_reality_with_ar_and_vr-1/"},{"content":"Covid19 has challenged the way we eat, the way we commute, the way we socialize, and the way we work. In short, it has impacted the way we live. More than a million people have been infected and the number is growing exponentially; the whole world is impacted. We need to find new ways of life and change our focus and priorities to save nature and humanity. Technology can play a great role in coping with it. eLearning, eCommerce, robots in healthcare and contactless technologies are already helping.\nRemote working is the new normal; it is getting accepted by many industries and institutions beyond IT and consulting firms. Even traditional educational institutions have started live classes. My 8-year-old son Lakshya’s new academic session started yesterday. My wife and I joined the orientation session, he met all his classmates over video conferencing, and from today onward he will attend his school from home.\nI am part of an XR development team at ThoughtWorks; we develop software for a couple of XR devices. The government of India has locked down the whole country to strictly implement social distancing, and we all are working from home now.
In this article, I am sharing how 100% remote working has impacted me and my team, and how it has impacted business continuity.\nDo read these tips on working from home by ThoughtWorkers\n https://medium.com/@leovikrant78/the-positive-side-how-work-from-home-can-improve-quality-of-life-b08ab1f981df https://www.thoughtworks.com/remote-work-playbook https://martinfowler.com/articles/effective-video-calls.html My Day This section describes what my day looks like, and how much of it is different and how much is the same in comparison to earlier.\nMorning Energiser Workout The day now does not start with a morning walk but with an indoor workout session with colleagues. ThoughtWorks Gurgaon office colleagues have started a morning energizer workout session, with basic stretching and yoga. Earlier, at this time, we used to start preparing ourselves for office and the kids for school, and some of us would be in traffic during this period or in the next two hours. Earlier, all the effort went into dropping the kids at the school bus stop and reaching office in time.\nSupport my kid to attend school from home He got a schedule from school to join sessions starting at 9 am.\nHe did a rehearsal with me on the video conferencing tool Zoom and is now equipped to use Zoom on his own. He is super happy to meet his teachers and classmates today. He has not seen his friends since the last school session, which ended in the first week of March. He got promoted to 4th standard. Earlier, I could not support my wife in all such activities after reaching office. But now, I have some more flexibility and more time to plan things. While she prepares a delicious breakfast, I get my son ready for school from home.\nTeam Standup and Tech Huddles My team is working from home from different parts of India and is distributed across 3 offices in India (Gurgaon, Pune, Coimbatore).
There are multiple projects running in this team, and I am shared across a few projects, so I attend multiple standups and tech huddles.\nXR Development Standup, with a virtual background :) The day generally starts with standup where we discuss what was done yesterday, what is the plan for today and if there are any impediments. Then the team signs up for the tasks they want to do today, and calls out things which they think the team needs to know. Since everyone is working from home, we generally have a full house; no ice-cream meters these days for joining late, or stuck-in-traffic excuses.\nToday many of the team members called out that they are canceling their planned leaves due to the Covid19 situation and the criticality of the work we have signed up for. Today we have signed up an interesting piece of work with the client. The team wants to deliver the signed-up work at all costs. What a committed team!\nRemote Pair Development Pair development is very core to ThoughtWorkers; we do things in pairs, and get first-hand feedback immediately and fix things then and there. It helps us produce better quality products and reduces the burden on a single human mind.\nXR development has device dependency, and remote pair development is another challenge, but we have been doing remote pair development for a couple of months between teams located at the Gurgaon and Coimbatore offices, and now we are getting better at remote pair development. Some of my time goes in active development pairing; here is how we solved some of the challenges\n A limited number of devices — Developers use emulators to test the functionality and have written automated functional tests that run on CI environments to further confirm that the development build has the right set of functionality. We make sure our QA team is well equipped with real devices to test it further.
We follow a concept called “Dev-Box” (aka volleyball test, shoulder check), where a developer pair invites the QA and a few team members to check the developed functionality on the developer’s machine before committing the code to the repository. So the developer has a chance to try the functionality on a real XR device before pushing the code. Showing XR functionality remotely — Some devices support Mixed Reality Capture natively, and some need external tools to stream the device experience remotely. We have also built a custom experience streamer to share the device experience. Developers sharing the XR streams on video calls Integrations with multiple tools and technologies — Since teams are distributed, it is very important to call out dependencies, and we need to make sure we are not doing duplicate work and duplicate spikes to uncover unknowns. We focus on defining contracts first and then share a mock implementation with the dependent team to resolve the dependency early. Some of my time goes into pairing with different technology leads in defining project architecture. Pairing with UX Designer: User experience is a key thing in XR development, and today some of my time also went into discussing the user experience design for the upcoming product. We discussed how we can make the design completely hands-free, so the user can control the 3D app just by head movement and gaze. Distance between interactable items and the field of view of the device is a big consideration while designing the user experience. I am happy that I am able to share my experience with multiple XR devices with the UX designer of my team.\nDiscussing the concept of the field of view and user interactions on video conferencing was not easy.
We figured out an easy way: we created an FoV box with a gaze pointer at the center, then grouped them, so moving the FoV box also moves the gaze pointer, giving behavior similar to how the XR device shows things within the FoV.\nPairing with UX designer and business analyst Pairing for Planning and Showcases: Being part of multiple projects, I need to be in sync with project planning and requirements across the projects, and I can be helpful for project iteration managers in defining dependencies and planning the sprint accordingly. So, some of my time goes into discussing the project plan and planning out the showcases.\n Get buy-in from the client — we have taken buy-in from the client to take the XR devices home for remote development. Internet availability at home — Every ThoughtWorker got a Rs. 5000/- reimbursement to make arrangements at home and can get things like internet routers, inverters, headphones. It helped team members who don’t have good internet speed prepare for remote work. ThoughtWorks is trying its best to enable all of us to work from home. MD Townhall Joined a townhall from the MD’s office and Heads of Tech — an overview of how Covid19 is impacting us and industries, our plan to cope with uncertainties, and what is new coming up in India: new office spaces, projects, and some less painful actions. We discussed some interesting social initiatives going on in ThoughtWorks to support the people of India in this Covid19 situation. Working from home when kids are around My 2.5-year-old younger son Lovyansh has never seen me at home every day, and that too working for the whole day (sometimes night ;). He expects more attention from me after seeing me more. He just wants to press keys on the keypad, and touch the laptop, but I just can’t afford that in this situation. He has started controlling his curiosity now; sometimes he is happy just by watching what I am doing.
I hope you have noticed the matching dress; he is learning colors these days :) Spent good time completing a few spikes on integrating a Unity application into an Android native application and interoperability around these technologies.\nIntegrated the client’s enterprise Active Directory with the XR app that we are building for the client. This is a step toward making the XR devices enterprise-ready.\nSome of the spikes I have done are already published at Enterprise XR\nPick up Grocery and Essentials There are online grocery and essentials delivery vendors around my house, but none of them can deliver at home due to Covid19; however, they are allowed to deliver at the community center following all the hygiene and Govt protocols. The society staff allows people with masks and sanitized hands to pick up the delivery. The community center is a 10-minute walk away in the same society area. I need to get out of my house for some time, depending on the delivery time. Today it was in the afternoon.\nEvening Energiser The Pune team organizes an evening energizer to have fun, stretch, and take a break together. Energizers are important to stay active and focused; getting together on a video call to share and show what is happening around - kids, pets - really connects us better as a team.\nNew Joiner Induction One XR expert joined our team on 31st March, and it is not the best time to join or leave any organization. But at ThoughtWorks, we have taken up this challenge, and now new joiners get all the joining formalities done digitally. ThoughtWorks has planned 100% remote induction, immersion workshops and hands-on dev boot-camps for the new joiners. I am his remote buddy, and we keep in touch remotely.\nEvening meetings The client team is primarily distributed in the US and China. We generally have some evening meetings with the client or with our ThoughtWorkers in other timezones.
Earlier, these evening meetings were the most problematic for me to attend: if I accepted a meeting at 5:30 pm and tried to leave between 6–8 pm, I had to pass through a lot of traffic; in that case I might opt to stay in office till 8 pm, which meant everything (dinner, evening walk, sleep time) got delayed.\nNow working from home gives the flexibility to attend evening calls, and even attending a late-evening call is fine. So we are now more accessible to other timezones.\nAlso, earlier it was a struggle to schedule a meeting in office hours, as we needed to check the availability of meeting rooms for the participants across the offices and book them. Most of the ThoughtWorks offices in India are preparing additional spaces, as existing ones are getting crowded; finding a meeting room is a nightmare until we get more spaces.\nKeep multiple video conferencing applications ready on your PC; different clients and users have different preferences. Get hands-on with these tools for sharing screen, content and chat while on a video call.\nBreak Time is Family Time I noticed that the kids are just waiting for me to come out of the room, and they always have a lot of things to show and talk about. Each break time is now family time. I keep getting instructions from my wife to drink water and walk after some time :)\nWriting this article And of course, after all of the above, I am spending some time writing this article. I am planning to do some indoor walking after completing it. This covers my day.\nImpact of Full Remote Working In the last section, I explained what my day looks like; it is not much different from my day in office, and yet it is not quite the same either. Let’s now see how full remote working, or work from home, has impacted us, the projects, the businesses, and the organization. This is applicable to non-XR projects as well.\nSocial Connect Social distancing is a must to protect ourselves and others from Covid19 infections.
As a result, social connect is declining, and we do miss office working hours and the energetic office environment. We are trying to fill the gap with virtual meetings, but they have their own pros and cons. Global connect is improving, though.\nFamily Connect Family connect is improving; we have more time for family, and also more time to work.\nAgility Now we have more flexible hours than earlier. The hours we spent preparing for and commuting to office are now added flexible hours which we can plan keeping our priorities in mind. We now have more control over time; we can plan our meetings in extended hours, and spend quality time with family as well. We are becoming location independent, and distributed agile practices are growing naturally. Global-first is coming naturally at the org level.\nBusiness Continuity Before going to 100% work from home, we rehearsed it for a couple of days, and we measured that there is almost no impact on business deliveries. Now we are 100% working from home, and we have been delivering on all the commitments on time. There is almost no reduction in productive hours, and people have also canceled planned leaves, which adds effective productive hours.\nOrganization At the organization level, multiple initiatives need to be planned, in addition to moving operations, people support and people enablement remote. The future will soon be here with teleportation meetings, where we can be ported virtually to other locations.\n New security policies may be required The idea of having big office space may not be the future The location-independent global-first organization New leave policies Meeting infra for virtual meetings Remote working charter Cost optimizations Measures to face another economic slowdown. Oops, 10:30 pm; I will probably update this section later. Conclusion Covid19 has impacted the world from all sides and forced us to work from home.
We are observing a lot of positive sides of working from home, without any major impact on business commitments. We hope we will be rid of Covid19 soon, and wish a speedy recovery for the ecosystem.\nIt was originally published at XR Practices Medium Publication\n","id":25,"tag":"xr ; ar ; vr ; mr ; work from home ; covid19 ; thoughtworks ; pair-programming ; xp ; agile ; practice","title":"XR development from home","type":"post","url":"https://thinkuldeep.com/post/xr-development-from-home/"},{"content":"In the previous article, we discussed implementing multi-factor authentication for an android application, and in this article we will cover another enterprise aspect: interoperability.\nInteroperability is a key aspect when we build enterprise solutions using XR technologies. Enterprises have digital assets in the form of mobile apps, libraries and system APIs, and they can’t just throw away these existing investments; in fact, the XR apps must be implemented in such a way that they are seamlessly interoperable with them. This article describes interoperability between XR apps and other native apps. It takes Unity as the XR app development platform and explains how a Unity-based app can be interoperable with existing Java-based android libraries.\nRead the below article by Raju Kandaswamy; it describes steps to integrate an XR app built using Unity into any android app.\n Embedding Unity App inside Native Android application While unity provides android build out of the box, there are scenarios where we would like to do a part unity and part… medium.com\n The article below describes interoperability in the opposite direction of what Raju has described: we will integrate an existing Android library into the Unity-based XR app.\nBuild an Android Library In the last article, we implemented a user authenticator.
Let’s now expose a simple user authenticator implementation as an android library.\n Go to File \u0026gt; New \u0026gt; New Module \u0026gt; Select Android Library Implement a UserAuthenticator with simple authentication logic. Each method accepts a Context argument, but it is currently not used, so you may ignore it for now.\n public class UserAuthenticator { private static UserAuthenticator instance = new UserAuthenticator(); private String loggedInUserName; private UserAuthenticator(){ } public static UserAuthenticator getInstance(){ return instance; } public String login(String name, Context context){ if(loggedInUserName == null || loggedInUserName.isEmpty()){ loggedInUserName = name; return name + \u0026#34; Logged-In successfully \u0026#34;; } return \u0026#34;A user is already logged in\u0026#34;; } public String logout(Context context){ if(loggedInUserName != null \u0026amp;\u0026amp; !loggedInUserName.isEmpty()){ loggedInUserName = \u0026#34;\u0026#34;; return \u0026#34;Logged-Out successfully \u0026#34;; } return \u0026#34;No user is logged in!\u0026#34;; } public String getLoggedInUser(Context context){ return loggedInUserName; } } Go to Build \u0026gt; Make Module “userauthenticator” — It will generate an aar file under build/outputs/aar. This is the library that can be integrated with Unity in C#. The next section describes how it can be used in Unity.\nBuild an XR app in Unity Let’s now build the XR app in Unity to access the API exposed by the android library implemented in the last step. Unity refers to external libraries as plugins.\n Create a unity project We have 3 APIs in the UserAuthenticator of the android library; let\u0026rsquo;s build a 3D UI layout with buttons and text elements, as follows. The source code is shared at the end of the article. Add a script ButtonController.
This script describes the ways to access Java classes.\n `UnityEngine.AndroidJavaClass` - Loads a given class in Unity; using reflection we may invoke any static method of the class. `UnityEngine.AndroidJavaObject` - Keeps the java object and automatically disposes of it once done. Using reflection we can invoke any method of the object. public class ButtonController : MonoBehaviour { private AndroidJavaObject _userAuthenticator; ... void Start() { AndroidJavaClass userAuthenticatorClass = new AndroidJavaClass(\u0026#34;com.tw.userauthenticator.UserAuthenticator\u0026#34;); _userAuthenticator = userAuthenticatorClass.CallStatic\u0026lt;AndroidJavaObject\u0026gt;(\u0026#34;getInstance\u0026#34;); loginButton.onClick.AddListener(OnLoginClicked); logoutButton.onClick.AddListener(OnLogoutClicked); getLoggedInUserButton.onClick.AddListener( OnGetLoggedUserClicked); } private void OnLoginClicked() { loginResponse.text = _userAuthenticator.Call\u0026lt;string\u0026gt;(\u0026#34;login\u0026#34;, inputFieldUserName.text, getUnityContext()); } private void OnLogoutClicked() { logoutResponse.text = _userAuthenticator.Call\u0026lt;string\u0026gt;(\u0026#34;logout\u0026#34;, getUnityContext()); } private void OnGetLoggedUserClicked() { getLoggerInUserResponse.text = _userAuthenticator.Call\u0026lt;string\u0026gt;(\u0026#34;getLoggedInUser\u0026#34;, getUnityContext()); } private AndroidJavaObject getUnityContext() { AndroidJavaClass unityClass = new AndroidJavaClass(\u0026#34;com.unity3d.player.UnityPlayer\u0026#34;); AndroidJavaObject unityActivity = unityClass.GetStatic\u0026lt;AndroidJavaObject\u0026gt;(\u0026#34;currentActivity\u0026#34;); return unityActivity; } } Attach the ButtonController script to an empty game object. Link all the buttons, texts, and input field components. Copy the android library into the /Assets/Plugins/Android folder. Switch to Android as a Platform if not already done.
— Go to File \u0026gt; Build Settings \u0026gt; Select Android as a Platform \u0026gt; Click on Switch Platform.\n Update the player settings — Go to File \u0026gt; Build Settings \u0026gt; Player Settings, and correct the package name and min API level in “Other Settings”\n Connect your android device and click on Build and Run, or export the project and test it in Android Studio.\n Here is a demo. Type in the user name and click on login; it will call the Android API from the library and show the value returned by the library in the text next to the button.\nThe other buttons work the same way; test the logic you have written in the library. \nWith this, we are done with basic integration. The next section describes an approach with callback methods.\nIntegration using Callbacks Callbacks are important, as the android system may not always return results synchronously; there are multiple events and listeners which work on callbacks.\n Let’s implement a callback interface in the android library. public interface IResponseCallBack { void OnSuccess(String result); void OnFailure(String errorMessage); } Update the UserAuthenticator implementation to call the OnSuccess and OnFailure callbacks. public class UserAuthenticator { ... public void login(String name, IResponseCallBack callBack, Context context){ if(loggedInUserName == null || loggedInUserName.isEmpty()){ loggedInUserName = name; callBack.OnSuccess(loggedInUserName + \u0026#34; Logged-In successfully \u0026#34;); } else { callBack.OnFailure(\u0026#34;A user is already logged in\u0026#34;); } } public void logout(IResponseCallBack callBack, Context context){ if(loggedInUserName != null \u0026amp;\u0026amp; !loggedInUserName.isEmpty()){ loggedInUserName = \u0026#34;\u0026#34;; callBack.OnSuccess(\u0026#34;Logged-Out successfully \u0026#34;); } else { callBack.OnFailure(\u0026#34;No user is logged in!\u0026#34;); } } ...
} Build the Android library again and replace it in the Unity project under Assets/Plugins/Android. Update the corresponding code in the ButtonController of Unity. Implement the Java interface using UnityEngine.AndroidJavaProxy. private AuthResponseCallback _loginCallback; private AuthResponseCallback _logoutCallback; void Start() { ... _loginCallback = new AuthResponseCallback(loginResponse); _logoutCallback = new AuthResponseCallback(logoutResponse); } private class AuthResponseCallback : AndroidJavaProxy { private Text _responseText; public AuthResponseCallback(Text responseText) : base(\u0026#34;com.tw.userauthenticator.IResponseCallBack\u0026#34;) { _responseText = responseText; } void OnSuccess(string result) { _responseText.text = result; _responseText.color = Color.black; } void OnFailure(string errorMessage) { _responseText.text = errorMessage; _responseText.color = Color.red; } } private void OnLoginClicked() { _userAuthenticator.Call(\u0026#34;login\u0026#34;, _loginCallback, inputFieldUserName.text, getUnityContext()); } private void OnLogoutClicked() { _userAuthenticator.Call(\u0026#34;logout\u0026#34;, _logoutCallback, getUnityContext()); } Here is the demo showing the failure response in red.\nThe response callback implementation sets the color in the text fields.\nCode Branch till this point is added here — Android Library, Unity App\n\nIn the next section, we will discuss more complex integration with native components and intents.\nSingle Sign-On Integration using device native WebView In this section, we will integrate android\u0026rsquo;s web view with the unity app, and also see invocation using Intents.\nRefactor Android Library Let’s refactor the android app we implemented in the last article and use it as a library. We will also learn to pass arguments to an Intent and get the response from it.\n Wrap the WebView invocation in an activity — It takes a URL and returnURI as input, intercepts the returnURI, parses the “code” from it, and returns it in a ResultReceiver callback.
public class WebViewActivity extends AppCompatActivity { ... @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); final String interceptScheme = getIntent().getStringExtra(INTENT_INPUT_INTERCEPT_SCHEME); final String url = getIntent().getStringExtra(INTENT_INPUT_URL); webView = new WebView(this); webView.setWebViewClient(new WebViewClient() { public boolean shouldOverrideUrlLoading(WebView view, String url) { Uri uri = Uri.parse(url); if (interceptScheme !=null \u0026amp;\u0026amp; interceptScheme.equalsIgnoreCase(uri.getScheme() +\u0026#34;://\u0026#34; +uri.getHost())) { String code = uri.getQueryParameter(\u0026#34;code\u0026#34;); ResultReceiver resultReceiver = getIntent().getParcelableExtra(INTENT_RESPONSE_CALLBACK); if(resultReceiver!=null) { Bundle bundle = new Bundle(); bundle.putString(\u0026#34;code\u0026#34;, code); resultReceiver.send(INTENT_RESULT_CODE, bundle); } finish(); return true; } view.loadUrl(url); return true; } }); WebSettings settings = webView.getSettings(); settings.setJavaScriptEnabled(true); setContentView(webView); webView.loadUrl(url); } ... } User Authenticators Login method invokes the web-view activity with an SSO URL, and callback. public void login(IResponseCallBack callBack, Context context){ if(accessToken == null || accessToken.isEmpty()){ Intent intent = new Intent(context, WebViewActivity.class); intent.putExtra(WebViewActivity.INTENT_INPUT_INTERCEPT_SCHEME, HttpAuth.REDIRECT_URI); intent.putExtra(WebViewActivity.INTENT_INPUT_URL, HttpAuth.AD_AUTH_URL); intent.putExtra(WebViewActivity.INTENT_RESPONSE_CALLBACK, new CustomReceiver(callBack)); context.startActivity(intent); } else { callBack.OnFailure(\u0026#34;A user is already signed in\u0026#34;); } } CustomReceiver.OnReceivedResult gets an access_token using the code returned by the web-view activity. access_token is then saved in the UserAuthenticator. 
public class CustomReceiver extends ResultReceiver{ private IResponseCallBack callBack; CustomReceiver(IResponseCallBack responseCallBack){ super(null); callBack = responseCallBack; } @Override protected void onReceiveResult(int resultCode, Bundle resultData) { if(WebViewActivity.INTENT_RESULT_CODE == resultCode){ String code = resultData.getString(INTENT_RESULT_PARAM_NAME); AsyncTask\u0026lt;String, Void, String\u0026gt; getToken = new GetTokenAsyncTask(callBack); getToken.execute(code); } else { callBack.OnFailure(\u0026#34;Not a valid code\u0026#34;); } } } The access_token is used to get the user name from the Azure Graph API. The source code is attached at the end. public void getUser(IResponseCallBack callBack){ if(loggedInUserName == null || loggedInUserName.isEmpty()){ if(accessToken != null \u0026amp;\u0026amp; !accessToken.isEmpty()){ try { new GetProfileAsyncTask(callBack).execute(accessToken).get(); } catch (Exception e) { callBack.OnFailure(\u0026#34;Failure:\u0026#34; + e.getMessage()); } } else { callBack.OnFailure(\u0026#34;No active user!\u0026#34;); } } else { callBack.OnSuccess(loggedInUserName); } } All the HTTP communication is wrapped in a utility class HttpAuth; this completes the refactoring of the android library. Since we have added a new activity, don’t forget to add it to the android manifest.\nUnity App Invoking Native WebView in Android Library This example is now doing SSO with the organization’s active directory (AD); the username and password will be accepted at the organization’s AD page.\n Let’s change the layout a bit. We don’t need an input field now. There is no major change in ButtonController, except the signature change for the library APIs. Android now handles the intent in a separate thread, and the callback in Unity may not be able to access the game object elements, as they may be disabled while another activity is in the foreground. The status of Unity elements should be checked on the main thread. 
The callbacks now update a few helper variables, and the Update method updates the Unity elements on the main thread whenever there is a change in the state.\n private void Update() { //set text and color for response labels } private class AuthResponseCallback : AndroidJavaProxy { private int index; public AuthResponseCallback(int index) : base(\u0026#34;com.tw.userauthenticator.IResponseCallBack\u0026#34;) { this.index = index; } void OnSuccess(string result) { labels[index] = result; colors[index] = Color.black; } void OnFailure(string errorMessage) { labels[index] = errorMessage; colors[index] = Color.red; } } Since the Android library now has an Activity, it requires the dependency “androidx.appcompat:appcompat:1.1.0” in the Gradle build. We need to use a custom Gradle template in Unity: Go to File \u0026gt; Build Settings \u0026gt; Player Settings \u0026gt; Publishing Settings. Choose Custom Gradle Template. It will generate a template file in Plugins. Add the above dependency in the Gradle template. Make sure you don’t change **DEPS** Build/Export the project and run on Android. You have completed the native integration. Here is a demo of integrated SSO in Unity. \nSource code can be found at the following git repo — Android Library, Unity App\nConclusion This article describes the basics of interoperability between XR apps and android native libraries. In the next article, we will see how to share the user context across XR apps.\nThis article was originally published on XR Practices Publication\n","id":26,"tag":"xr ; ar ; vr ; mr ; integration ; enterprise ; android ; unity ; tutorial ; technology","title":"Enterprise XR - Interoperability","type":"post","url":"https://thinkuldeep.com/post/enterprise-xr-interoperability/"},{"content":"XR use cases are growing with advancements in the devices, internet and development technologies. There is an ever-growing demand to build enterprise use cases using ARVR technology. 
Most enterprise use cases eventually require integration with an enterprise ecosystem such as IAM (Identity and Access Management), ERP, CRM, and single sign-on with other systems.\nMost organizations protect digital assets with a single sign-on using Multi-Factor Authentication (MFA). MFA is generally a web-browser based authentication: the browser redirects to the tenant’s authentication page, where the user provides their credentials and then confirms another factor (PIN, OTP, SMS, or mobile notification); once this succeeds, the browser is redirected back to the protected resource. Users generally trust the organization’s AD page to provide credentials. So it is important to include this web-based flow in XR app development.\nThis series of posts covers the end-to-end flow for building an XR application integrated with the organization\u0026rsquo;s web-based authentication flow.\nSetup Active Directory This section describes the AD setup on Microsoft Azure. A similar setup is available on other active directories. Here are the steps you need to follow to configure AD for a mobile app.\n Open Azure Portal with valid credentials, please sign up if you don’t have one, https://portal.azure.com (currently there is a 28-day free subscription to try out; go to “Add subscription” and choose Free Trial) Search for Azure Active Directory Go to App Registration \u0026gt; New Registration — Fill in the details for a new app. Note down the client id and other parameters. Ref [4] in references for more details.\nBuild an Android App integrated with Organization AD There are multiple ways to integrate and build an android app that can communicate with an organization’s AD for authentication. There are proprietary libraries (from Google, Microsoft, Facebook etc.) that can be included in the project so that you can directly communicate with the respective AD, but these are helpful mainly when Basic Authentication is enabled. 
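To make the browser flow concrete before the setup steps, here is a small, plain-Java sketch of how the authorization URL for such a flow is typically assembled (the client_id shown is the sample one used later in this article; the helper class and redirect URI are illustrative assumptions, not part of any SDK):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical helper: assembles an OAuth2 authorization URL by
// URL-encoding each query parameter value.
public class AuthUrlBuilder {

    public static String build(String baseUrl, Map<String, String> params) {
        StringBuilder sb = new StringBuilder(baseUrl);
        char sep = '?';
        for (Map.Entry<String, String> e : params.entrySet()) {
            sb.append(sep).append(e.getKey()).append('=').append(encode(e.getValue()));
            sep = '&';
        }
        return sb.toString();
    }

    private static String encode(String value) {
        try {
            return URLEncoder.encode(value, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e); // UTF-8 is always available
        }
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("client_id", "86501f6d-0498-40ab-b3c9-81ab8ffef1ac");
        params.put("scope", "openid");
        params.put("response_type", "code");
        params.put("redirect_uri", "xrapp://auth-response");
        System.out.println(build(
                "https://login.microsoftonline.com/common/oauth2/v2.0/authorize",
                params));
    }
}
```

The response_type=code parameter is what makes the AD page redirect back with an authorization code, which the app later exchanges for an access token.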
Below steps cover OAuth 2 type MFA and web-browser based flow for authentication, it doesn\u0026rsquo;t require any specific library dependency.\nAndroid Project Setup Build your first app if you haven\u0026rsquo;t build any yet — https://developer.android.com/training/basics/firstapp Understand the MainActivity and Android Manifest file.\nOpen Web View In Android App Open authorization URL in a web-view. For the above Azure AD configuration following is the authorization URL. Ref [1]\nhttps://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id=86501f6d-0498-40ab-b3c9-81ab8ffef1ac\u0026amp;scope=openid\u0026amp;response_type=code Refer to the code below to open a web-view at the start of the main-activity.\npublic class MainActivity extends AppCompatActivity { private static final String TAG = \u0026#34;MainActivity\u0026#34;; //xrapp://auth-response private final String REDIRECT_URL_SCHEME = \u0026#34;xrapp\u0026#34;; private final String REDIRECT_URL_HOST = \u0026#34;auth-response\u0026#34;; private final String AD_AUTH_URL = \u0026#34;https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id=86501f6d-0498-40ab-b3c9-81ab8ffef1ac\u0026amp;scope=openid\u0026amp;response_type=code\u0026#34;; private WebView myWebView; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); myWebView = new WebView(this); WebSettings settings = myWebView.getSettings(); settings.setJavaScriptEnabled(true); settings.setDomStorageEnabled(true); setContentView(myWebView); myWebView.loadUrl(AD_AUTH_URL); } } The android-manifest looks like this. 
Make sure you have the Internet permission\n\u0026lt;manifest xmlns:android=\u0026#34;http://schemas.android.com/apk/res/android\u0026#34; package=\u0026#34;package.userauthenticator\u0026#34;\u0026gt; \u0026lt;uses-permission android:name=\u0026#34;android.permission.INTERNET\u0026#34; /\u0026gt; \u0026lt;application android:allowBackup=\u0026#34;true\u0026#34; android:icon=\u0026#34;@mipmap/ic_launcher\u0026#34; android:label=\u0026#34;@string/app_name\u0026#34; android:roundIcon=\u0026#34;@mipmap/ic_launcher_round\u0026#34; android:supportsRtl=\u0026#34;true\u0026#34; android:theme=\u0026#34;@style/AppTheme\u0026#34;\u0026gt; \u0026lt;activity android:name=\u0026#34;.MainActivity\u0026#34;\u0026gt; \u0026lt;intent-filter\u0026gt; \u0026lt;action android:name=\u0026#34;android.intent.action.MAIN\u0026#34; /\u0026gt; \u0026lt;category android:name=\u0026#34;android.intent.category.LAUNCHER\u0026#34; /\u0026gt; \u0026lt;/intent-filter\u0026gt; \u0026lt;/activity\u0026gt; \u0026lt;/application\u0026gt; \u0026lt;/manifest\u0026gt; Intercept the Redirect URL While configuring AD, we have provided a redirect URL — xrapp://auth-response. Clicking on “Continue” in the last step will redirect to “xrapp://auth-response?code=\u0026lt;value of code\u0026gt;”\nThe android app needs to intercept this URL to read code parameter from it. Following code checks for any URL loading and see if any URL starts with xrapp://auth-response and then fetch code from it.\n... myWebView = new WebView(this); myWebView.setWebViewClient(new WebViewClient() { public boolean shouldOverrideUrlLoading(WebView view, String url) { Uri uri = Uri.parse(url); if (REDIRECT_URL_SCHEME.equalsIgnoreCase(uri.getScheme()) \u0026amp;\u0026amp; REDIRECT_URL_HOST.equalsIgnoreCase(uri.getHost())) { String code = uri.getQueryParameter(\u0026#34;code\u0026#34;); return true; } view.loadUrl(url); return true; } }); ... 
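The interception logic above can be tested off-device with a small, plain-Java sketch; note that java.net.URI stands in here for android.net.Uri (an assumption made for testability — the in-app code keeps using the Android class):

```java
import java.net.URI;

// Desktop-testable sketch of the redirect interception: returns the
// authorization code when the URL matches xrapp://auth-response,
// otherwise null (meaning the WebView should keep loading the URL).
public class RedirectInterceptor {

    static final String REDIRECT_URL_SCHEME = "xrapp";
    static final String REDIRECT_URL_HOST = "auth-response";

    public static String extractCode(String url) {
        URI uri = URI.create(url);
        if (REDIRECT_URL_SCHEME.equalsIgnoreCase(uri.getScheme())
                && REDIRECT_URL_HOST.equalsIgnoreCase(uri.getHost())
                && uri.getQuery() != null) {
            for (String pair : uri.getQuery().split("&")) {
                String[] kv = pair.split("=", 2);
                if (kv.length == 2 && kv[0].equals("code")) {
                    return kv[1];
                }
            }
        }
        return null;
    }
}
```

In the WebViewClient, a non-null result corresponds to the branch that reads the code and returns true without loading the URL.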
Fetch the access token In the last step, we read the authorization code, lets now fetch an access-token from AD. Here is the URL to fetch the token for above AD setup https://login.microsoftonline.com/common/oauth2/v2.0/token Ref[2]\nAdd an Async Task to fetch a token once the Authentication code is fetched.\n... AsyncTask\u0026lt;String, Void, String\u0026gt; getToken = new GetTokenAsyncTask(); getToken.execute(code); return true; ... Here is the GetTokenAsyncTask\nprivate class GetTokenAsyncTask extends AsyncTask\u0026lt;String, Void, String\u0026gt; { @Override protected String doInBackground(String... strings) { String code = strings[0]; HashMap\u0026lt;String, String\u0026gt; postParams = new HashMap\u0026lt;\u0026gt;(); postParams.put(\u0026#34;client_id\u0026#34;, CLIENT_ID); postParams.put(\u0026#34;grant_type\u0026#34;, \u0026#34;authorization_code\u0026#34;); postParams.put(\u0026#34;code\u0026#34;, code); return performPostCall(AD_TOKEN_URL, postParams); } protected void onPostExecute(String response) { super.onPostExecute(response); String access_token = getValueFromJSONString(response, \u0026#34;access_token\u0026#34;); finish(); } } Fetch the User Profile Using the Access token Once the access token is fetched, get the user profile data. Microsoft has graph APIs to fetch the profile data Ref [3] eg. https://graph.microsoft.com/beta/me/profile/ It would return whole JSON data for the profile.\n... String access_token = getValueFromJSONString(response, \u0026#34;access_token\u0026#34;); result = new GetProfileAsyncTask().execute(access_token).get(); ... The GetProfileAsyncTask fetches the profile data in the background.\nprivate class GetProfileAsyncTask extends AsyncTask\u0026lt;String, Void, String\u0026gt; { @Override protected String doInBackground(String... 
strings) { String access_token = strings[0]; return performGetCall(GET_PROFILE_URL, access_token); } } Show the Username from the profile Let\u0026rsquo;s show the user name from the profile data. For this, we may create another activity to display the text and start it after getting the user name from the response. Don’t forget to add the activity in the android manifest file.\n... result = new GetProfileAsyncTask().execute(access_token).get(); String displayName = getValueFromJSONString(result, \u0026#34;displayName\u0026#34;); displayName = \u0026#34;Welcome \u0026#34; + displayName + \u0026#34;!\u0026#34;; Intent intent = new Intent(MainActivity.this, DisplayResponseActivity.class); intent.putExtra(\u0026#34;response\u0026#34;, displayName); startActivity(intent); finish(); ... \nConclusion This article describes the end-to-end steps to configure an app on Active Directory and build an integrated android app without any proprietary dependencies. In the next few articles, we will discuss how to handle interoperability and build an integrated ARVR app. Next \u0026raquo; Enterprise XR — Interoperability Source code can be found at https://github.com/thinkuldeep/xr-user-authentication/tree/part1\nThis article was originally published on XR Practices Publication\nReferences : Oauth2 Authorization EndPoint OAuth2 Token End Point Using Microsoft Graph APIs https://docs.microsoft.com/en-us/azure/app-service/configure-authentication-provider-aad#register ","id":27,"tag":"xr ; ar ; vr ; mr ; cloud ; integration ; enterprise ; android ; mfa ; tutorial ; technology","title":"Enterprise XR - Multi-Factor Authentication","type":"post","url":"https://thinkuldeep.com/post/enterprise-xr-multi-factor-authentication/"},{"content":"Everyone in India is talking about Jeff Bezos, the richest person in the world, who visited India recently. 
People are talking about the investments he is making in India, about his prediction that \u0026ldquo;the 21st century is going to be the Indian century\u0026rdquo;, and discussing the impact on small businesses. We have seen disruptions led by Amazon in multiple sectors, and now people fear that whatever Amazon touches becomes Amazon\u0026rsquo;s. This fear may be justified, as Amazon always stays ahead of the curve. How could Amazon always be right? What makes Amazon, or Jeff Bezos? What is the success mantra of a company like Amazon?\nThe book \u0026ldquo;Success Secrets of Amazon by Steve Anderson\u0026rdquo; talks about 14 principles that led to Amazon\u0026rsquo;s success. Not all the steps Amazon takes are successful; in fact, Amazon doesn\u0026rsquo;t take a step but takes a journey, which means continuous effort to make the way ahead.\nHere are my top 5 takeaways from the book:\nReturn on Risk (RoR) Evaluate RoR instead of RoI, and invest in risks. The process of evaluating return on a risk improves the risk-taking capability, which is a key factor in the success of a company like Amazon.\nNot doing a thing is equally as risky as doing it. So it is always better to act and learn whether it works.\nExperiment more and celebrate the failure Celebrate your failures and come out of them; make them successful failures. Failure is the biggest teacher in life, so learn from failures. Plan for them, and make the mindset shift. Amazon at first failed with ideas like Amazon Auctions (an eBay competitor), zShops, and the Fire Phone, but all these failures turned into businesses worth billions of dollars now - Amazon shopping, Alexa, and more.\nBet small but on the big idea, and follow it closely and progressively. Free shipping, AWS, and Kindle are examples of the same. They started small but with a big aim in mind.\nDon\u0026rsquo;t be afraid of failures, but focus on making them successful failures. 
Have a dedicated investment in experimentation, and make it part of the culture.\nSpeed up decision making by decentralizing the authority, especially for decisions which can be changed or reversed. It is worth trying them instead of delaying them for multi-layer approvals, and even if they don\u0026rsquo;t work, you have learned from them. Trust your gut and keep trying.\nMove from customer-focused to customer-obsessed business Amazon is not just a customer-focused business; it is actually a customer-obsessed business. Understand your customer well, and stay ahead of what they need. Use technology to overcome traditional barriers and ways of working, and improve efficiency and effectiveness. Reach beyond the expected.\nThink long-term and own the future To stay ahead of the curve, one needs to think long-term and build businesses for future stakeholders. Align your short-term goals with the long-term plan. People will connect with your long-term goal if you promote ownership with them. Promote ownership as a culture: share the long-term benefits with people, make them decision owners, and let them be open and collaborative; after all, they are the future business owners.\nSet the bar high Preserve your beliefs and organization culture; no matter what, keep the standards high and don\u0026rsquo;t dilute the culture in the name of scale. Organic growth is not always viable, thus we need to include people from outside, so hire the best or develop the best. Measure what makes sense; financial data is not the only measure of success.\nConclusion Here I talked about my top 5 takeaways from Amazon\u0026rsquo;s success stories. 
Read the book, and Bezos\u0026rsquo;s letters, to understand more about the success mantra of Amazon.\nIt was originally published at My LinkedIn Profile\n","id":28,"tag":"general ; selfhelp ; success ; book-review ; takeaways ; motivational","title":"Success Mantra","type":"post","url":"https://thinkuldeep.com/post/success_mantra_amazon/"},{"content":"Travel and transportation have faced, and continue to face, disruption. Many of the traditional ways are no longer an option. Earlier our parents used to warn us against taking rides with strangers, but now Uber/Ola/Lyft/Didi, etc. are the preferred ways. Owning a car may be an absurd idea now. We are now dreaming of autonomous vehicles, flying cars, Hyperloop, and space travel in the near future. Do read how user interactions have evolved with time in my earlier post.\nFaster travel vs transportation without travel Disruption from faster travel is not the only thing: what if we don’t need to travel at all, and instead a virtual model of us gets ported to the destination and serves the purpose of the travel? Possibilities of virtual model transportation are not too far away (Ref: Microsoft Holoportation, Ref: Teleportation Conferences), so the travel industry will not just get disrupted by faster, convenient, and eco-friendly modes of travel but also by the No-Travel or Virtual Travel mode. It will be just like what (electronic/mobile)-shopping did to window-shopping. However, window-shopping still co-exists, and a lot of investment is being made in the user experience and in solutions which make a smoother transition across e/m/w-shopping. In this post, we will discuss some ARVR use-cases for the travel and transportation industry which may exist in the coming future as disruptors or enablers. 
but before that let’s take a look at the tools available in this space.\nTools for Extending the Reality: Heads Up Display (HUD) It extends contextual information in front of drivers/workers, so that they can make quick decisions and stay aware of the situation. Other vehicle sensors (GPS, camera, speed, IMU) help in building the context environment. The HUD has been in the industry for long, and due to this presence it has the potential to augment more information. HUD — Photo Credit: Amazon.in\nHead-Mounted Displays and Smart Glasses Smart glasses and head-mounted displays can help workers while their hands are busy performing the job. Smart glasses such as Google Glass, Vuzix, and Epson are good for displaying multimedia information right in front of the user’s eye; however, head-mounted devices like Microsoft HoloLens, Magic Leap, and Lenovo ThinkReality A6 are more capable devices: they detect and interact with the environment and provide an immersive experience to the users. Normal-looking glasses are being built, and soon we will have smart glasses in a normal form factor. Photo Credit: https://www.lenovo.com/ww/en/solutions/thinkreality\nVR Headsets A virtual reality headset creates a virtual environment and provides an immersive experience to the end-users in that environment. It could be a training environment, a product demonstration, or a simulation of a real environment.\nAR Capable Smartphone A smartphone will still play a key role in the industry due to its omnipresence; AR applications on phones are the cheapest and easiest way to adopt extended reality use cases. 
The applications built on WebXR are also well demonstratable on the phone.\nLook at the more tools such as connected wearables, holographic projections in the attached slides.\n Think beyond keyboard and screen - dev fest 2019 Use-cases using Extended Reality Tech Entertainment — Make Journey not Destination Currently, most of the travel businesses are focusing on reaching a destination faster and efficient but looking at the future disruptions, the focus would shift towards making the journey interesting.\nA lot of data is collected and used while a vehicle is moving from pickup to drop locations, this data can be used in making the journey interesting with\nARVR technology, sharable cloud AR anchors may be tagged with GPS and can be augmented in the environment when the user watching using the AR-enabled devices. Based on positioning anchors multiple users may also collaborate and participate in social AR networks, where user-specified AR anchors may be managed. So businesses around AR anchors storage would have great potential in the future, it can be monetized with ads etc. PC: https://www.vrnerds.de/google-i-o-2018-cloud-anchors-und-ar-maps/\nFM radio or Video Streaming of a favorite web series will not be the only options\nThe future vehicles may be seen as an embedded VR studio, where travel from your home to the airport may be experienced as a journey of a theme park, or a journey by helicopter, a journey on mars or any journey you may think of. One of the concepts is as Holoride PC: Holoride\nLocate and Map We are talking about self-driving cars and autonomous vehicles, which uses techniques like Simultaneous Localization and Mapping(SLAM) to build the virtual map and locate objects in the environment. 
A moving vehicle also generates and consumes a lot of data, which can be augmented on the environment just like ARMaps from Google, a virtual positioning system on GPS.\nThe ARMaps can be used for various use-cases for AR localization — Exactly locate the passenger(driver app), plan/auto adjust the routes, locate my car in AR (passenger app), etc. In the ride-sharing platform such as Uber, it is not always easy to find your car in a crowded space, it would be great if we have a feature where we can see our car highlighted in an augmented way, and also driver sees the passenger highlighted (may bigger than usual). PC: https://www.businessinsider.in/tech\nThis can be extended when a passenger is given an option to experience the car, where (s)he can see the 3D model and information of the car \u0026amp; the driver, and even they can try the car by getting inside the car in ARVR before it comes to pick up.\nMore and more AR services for map and location can be built, when we integrate it with other data. Eg. find nearest charging stations, service stations, fuel stations, etc and visualize them on Visual Positioning System.\nAugmented Safety Notifications Using the ARVR tech, we may locate/augment information for signboards, directions, and the way finders, also instruct the users, drivers, and passengers about the safety on their AR-enabled devices such as HUDs on car, or smart glasses. 
Drivers can see red lights and traffic situations in the HUDs.\nWith advanced Computer Visions and AI/ML techniques, the user may be instructed about potential emergency situations ahead based on the speed, temperature, weather, location, traffic, battery status, etc.\nSee what Nissan unveiled — I2V (Invisible to Visible), uncovering the blind spaces and making the drive safer and smoother.\n Nissan unveils Invisible-to-Visible technology concept at CES YOKOHAMA, Japan - At the upcoming CES trade show, Nissan will unveil its future vision for a vehicle that helps drivers… global.nissannews.com\n Service and Maintenance There are some great uses of ARVR in service and maintenance assistance especially when the worker’s hands are occupied and he/she needs assistance. The travel and transportation industry may also use the technology for the below use cases.\nA stepwise task workflow could be implemented from task assignment to task completion, where a worker gets a task to perform in the Headset with detailed steps in form of text, audio, video, animations or augmented anchors/markers on the real-world objects. It not only guides the worker right in the working context but also may record the worker’s progress. Task completion can be plugged with proof of completion in the form of a checklist and a snapshot. It improves both efficiencies as well as the accuracy of the process. PC: https://sdtimes.com/softwaredev/the-reality-of-augmented-reality/\nIf they still need assistance, they may get remote assistance from the experts and share what they see with the expert (via front camera feed). Experts can assist the worker with voice, and also can draw/annotate on the environment of the worker. 
A worker may browse detailed annotated augmented manuals right in front of their eyes by scanning the objects/codes or by talking to the devices.\n Hyundai makes owner\u0026rsquo;s manuals more interesting with augmented reality Augmented reality showrooms are one thing, but Hyundai using that tech to make learning about your new car more… www.engadget.com\n Augmented manuals may provide internal details of the object without getting inside, and may also guide about possible defects and anomalies just by looking at the objects.\nProduct demonstrations and Setup may also utilize the technology at scale. Watch Hyundai\u0026rsquo;s AR sales tool, demonstrating the features.\nSimulation-based training - Driver training to handle different driving situations in a simulated VR environment may improve safety and efficiency.\nVR Control Centers Control centers gather a lot of data from the fleet they operate; they need to visualize the data in multiple forms to get different aspects of the situations. There are multiple people with different roles interested in different types of data. Multi-View Virtual Desktops — A VR data visualizer could be a great help to create virtual views based on the dynamic need at the control center, without the need for extra hardware. It would help in taking decisions faster and getting more clarity about the situation.\nPrivate Desktop — A virtual visualizer also addresses privacy concerns, as the data is visible only to the user who is wearing the headset.\nConclusion This post covers the use-cases of Extended Reality in the Travel and transportation industry. 
XR (AR-VR-MR) will be one of the key disruptors as well as enablers for the future businesses of the industry.\nThis article was originally published on XR Practices Publication\n","id":29,"tag":"xr ; ar ; vr ; mr ; travel ; transportation ; usecases ; enterprise ; technology","title":"Augmenting the Travel Reality","type":"post","url":"https://thinkuldeep.com/post/augmenting_the_travel_reality/"},{"content":"This article is in continuation of Static Code Analysis for Unity3D — Part 1, where we talked about setting up a local sonar server and sonar scanner, and running the analysis for a Unity3D project.\nIn this article, we will discuss setting up static code analysis with SonarQube in the IDE — Rider. We are using Rider as the “External Script Editor” in Unity. Configure Rider in Unity \u0026gt; Preferences \u0026gt; External Tools \u0026gt; External Script Editor.\nInstall SonarQube Plugin For Rider Go to JetBrains Rider \u0026gt; Preferences \u0026gt; Plugins \u0026gt; Marketplace \u0026gt; Search SonarQube You will find the “SonarQube Community Plugin” \u0026gt; Click Install. It will ask you to restart the IDE to complete the installation. On successful restart you should see it installed. Configure SonarQube Plugin We need to connect to the sonar server to perform local analysis. Here are the steps to follow.\n Go to JetBrains Rider \u0026gt; Preferences \u0026gt; SonarQube — Click Add and provide details of the local sonar server. Refer to the previous post for more details. Click on the + icon for SonarQube resources\n Click Download Resources and select the Sonar project that we have added — UnityFirst For local analysis, we need to provide a local script that will be called when ‘Inspect Code’ is performed in the IDE. So create a script as follows, covering all 3 steps that we discussed in the last post.\n Add a local analysis script — give it a name, the path of the script, and an output file. 
We are now ready to analyze the code.\nStatic Code Analysis in Rider Open the unity project in Rider. We have ButtonController.cs created in the last post; let’s analyze that.\n Go to JetBrains Rider \u0026gt; Code \u0026gt; Run Inspection By Name (Option+Command+Shift+I) \u0026gt; Search for Sonar\n Select “SonarQube (new Issues)” \u0026gt; Select Inspection Scope as Whole Project It will run the code analysis, just like we did from the command line in the last post. To analyze the issues inline in the files, Go to JetBrains Rider \u0026gt; Code \u0026gt; Run Inspection By Name (Option+Command+Shift+I) \u0026gt; Select “SonarQube” \u0026gt; Select Inspection Scope as Whole Project Issues are available in the Sonar Inspection windows and inline. We can further configure rules on sonar to include and exclude files and much more. Each analysis also gets pushed to the local sonar server Conclusion In this post, we have learned how to use SonarQube on a Unity project in the IDE.\nThis article was originally published on XR Practices Publication\n","id":30,"tag":"xr ; unity ; sonarqube ; static code analysis ; ide ; tutorial ; practice ; technology","title":"Static Code Analysis for Unity3D — Part 2","type":"post","url":"https://thinkuldeep.com/post/static_code_analysis_unity_2/"},{"content":"Static code analysis is a standard practice in software development. There are code scanner tools which scan the code to find vulnerabilities, and there are some nice tools for visualizing and managing code quality. One of the most used tools is SonarQube, which supports 25+ languages and flexible configuration of rules.\nThere are not enough resources talking about static code analysis for Unity3D. This post covers the steps to configure SonarQube and use it for scanning Unity projects.\nSonarQube Server Setup SonarQube requires a server setup where it manages code quality analysis, rule configuration, and extensions. Follow the below steps to install and configure Sonar for local use. 
Make sure you have Java 8+ installed on your PC.\n Download SonarQube — https://www.sonarqube.org/downloads/ - Download Community Edition Unpack the zip sonarqube-8.0.zip as Directory SONAR_INSTALLATION/sonarqube OS-specific installations are available in the bin directory For Unix-based OS, give execute permission: chmod +x SONAR_INSTALLATION/sonarqube/bin/\u0026lt;os-specific-folder\u0026gt; Start the Sonar Server — eg. SONAR_INSTALLATION/sonarqube/bin/macosx-universal-64/sonar.sh console Sonar Server is ready to be used at http://localhost:9000 with credentials admin/admin Set up your first project on SonarQube. — Click Create + on the top right It will ask you for a token which may be used to securely run the analysis on the sonar server. For now, leave it at this step; we will use the user credentials admin/admin for simplicity. This project is created with the default rule sets and quality gates. Remember the project key. Sonar Scanner Setup The Sonar Scanner is needed to statically analyze the code against the rules on the sonar server and then push the reports to the server. Follow the steps below to set up the Sonar Scanner. Ref : https://docs.sonarqube.org/latest/analysis/scan/sonarscanner-for-msbuild/\n Download Sonar Scanner — https://github.com/SonarSource/sonar-scanner-msbuild/releases/download/4.7.1.2311/sonar-scanner-msbuild-4.7.1.2311-net46.zip\n Unpack the zip as Directory SONAR_SCANNER_INSTALLATION/sonarscannermsbuild\n For UNIX-based OS give execute permissions —\nchmod +x SONAR_SCANNER_INSTALLATION/sonarscannermsbuild/sonar-scanner-\u0026lt;version\u0026gt;/bin/* Sonar setup is ready, let\u0026rsquo;s analyze a Unity Project.\n Analyze Unity Project Create a Unity Project. Below is a simple Unity project with a button which toggles its color on every click. Let’s statically analyze this project.
Follow the below steps :\n Go to the project root — Start pre-processing with the Sonar Scanner — on Windows we can directly run SonarScanner.MSBuild.exe begin /k:\u0026quot;project-key\u0026quot;, which comes with the Sonar Scanner, but on Mac we need to run it with mono as follows.\nmono /Applications/sonarscannermsbuild/SonarScanner.MSBuild.exe begin /k:\u0026quot;UnityFirst\u0026quot; /d:sonar.host.url=\u0026quot;http://localhost:9000\u0026#34; Rebuild Project — MSBuild.exe \u0026lt;path to solution.sln\u0026gt; /t:Rebuild (on Mac, run MSBuild via mono) Post-processing — push the report to the Sonar Server Windows : SonarScanner.MSBuild.exe end Mac: mono SONAR_SCANNER_INSTALLATION/sonarscannermsbuild/SonarScanner.MSBuild.exe end Analyze the code on the Sonar Server — http://localhost:9000/dashboard?id=UnityFirst Dashboard Analyze the issues\n Conclusion In this post, we have learned to set up the Sonar Server and Sonar Scanner and use them for Unity projects, including their usage on Mac. The next post talks about setting it up in the IDE to perform inline code analysis.\nThis article was originally published on XR Practices Publication\n","id":31,"tag":"xr ; unity ; sonarqube ; static code analysis ; tutorial ; practice ; technology","title":"Static Code Analysis for Unity3D — Part 1","type":"post","url":"https://thinkuldeep.com/post/static_code_analysis_unity_1/"},{"content":"The inception exercise is very common in software development, where we gather the business requirements and define the delivery plan for the MVP (Minimum Viable Product) on a shared understanding. There is a set of tools and practices available to define the MVP requirements and its delivery plan. Popular tools are Lean Value Tree, Business Model Canvas, Plane Map, Elevator Pitch, Futurespective, Empathy Maps, Value Proposition Canvas, User Journey Map, Gamestorming, Process Diagram, MoSCoW, Communication Plan, Project Plan etc.\nWe have recently concluded an Inception Workshop at ThoughtWorks, by Akshay Dhawle, Smita Bhat and Pramod Shaik.
It was a 3-day workshop, full of exposure to different tools, exercises and role-plays. Thanks to the mentors and participants for making it so smooth and fun. We learned about many different aspects and situations of an inception. The inception may go wrong if we don\u0026rsquo;t care about the intricacies, no matter how well equipped you or your team are.\nHere are my key takeaways from the workshop :\n Choose the tool wisely; the tool should simplify the process and help us uncover the hidden aspects. A wrong tool selection may take the exercise completely off track. The inception team must be comfortable with the selected tools and practices; it is better to do a rehearsal using the selected tool or practice. It is advisable that the inception team members work together for at least a week or so to understand each other and build synergies among themselves. Always go prepared with a detailed inception plan and an objective for each day, and do a rehearsal with the inception team before facing the client. Do a retrospective even if you think that things are going well, to understand the team\u0026rsquo;s performance and course-correct in time. Have a backup plan for almost everything, from people/resource unavailability to the selection of tools. Be flexible, and get ready to adapt to new plans. Define drivers for the different exercises, as well as for the overall inception: who owns the facilitation. Define the roles and expectations upfront; however, as stated earlier, always have a backup. If we get stuck with a tool or situation, the driver needs to push the team forward. Inception is a collaborative exercise; the decisions should be taken in common understanding with the client\u0026rsquo;s inception team, and there should not be any last-minute surprise for the stakeholders. Utilize the client\u0026rsquo;s time optimally, and try to finish in the agreed time. Do your homework, and go with a well-thought-out understanding of the problem. Have a domain SME in the team.
Have a diverse team for the inception. Parallelise the inception activities whenever required. Get the final presentation deck reviewed with the client stakeholders before presenting it to a larger audience. Record meeting notes and build/update the agreed artifacts after the sessions. Don\u0026rsquo;t delay building showcases/artifacts till the end. Divide and conquer. Share the notes with all the involved stakeholders. Have review checkpoints with the client at different stages of the inception. While presenting the outcome to the client, it is recommended to present the sections which you were part of; otherwise, get a knowledge dump from the authors before the presentation. Don\u0026rsquo;t over-commit to the client; be honest and transparent. Clearly call out pre- and post-inception activities. Don\u0026rsquo;t ignore any inputs from the team; keep them in the parking lot and review it often. Timebox the exercises, and don\u0026rsquo;t get stuck on one exercise. If an exercise does not complete or produce optimal output in a reasonable time, have a plan for alternatives. Conclusion An inception is planned to bring stakeholders to the same understanding of the product and the delivery roadmap. The focus should be on the inception goal, rather than on the tool selection. The tool is just a medium; use the simplest tool the team is comfortable in. Facilitate the inception as a collaborative exercise, in which all the participants can comfortably participate and agree.\nThis article was originally published on My LinkedIn Profile\n","id":32,"tag":"thoughtworks ; inception ; workshop ; takeaways ; technology","title":"Inception Workshop | Key Takeaways","type":"post","url":"https://thinkuldeep.com/post/inception_workshop_key_takaways/"},{"content":"Technology defines our interactions with the ambiance, and the way we want to interact with the ambiance further defines the technology.
The evolution of technology and of user interactions tightly depend on each other.\n \u0026ldquo;Walls have ears!\u0026rdquo; is very much true today, and not just that, now the walls can see you, feel your presence and talk to you.\n In this post, we will discuss how user interactions evolved with time and what is next!\nEvolution of Technology and User Interactions Technology evolves to provide better user interactions, and users always expect more from technology. Technology has evolved by rethinking user interactions beyond the boundaries of traditional technologies. The speed of technology evolution is increasing day by day, and so are user interactions. See the chart below. With all the technological advancements, user interactions have evolved from almost no interaction to immersive interactions. Following are the different stages of this evolution.\nNo Interactions We started our journey with almost no interaction in the Batch Age, where software programs were fed as punch cards into the computer (such as the UNIVAC, IBM029), and you got to know the results after a couple of hours or days. It continued till the early 60s.\nAttentive Interactions With a few more enhancements in technology, the display screen and keyboard provided a better user experience. In this Command Line Interface (CLI) Age, users were prompted whenever an alert or information had to be displayed or input needed to be taken for the next set of processing. Apple and Xerox invested heavily, and most of this era\u0026rsquo;s products were not commercially successful, but they created a path for the next technology evolutions.
Microsoft\u0026rsquo;s DOS became a great software base in the CLI age.\nWYSIWYG Interactions The Graphical User Interface (GUI) Age started in the early 80s, when technology was focusing on the window-based interface and form/editor-based UI, which is called What You See Is What You Get (WYSIWYG) interaction.\nXerox and Apple made a few costly attempts, such as the Xerox Star and Apple Lisa. GUI was popularized by Microsoft and Apple in the mid-80s with Windows 1.0 and the Macintosh, and the race continued. Software and hardware technologies improved with time, and the GUI age continued across multiple screen sizes.\nSeamless transition After nearly 1.5 decades of technology enhancements in the GUI age, the mobile phone and laptop started taking shape. The Internet Age started, and user interaction started shifting from desktop-based GUI to web or mobile app based GUI, especially after AJAX, which promised desktop-like partial GUI rendering.\nPC : http://www.mynseriesblog.com/category/nseries-site-news/\nA need for seamless transitions of the GUI across devices was the increasing focus of this era. The mobile device was getting powerful and became a commodity, so mobile-first GUI became popular. Most of the devices were relying on a keypad and display screen. Nokia and Motorola were the pioneers in cell phones. The QWERTY keypad was also made quite popular by BlackBerry. Laptops and tablets with external keyboards almost replaced the PC market.\nTouch and Feel Interactions The touch screen interface was popularized by the Apple iPhone in 2007; Samsung followed, and the race of touch and feel still continues, with many more players.\nPC: https://www.pastemagazine.com/tech/gadgets/the-20-best-gadgets-of-the-decade-2000-2009/?p=2#7-iphone-2007-\nThe winner was again \u0026ldquo;ease of doing\u0026rdquo;: touch screen interactions surpassed physical-keypad phones, while Nokia and BlackBerry were still relying on the haptic feel of a physical button.
The Internet Age continues with better internet, better devices and better interactions, with the help of technologies like the Internet of Things and Cloud. The phone became a smartphone in this era.\nThe Internet Age was transitioning into the Interactions Age, with devices getting closer to humans, such that we can wear them and feel them. The smartphone became the enabler of other interactions with wearables and other sensing devices. The smartphone serves as the processing unit, controller and internet gateway for the wearable devices. Lightweight communication protocols were invented (MQTT, BLE, etc.) to communicate with these IoT/wearable devices. We have seen some successful Smart Glass and SmartWatch products.\nIn this Interactions Age, we trust the technology and use it so heavily that all our interactions with devices are getting recorded, and data is increasing multifold. Data will become the next power. In this age, traveling with unknowns is no more a concern; we trust Uber and OLA more than before, as we know the data is being recorded somewhere, and we feel technology is inducing transparency into the system. We rely more on text than on phone calls, and social media is getting powerful due to this.\nImmersive Interactions The Interactions Age will continue, and interactions are going to be immersive, where we interact with the virtual elements of the ambiance assuming they are real, and we feel immersed in the virtual environment.\n Conversational UI - Conversational interfaces, where we talk/text to virtual bots and they respond just like a real human. We don\u0026rsquo;t need to go to a website and find the services we need on a bulky portal or an app; rather, we just send text or voice to a bot, and the bot responds with the appropriate result. Devices like Google Home and Amazon Echo are getting popular with just a voice interface.
IoT empowers CUI to build a lot of enterprise use cases.\n Virtual Reality - VR interactions happen in a completely computer-generated environment, which is a real-like environment. VR is getting popular in the education and training simulation domain. Travel and real-estate are also utilizing it for sales and marketing, where users can try the services in a virtual environment before buying them. VR therapy is getting popular in the healthcare sector. Google, Samsung, Sony, HTC and Facebook are building heavily on it. Augmented Reality (AR) - Augmented Reality augments information onto the real world. The technology can assist a worker by providing needed information while his/her hands are busy; a worker can talk to a head-mounted device/smart glass like Google Glass and get assistance, which is called Assisted Reality. When an AR device generates a computer-generated environment on top of the real world, and lets us interact with these computer-generated elements in the context of the real world, it is called Mixed Reality (MR). Microsoft HoloLens and Magic Leap are some popular MR devices, but they are quite costly. With increasing AR support on mobile phones, AR mobile apps are getting popular, and AR will outpace normal app development. Future mobile apps will be AR enabled. Google is building VPS (an AI-powered Visual Positioning System) to empower Google Maps with Augmented Reality. PC : Microsoft HoloLens 2\n Extended Reality - In a Mixed Reality environment, when we interact with the real-world environment, we call it extended reality interactions. For example, look at a switch, and blink your eye to switch it on.\n Brain-Computer Interface (BCI) - Multiple lines of research are going on in the area of brain-computer interfaces, where a computer can directly feed into the brain or vice-versa. Facebook is researching thought-based interfaces.
A blind person\u0026rsquo;s mind can be fed with a camera stream to see through the world, and many more. VR and AI will play a great role here.\n Artificial intelligence is enabling immersive interactions to look and feel more real. Hardware is getting foldable and regeneratable (3D printable), and people will wear the devices; hardware is getting so small that it can be taken as a pill and controlled from outside. In short, technology is moving closer to its context and its application. For example: BLE-enabled contact lenses for the eyes, a printed sticker for monitoring BP and heart rate, smart shoes to direct your foot to move in the target direction, smart glasses to record what you view, smart gloves to help you work, a smart shirt to make you comfortable and control your body temperature; the list is endless.\nThe future will be the age of Artificial Interactions, where even the interactions themselves can be virtual. An authorized machine will do virtual interactions on your behalf.\nThoughtWorks has published \u0026ldquo;Evolving Interactions\u0026rdquo; as one of the seismic shifts. Do read it.\n We must evolve our thinking—and our capabilities—beyond the keyboard and the screen. Mixed reality will become the norm\n Conclusion We have discussed how we have evolved our interactions with machines, and how technology is filling the need of the time. A lot of experiments and investments have gone into this journey; every disruptive technology has killed the previous innovation and investments, but it also brings a lot of new opportunities.
Industries and individuals must keep experimenting and investing in new technologies to stay relevant; otherwise, someday you will be abandoned.\nRef and Picture Credits : Microsoft HoloLens 2, and others.\nThis article was originally published on My LinkedIn Profile and XR Practices Publication\n","id":33,"tag":"xr ; ar ; vr ; mr ; evolution ; transformation ; ux ; user-interactions ; general ; technology","title":"Evolution of Technology and User Interactions","type":"post","url":"https://thinkuldeep.com/post/evolution_of_technology_and_user_interactions/"},{"content":"IT modernization is one of the key factors of today\u0026rsquo;s digital transformation. Most enterprises are trying to define a digital transformation strategy. IT organizations are trying to redesign themselves to meet the changing business needs, and uplift the technology and people. Non-IT organizations are also trying to set up a digital department to take care of the company\u0026rsquo;s digital assets and modernize them. However, IT modernization does not always result in a profit; it may lead to revenue loss if it does not fit the organization design.\nHere is an example statement that you may get from your client:\n We have revamped our old mobile site to a new progressive web app, and we are observing a 6% drop in conversion rate, impacting the revenue.\n Assume that the client is a non-IT organization that has a digital department taking care of the digital assets (website, app, mobile site etc.), and is also trying to upgrade the assets to the latest technologies.\nHere are the general patterns and observations, which may be the major causes of the revenue drop even after the tech modernization.\nIt\u0026rsquo;s a matter of changing the technology IT modernization is generally considered just a matter of changing technology. But it first requires people who accept the change and take it forward.
We have observed missing ownership from the stakeholders; the modernization was seen as enforced on the teams rather than embraced. Most of the team was new to the digital department, and the existing people were holding a lot of history and knowledge which they did not want to expose, for fear of being devalued.\nSometimes an organization tries to project itself as an agile organization that aims to change anything at any time, but this becomes an anti-pattern.\nActivity Oriented Teams The teams are structured on activity and not on outcome; eg - Product Design, Development, Test, and DevOps are working in silos. It is hard to sync up on the quality of the outcome. Team structure should be based on outcome and business value. Refer to my earlier post.\nFor example, you noticed that 10-15 AWS instances were getting spawned every 24 hours, and the existing instances were getting killed after heavy memory utilization. It went unnoticed for long because, as per the DevOps SLA, auto-scaling was working fine and the site never went down, while the memory utilization issue was a headache of the development team. The development team is not interested in monitoring production health as it comes under the DevOps purview.\nCross-functional knowledge is missing in such teams, and there is a lot of people dependency, which hinders taking a feature from business analysis to deployment.\nInadequate Development Practices It is a pattern that when standard development practices are missing, the feedback cycles to developers are too long. Most of the issues are caught in production. Test automation and the CI/CD pipeline are not up to date, so developers do not trust and maintain the test suites. It becomes a practice to bypass the test suite and deploy directly to pre-production or production.
This is sometimes a result of top management putting a lot of pressure on quantitative delivery rather than on quality delivery.\nYou will see that the client has a history of uncaught broken functionalities for long periods of time, which would also be a reason for users not to trust the website, and eventually impact the conversion rate and revenue.\nProduct and Business Observations We have noticed the following observations from the business and product perspective.\n No business continuity - It is observed that the application is not stable enough, and there is no mechanism to keep the user engaged if the user has lost the product order journey. User experience - There are some major usability issues in the new app, such as: a lot of clicks are required to customize the product and order, and redundant information placement. Integration Issues - A history of issues with 3rd-party gateways, and it takes too long to identify such issues. Define ways to measure the 3rd-party systems\u0026rsquo; performance and hold them responsible for revenue loss if they do not follow the agreed SLAs. Technology Observations Following are the technology observations, mainly due to taking technology decisions without proper ownership and knowledge. Technology choices are followed for the sake of changing to a new concept. Eg: Server-Side Rendering on the PWA was not well evaluated but applied.\n Memory Leaks - As mentioned in the earlier example, AWS instances are getting killed after running out of memory. There is an exceptional increase in NodeJS server memory, which leads to crashes every 2-5 hours. It is mainly due to: a) too many global variables declared at the node level, b) unwanted setTimeouts written to cancel each fetch request. After fixing the leaks, the node instances may run in just 300 MB of memory where they earlier took 20 GB. Older Tech Stack - NodeJS, NextJS and React are at older versions. Request traceability across the app is missing.
There was load on the API servers due to unnecessary calls from the UI. Incorrect Progressive Web App strategy. Issues in the CloudFront caching strategy. Incorrect auto-scaling strategy: instead of fixing the memory leak issue, an unnecessary auto-scaling configuration was done, which gracefully scales a new instance after every 10000 requests to a node. Missing infra health monitoring alerts. Conclusion The success of IT modernization is closely linked to the organization design; it may not always lead to success if the changes it requires are not accepted and owned by people. Just changing the technology will not lead to IT modernization. New technology comes with great promises, but it also comes with its own challenges. We must always choose a new technology with the right tooling to address the challenges.\nThis article is originally published on My LinkedIn Profile\n","id":34,"tag":"general ; cloud ; digital ; modernization ; technology ; transformation ; experience ; troubleshooting ; agile","title":"Can IT modernization lead to revenue loss?","type":"post","url":"https://thinkuldeep.com/post/can_it_modernization_lead_to_revenue_loss/"},{"content":"I just finished the book \u0026ldquo;Life\u0026rsquo;s Amazing Secrets\u0026rdquo; by Gaur Gopal Das. It covers the topic of balance and purpose in life nicely. A few of the concepts from the book I can very well relate to my current organization\u0026rsquo;s focus on a cultivation and collaboration culture.\nIt\u0026rsquo;s now more than 6 months into my journey with Thoughtworks, an organization which started as a social experiment more than 25 years ago, and believes in balancing 3 pillars in life: sustainability, excellence and purpose.
This makes the organization a living entity, and wherever there is life, it has to face changes and challenges; it\u0026rsquo;s on us how we overcome them and excel at the art of changing.\nHere are my takeaways from the book, which help us grow through life, build social capital, and help a people-centric organization grow.\n Live by the principle of gratitude, which allows us to see the positivity and believe in the trust-first approach. Start keeping a gratitude log. Our attitude towards life affects our social image; speaking with sensitivity is the key while giving or taking feedback. Life can never be perfect, and too much corrective feedback may spoil relationships if one doesn\u0026rsquo;t know the art. Feedback without the correct motive and purpose has no meaning. Think of the higher purpose, look beyond the situations and practise forgiveness, but we must maintain the social decorum of the rules and regulations of society/community or organization. Uplift relationships by give and take; the exchange of thoughts, values and beliefs. Be your own hero; every time you compete with yourself, you will be better than before. Promote a culture of healthy competition. Each one is different and not comparable. Go on a journey of introspection, know yourself and find the purpose of life. Develop good character by belief, actions and conduct. Good character has the ability to change lives. We have 2 ears and 1 mouth, so give them a chance in that proportion; listen before you speak and choose to respond (not react). The book presents a great ideology of the ice-cream, the candle and the oxygen mask, to guide us on a journey from selfish to selfless; a view from family first to serving the nation, and how serving others can be the joy of life. It also talks about the role of spirituality in our life. Overall it is a great book, and I recommend it.
This article is originally published on My LinkedIn Profile\n","id":35,"tag":"selfhelp ; book-review ; thoughtworks ; takeaways ; change ; motivational","title":"Embibe purpose and balance in life","type":"post","url":"https://thinkuldeep.com/post/embibe_purpose_and_balance_in_life/"},{"content":"Blockchain or DLT (Distributed Ledger Technology) is getting good traction in the IT world these days. Earlier, this technology was mostly explored by banks and other finance-related institutions, as in Bitcoin and Ethereum. Now, it is getting explored for other use cases in building distributed applications. Blockchain technology comes with a decentralized and immutable data structure that maintains connected blocks of information. Each block is connected using a hash of the previous block, and every new block on the chain is validated (mined) before being added and replicated. This post is about my learnings and challenges while building an enterprise blockchain platform based on the Quorum blockchain technology.\nBlockchain-Based Enterprise Platform We have built a commodity trading platform that matches trades from different parties and stores related trade data on smart contracts. Trade operations are private to the counterparties; however, packaging and delivery operations are performed privately with all the blockchain network partners. The delivery operation involves multiple delivery partners and approval processes, which are safely recorded in the blockchain environment. The platform acts as a trusted single source of truth while, at the same time, keeping data distributed in the clients\u0026rsquo; autonomy.\nThe platform is built on Quorum, an Ethereum-based blockchain technology supporting private transactions on top of Ethereum. A React-based GUI is backed by an API layer of Spring Boot + Web3J. Smart contracts are written in Solidity.
Read more here to start on a similar technology stack and get to know the blockchain.\nThe following diagram represents the reference architecture: Blockchain Development Learnings Blockchain comes with a lot of promises to safely perform distributed operations on a peer-to-peer network. It keeps data secure and immutable, so that data written on a blockchain can be trusted as the source of truth. However, it comes with its own challenges; we learned a lot while solving these challenges and making the platform production-ready. These learnings are based on the technology we used; however, they can be correlated to any similar DLT solution.\nThe Product, Business, and Technology Alignment Business owners, analysts, and product drivers need to understand the technology and its limitations — this will help in proposing a well-aligned solution supported by the technology.\n Not for the real-time system — Blockchain technology supports eventual consistency, which means that data (a block) will eventually be added/available on the network nodes, but it may not be available in real-time, because it is an asynchronous system. Products/applications built using this technology may not suit a real-time system where end-users expect the immediate impact of an operation. We may end up building a lot of overhead to make real-time end-user interfaces by implementing polling, web-sockets, time-outs, and event-buses on smart-contract events. The ideal end-user interface would have a request pipeline where the user can see the status of all their requests. Only once a request is successful will the user expect the impact. There is no rollback on the blockchain due to its immutability, so atomicity across transactions is not available implicitly. It is a good practice to keep each operation as small as possible and design the end interface accordingly.
Blockchain’s transaction order is guaranteed, making it more of a sequential system; if a user submits multiple requests, they will go into the chain sequentially. (Private Blockchain) Each user/network node has smart contract data only for those transactions on the smart contract in which the node has participated, so a new node/user added to the contract would not see the past data available on the contract. This may have implications on business exceptions. Backward compatibility is an anti-pattern on the blockchain. It is better to bind business features to contract/release versions, so that new features are only available in the new contract; otherwise implementing backward compatibility takes a huge effort. Architecture and Technology Learnings Think multi-tenancy from the start — A blockchain system is mostly about operations/transactions between multiple parties and multiple organizations. Most enterprise platforms have multiple layers of applications, such as the end-user client layer (web, mobile, etc.), API layers, authentication and authorization, indexes, and integrations with other systems. It is wise to think of multi-tenancy across the layers from the start of the project and design the product accordingly, instead of introducing it at a later stage. Security does not come on its own — Blockchain-based systems generally introduce another layer of integration between the API and data layers. We still need to protect all the critical points at multiple network nodes owned by different participants. So, defining the security process and practices is even more important and complex for blockchain-based solutions than for a classic web application. Make sure these practices are followed on all the participants\u0026rsquo; nodes. A hybrid solution — We might need to keep a data index/cache (SQL or NoSQL) to meet the read needs of multiple users, as reading every time from a smart contract may not meet the performance criteria.
The index/cache building module may subscribe to the smart contract transaction events and build a data index/cache to serve as a read copy of the blockchain data. Each participant\u0026rsquo;s node will have its own index and the end-user interface of that participant organization. It definitely adds complexity to maintain the index, but until blockchain technology becomes as fast as reading from a cache, we have this option to go the hybrid way (blockchain + read indexes). We can replay smart-contract transaction events as and when required; this helps in rebuilding indexes/caches in case we lose the index. Carefully add business validation in the smart contract — we may add business data validation in the smart contract itself, but it has a transaction cost (performance cost), and more complex validation may lead to out-of-gas issues; also, getting a detailed exception from the contract is not fully supported. If we are maintaining indexes, then we may do validation before making the contract transactions by reading from the indexes, but this is not a foolproof solution either, as there might be race conditions. The ideal way is to perform business validation before, and if a race condition occurs, just override/ignore the operation, choosing the approach wisely based on the business impact. Fear of Losing Transactions — What if we cannot process a transaction due to an error? Depending on the case, we either need to wait for the problem to be fixed and halt all further events, or just log the error and proceed. As mentioned in the previous section, choose to override/ignore wisely; it may have a significant business impact. Smart Contract Limitations — Solidity has limitations on the number of parameters in a method, stack depth, string handling, contract size, and gas limits. So, design contract methods accordingly. Tracing — Think of application tracing from web/mobile \u0026lt;-\u0026gt; API server \u0026lt;-\u0026gt; blockchain \u0026lt;-\u0026gt; indexes from the start.
Each request should be traceable. Quorum doesn’t support any tracing/message header kind of mechanism to send a correlation identifier; however, we may link an application trace id to the transaction hash. Configurability/Master Data Management — Every enterprise system needs some master data that is used across the organizations; since participants\u0026rsquo; nodes are decentralized, this data needs to be synchronized across nodes. Devise a mechanism to replicate this data across the nodes. Any inconsistency may result in the failure of transactions. Smart Contract Backward Compatibility — This is the most complex part of the contract. We need to write a wrapper contract on the older contract and delegate calls to the old one. Note that we cannot access the private data of the existing contract, so we need to implement all the private data in the wrapper contract and conditionally delegate calls to the old contract. Managing event listeners for different versions is also complex; we may need to keep multiple versions of event listeners in the code base to support multiple versions of the contracts. We also need to make sure all the participants are on the same version of the contracts. It would be better if we bound business features to the contract version; in that case, any new transaction operation would simply not be available on the old contract. Scaling the API/Client Layer — Since the blockchain processes transactions in a sequential manner, scaling the API/client layer is complex; we need to implement some locking mechanism to avoid the same transaction getting performed from multiple instances of the API layer. Deployment is another challenge here. A feature toggle on the contract side is also quite complex. Testing Learnings Solidity unit tests take a lot of gas as the functionality grows, so it is better to use JS API-based tests for contract unit tests. Define the contract migration testing approach and its automation up front. 
Automate environment preparation, as testing the functionality needs multiple tenants\u0026rsquo; nodes to interact. API integration tests need to poll the other participants\u0026rsquo; APIs to see the expected impact; this increases API test development time as well as execution time. Define an approach for test automation around index/cache rebuilding. The version-based feature toggles in test artifacts are complex to maintain. Automate meta-data management across the nodes. Test the CI/CD scripts themselves; the scripts grow more complex with time, and the impact of any issue in them is critical. Continuous Integration and Deployment Define the blockchain node deployment strategy upfront (private/public cloud); you may not be able to change it once it is deployed without losing data. Securely store secrets on the cloud, and have a process for access management reviews. Analyze the impact of secret rotation on the blockchain node setup and automate the process. Define a backup and restore strategy for the blockchain. Make sure all the nodes are on the same version of the blockchain and software artifacts. High availability is a challenge when a new deployment has to happen, as we need to stop the service completely before deploying the new version to avoid corrupting the indexes/blockchain through the intermediate state of operations (multi-transaction operations). Project Management The blockchain is still a growing technology. It is better to have a dedicated sub-team that takes care of blockchain research-related work and defines best practices for the project and industry. A dedicated infra team is needed to manage and improve the CI/CD environment. Plan for security testing resources and audits, a legal point of view on GDPR and storing data with blockchain, coordination with each participant’s infra team, and a support SLA with each participant. 
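The hybrid read-index approach and event replay described above can be sketched without any blockchain library. This is a minimal illustration only: `TransferEvent` and `ReadIndex` are invented names, standing in for a contract event and the off-chain index module. The subscriber applies each event to a local store, and the same apply function replays the full event log to rebuild a lost index:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical contract event: an ownership transfer recorded on the chain.
record TransferEvent(String assetId, String newOwner) {}

// Read-side index kept outside the chain (the "hybrid" part).
class ReadIndex {
    private final Map<String, String> ownerByAsset = new HashMap<>();

    // Called by the event subscriber for each new contract transaction event.
    void apply(TransferEvent e) {
        ownerByAsset.put(e.assetId(), e.newOwner());
    }

    // Rebuild a lost index by replaying the full event log from the chain.
    static ReadIndex replay(List<TransferEvent> eventLog) {
        ReadIndex index = new ReadIndex();
        eventLog.forEach(index::apply);
        return index;
    }

    // Fast read serving end users, instead of querying the contract each time.
    String ownerOf(String assetId) {
        return ownerByAsset.get(assetId);
    }
}
```

Because blockchain transaction order is guaranteed, replaying the log in order always yields the same index, which is what makes the rebuild safe.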
This article was originally published on Dzone\n","id":36,"tag":"blockchain ; java ; experience ; learnings ; micro-services ; architecture ; spring boot ; technology","title":"Lessons Learned From Blockchain-Based Development","type":"post","url":"https://thinkuldeep.com/post/learnings-from-blockchain-based-development/"},{"content":"The book \u0026ldquo;Agile IT Organization Design - For digital transformation and continuous delivery\u0026rdquo; by Sriram Narayan explains software development as a design process. The following statement from the book means a lot.\n \u0026ldquo;Software development is a design process, and the production environment is the factory where production takes place. IT OPS work to keep the factory running..\u0026rdquo;\n It questions the way we traditionally think of operations support, maintenance, production bugs, rework, their costs and budgets, and where the focus lies in the software development lifecycle.\nThinking of software development as a design process helps organisations focus on today\u0026rsquo;s need for agile development. Here are some thought points, when anything before production deployment is considered software product design, and the \u0026ldquo;software product\u0026rdquo; is produced by the production environment (factory):\n Dev + OPS - A product design which doesn\u0026rsquo;t fit the factory\u0026rsquo;s process cannot be produced by the factory; in other words, a factory which does not have the resources needed to produce a given design cannot produce the product. The product designers and factory operators need to be in one closely collaborating team to improve the design and do production. Continuous Delivery - There is no point in thinking about the factory only after the product design is done; we may not be able to build the factory optimally. Think of the production factory while designing the product. Have continuous feedback in place through continuous integration and delivery. 
Value-driven and outcome-oriented teams - A factory produces the product on a shop floor/production line with checks, balances and integration at each step. This applies to software development as well: value-driven projects and outcome-oriented teams are more helpful in making a product successful than plan-driven projects and activity-oriented teams. The book covers it very well. Shorter cycle time - A software product design has no value until it is produced and reaches the people. The sooner we produce, the sooner we get the value - so the cycle time for product design must be as short as possible. Luckily, in software development we have tools to simulate the factory environment/test labs, and we can try out the models well before the actual production. Measure the product quality more than the design quality - It is more important to measure product quality than product design quality (the software before the production environment). So prefer metrics which really measure product quality (the software running in the production environment), such as velocity in terms of \u0026ldquo;Value\u0026rdquo;. Burn-up/burn-down, bug counts, defect density and code review quality are all product design metrics; it may not matter how good these numbers are if the software is not producing value in the production environment. Conclusion The book covers agile organization design with great breadth and depth on structural, cultural, operational, political and physical aspects. 
We can relate to many of these aspects when thinking of software development as a design process.\nThis article was originally published on My LinkedIn Profile\n","id":37,"tag":"software ; book-review ; process ; design ; organization ; takeaways ; agile ; technology","title":"Software Development as a Design Process","type":"post","url":"https://thinkuldeep.com/post/software-development-as-design-process/"},{"content":"Most web applications are hosted behind a load balancer or web server such as Nginx/HTTPD, which intercepts all the requests and directs dynamic content requests to the application server, such as Tomcat. Correlating requests as they traverse from the front-end server to the backend servers is a common requirement. In this post, we will discuss tracing requests in the simplest way in an Nginx and Spring Boot-based application, without using an external heavyweight library like Sleuth.\nAssign an Identifier to Each Request Coming Into Nginx Nginx keeps a request identifier for each HTTP request in the variable $request_id, which is a 32-character hexadecimal string. $request_id can be passed further downstream via request headers. 
The following configuration passes the $request_id as the X-Request-ID HTTP request header to the application server.\nserver { listen 80; location / { proxy_pass http://apiserver; proxy_set_header X-Request-ID $request_id; } } Log the Request Identifier in Front-end Access Logs Include the $request_id in the log format of the Nginx configuration file as follows.\nlog_format req_id_log \u0026#39;$remote_addr - $remote_user [$time_local] $request_id \u0026#34;$request\u0026#34; \u0026#39; \u0026#39;$status $body_bytes_sent \u0026#34;$http_referer\u0026#34; \u0026#34;$http_user_agent\u0026#34; \u0026#39; \u0026#39;\u0026#34;$http_x_forwarded_for\u0026#34;\u0026#39;; access_log /dev/stdout req_id_log; It will print the access logs in the following format:\n172.13.0.1 - - [28/Sep/2018:05:28:26 +0000] 7f75b81cec7ae4a1d70411fefe5a6ace \u0026#34;GET /v0/status HTTP/1.1\u0026#34; 200 184 \u0026#34;http://localhost:80/\u0026#34; \u0026#34;Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36\u0026#34; \u0026#34;-\u0026#34; Intercept the HTTP Request on the Application Server Every HTTP request coming into the application server will now have the header X-Request-ID, which can be intercepted either in an interceptor or a servlet filter. From there, it can be included in every log line we print, as follows.\nDefine a Request Filter Using MDC Define the following request filter, which reads the request header and puts it in the MDC (read about MDC here). It basically keeps the values in thread-local storage.\n... 
@Component public class RegLogger implements Filter { @Override public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain) throws IOException, ServletException { try { MDC.put(\u0026#34;X-Request-ID\u0026#34;, ((HttpServletRequest)servletRequest).getHeader(\u0026#34;X-Request-ID\u0026#34;)); filterChain.doFilter(servletRequest, servletResponse); } finally { MDC.clear(); } } @Override public void init(FilterConfig filterConfig) throws ServletException {} @Override public void destroy() {} } Configure the Logger to Print the Request id I have used logback, which can read MDC variables in the %X{variable_name} pattern. Update the logback pattern as follows:\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;appender name=\u0026#34;STDOUT\u0026#34; class=\u0026#34;ch.qos.logback.core.ConsoleAppender\u0026#34;\u0026gt; \u0026lt;encoder class=\u0026#34;ch.qos.logback.classic.encoder.PatternLayoutEncoder\u0026#34;\u0026gt; \u0026lt;Pattern\u0026gt;%d{HH:mm:ss.SSS} [%thread] %X{X-Request-ID} %-5level %logger{36} - %msg%n\u0026lt;/Pattern\u0026gt; \u0026lt;/encoder\u0026gt; \u0026lt;/appender\u0026gt; \u0026lt;root level=\u0026#34;info\u0026#34;\u0026gt; \u0026lt;appender-ref ref=\u0026#34;STDOUT\u0026#34; /\u0026gt; \u0026lt;/root\u0026gt; \u0026lt;/configuration\u0026gt; It will print the following logs:\n17:28:26.011 [http-nio-8090-exec-7] 7f75b81cec7ae4a1d70411fefe5a6ace INFO c.d.e.controllers.StatusController - received request This way, you can see the request id in all the logs from the origin to the end. 
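If the API server itself calls further services, the same header can be forwarded on the outgoing request so that downstream logs carry the identifier too. A minimal sketch using the JDK's built-in java.net.http client; the helper class name is hypothetical, and the id would come from the MDC populated by the filter above:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class RequestIdForwarder {
    // Build an outgoing request carrying the current request id so the
    // downstream service logs the same identifier (class name is illustrative).
    public static HttpRequest withRequestId(String url, String requestId) {
        return HttpRequest.newBuilder(URI.create(url))
                .header("X-Request-ID", requestId)
                .GET()
                .build();
    }
}
```

The request can then be sent with `HttpClient.newHttpClient().send(...)` as usual; only the header propagation is the point here.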
We can configure ELK to aggregate the logs and trace them in a nice user interface to help with troubleshooting.\nGet the Generated Request id Back in the Response Nginx can also add the request id to the response header with the following configuration:\nserver { listen 80; add_header X-Request-ID $request_id; # add to response header .... Conclusion We have explained a simple way to assign a unique identifier to each request and then trace the request end-to-end. You can get more details here.\nThis article was originally published on Dzone\n","id":38,"tag":"request-tracing ; troubleshooting ; java ; web-server ; micro-services ; spring boot ; technology","title":"Request Tracing Using Nginx and Spring Boot","type":"post","url":"https://thinkuldeep.com/post/request-tracing/"},{"content":"The agile software development methodology is a key selling point for most IT organisations; however, many organisations have gone by the literal meaning of the word \u0026ldquo;Agile\u0026rdquo;. They follow agile practices without really being agile. If we go by the Manifesto for Agile Software Development, it focuses more on the human factor (people, interaction, collaboration), which produces working software as per the need of the time, than on processes, tools and strict planning. As described in the book Agile IT Organization Design, organisation culture plays a key role in becoming a truly agile organisation. Agile culture is more about collaboration and cultivation than a controlling and competence culture. This post describes how test-first pair development can boost organisation agility and culture. This article also captures my takeaways from the book.\nThe Test First Pair Development: Test-first development is when a pair of experts thinks about how a function will be used before developing the function. 
The book \u0026lsquo;Extreme programming explained: embrace change\u0026rsquo; [3] talks a lot about TDD (Test Driven Development), pair programming, tools, principles and practices. The term test-first pair development is generalised from TDD and Pair Programming; however, it is not restricted to developers but is also applicable to other software development roles (BA, QA, TL, TA, etc.) and across roles. The following are the key concepts:\nTest First Development First-Fail-Test : Think of a very basic test for the business functionality that you want to implement. Implement/define the test first. Just-Pass-Test : Write only the code/artifact necessary to pass the test. Don\u0026rsquo;t write unnecessary code to satisfy your assumptions and intuition. Next-Fail-Test : Keep writing tests until the next failing test, and then perform step 2. Keep doing this until you reach a just-sufficient outcome. Refactor/beautify the code/artifact and make sure all tests are passing. Do this in between steps 2 and 3. Get the outcome reviewed, collect feedback and follow the above steps again to incorporate the feedback. Pair Development: Pair sign up - Two persons sign up for one outcome. Pair rotation - Pairs should be rotated at intervals. Collaboration - The pair must work together: one person reviews while the other does the task, and they exchange positions when needed. The development environment and seating should be such that both can work together. Pair with diversity - Make pairs diverse in terms of role, experience and skills, so that everyone learns from and respects each other, e.g. pair with QA to write automated acceptance tests, or pair with an infra expert to fix DevOps issues. Combining Together: The pair follows the test-first development practice, for example:\nOne person thinks of and writes a test, and the other person implements the functionality needed to just pass the test. The other person then thinks of and writes the next test, and the first one implements the functionality to make it pass. 
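This ping-pong cycle can be sketched with a tiny, invented example (the Discount rule below is purely illustrative, not from the book). Each assertion was written first as a failing test; the implementation then grew just enough to pass it:

```java
// Grown test by test: no discount up to 100, 10% off above (integer amounts).
class Discount {
    static int apply(int amount) {
        return amount > 100 ? amount - amount / 10 : amount;
    }
}

public class TestFirstDemo {
    public static void main(String[] args) {
        // First-Fail-Test (person A): no discount below the threshold.
        assert Discount.apply(50) == 50;
        // Next-Fail-Test (person A again, after person B made the first pass):
        assert Discount.apply(200) == 180;
        System.out.println("all tests pass");
    }
}
```

Run with `java -ea TestFirstDemo` so the assertions are enabled; in a real project these would be JUnit tests, but the rhythm is the same.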
Pair with QA/Tester/PO during development to review the outcome before releasing it to QA.\nBenefits of Test First Pair Development: The following are five major boosts an organisation gets from using this approach.\n1. Fail Early We get feedback early in the development life cycle. The artifact we produce gets reviewed at the very first stage of development in two ways: first, it is tested by the test artifacts, and second, it is reviewed by a peer as and when it is written. So it produces better quality software. As described in the test pyramid [4], we build an organisation practice that strengthens the base of the testing pyramid.\nIt is not just product feedback; we also get feedback on people\u0026rsquo;s alignment towards agile.\n2. People and Interaction Test-first pair development needs great interaction between the pair and gives equal opportunity to each one. Pairs get rotated after some time, which helps in building team bonding. People start believing in the word \u0026ldquo;Team\u0026rdquo; and understand that if one expects support from others, then he/she needs to support others. In such a team, people build trust and respect for each other. It promotes a collaboration culture in the organisation, which is the key for any truly agile organisation.\n3. Outcome Oriented Teams Test-first pair development works well with outcome-oriented teams, where people can cross-pair (pair across roles) to achieve the desired outcome efficiently. When people cross-pair, they understand each other\u0026rsquo;s pain and work better for each other\u0026rsquo;s success, eventually leading to better business outcomes. The test-first methodology promotes building software that fulfils the immediate business need and avoids building unnecessary source code.\n4. Change with Confidence and Satisfaction In the traditional way, delivery is always feared when changing any existing code; we never know what will break, and sometimes we get the feedback from the end user. 
But here, people are more confident in making changes in existing code which is well covered by tests. We get immediate feedback from the failing tests; if all existing tests are working, then we are good to go, along with passing tests for the changed functionality. Another pair of eyes is always there to watch you. With this approach, people are more satisfied with what they are doing, as they interact more, get more support from the team, and are more focused.\n5. Overall cost effectiveness People feel that pair development is double the effort of traditional development, as two persons are assigned to a task which could be done by one. This is not true in most cases, for the following reasons:\n Quick on-boarding - Paired development doesn\u0026rsquo;t need a huge on-boarding cycle; a new joiner can start contributing from day one and will learn on the job while working with the pair. It is the pair\u0026rsquo;s responsibility to make the newcomer comfortable; it is to their own advantage. No lengthy review cycle - We don\u0026rsquo;t need a special review cycle for the code. Almost 10% of implementation effort is typically assumed for reviews, which is saved. Less defect rework - Since we get feedback early, we can fix issues as and when they are found. Rework is proportional to defect age (defect detected date - defect induced date). Around 20% of overall effort is typically assumed for defect-fix rework; we may save a good amount here. Less testing effort - The test-first approach promotes more automated tests and tries to maintain the test pyramid, thus requiring less effort for testing. Effectiveness - There is less wastage at various levels; the team is focused on delivering the right business requirement at the right time in iterations, and inefficiencies can be removed early based on feedback at various levels. Conclusion We have discussed the test-first pair development methodology, which is about marrying TDD and Pair Programming. 
We described the key concepts of test-first development and pair programming, and then the key benefits to the organisation.\nReferences :\n[1] - http://agilemanifesto.org/\n[2] - https://info.thoughtworks.com/PS-15-Q1-Agile-IT-Organization-Design-Sriram-Narayan_Staging.html\n[3] http://ptgmedia.pearsoncmg.com/images/9780321278654/samplepages/9780321278654.pdf\n[4] https://martinfowler.com/articles/practical-test-pyramid.html\nThis article was originally published on My LinkedIn Profile\n","id":39,"tag":"tdd ; takeaways ; book-review ; xp ; practice ; agile ; pair-programming ; thoughtworks","title":"Boost your organisation agility by test-first-pair development","type":"post","url":"https://thinkuldeep.com/post/test-first-pair-development/"},{"content":"Java 11\u0026rsquo;s release candidate is already here, and the industry is still roaming around Java 8. Every six months, we will see a new release. It is good that Java is evolving at a fast pace to catch up with the challengers, but at the same time, it is also scary to keep up with its speed; even the Java ecosystem (build tools, IDEs, etc.) is not catching up that fast. It feels like we are losing track. If I can\u0026rsquo;t catch up with my favorite language, then I will probably choose another one, as adapting to a new one would take similar effort. Below, we will discuss some of the useful features from Java 8, 9 and 10 that you need to know before jumping into Java 11.\nBefore Java 8? Too Late! Anyone before Java 8? Unfortunately, you will need to consider yourself out of the scope of this discussion — you are too late. If you want to learn what\u0026rsquo;s new after Java 7, then Java is just like any new language for you!\nJava 8: A Tradition Shift Java 8 was released four years ago. Everything that was new in Java 8 has become quite old now. The good thing is that it will still be supported for some time in parallel to the future versions. 
However, Oracle is already planning to make its support a paid one, as it is the most used and preferred version to date. Java 8 was a tradition shift, which made Java useful for today\u0026rsquo;s and future applications. If you need to talk to a developer today, you can\u0026rsquo;t just keep talking about OOP concepts — this is the age of JavaScript, Scala, and Kotlin, and you must know the language of expressions, streams, and functional interfaces. Java 8 came with these functional features, which kept Java in the mainstream. These functional features keep Java valuable amongst its functional rivals, Scala and JavaScript.\nA Quick Recap Lambda Expression: (parameters) -\u0026gt; {body} Lambda expressions opened the gates for functional programming lovers to keep using Java. A lambda expression expects zero or more parameters, which can be accessed in the expression body, and returns the evaluated result.\nRead my earlier article on Introduction to Lambda Expressions\nComparator\u0026lt;Integer\u0026gt; comparator = (a, b) -\u0026gt; a-b; System.out.println(comparator.compare(3, 4)); // -1 Functional Interface: an Interface With Only One Method A lambda expression is itself treated as an instance of a functional interface and can be assigned to one, as shown above. Java 8 has also provided new functional constructs, shown below:\nBiFunction\u0026lt;Integer, Integer, Integer\u0026gt; comparator = (a, b) -\u0026gt; a-b; System.out.println(comparator.apply(3, 4)); // -1 Refer to the package java.util.function for more functional constructs: Function, Supplier, Consumer, Predicate, etc. One can also mark a functional interface using @FunctionalInterface.\nInterfaces may also have one or more default method implementations and still remain functional interfaces. 
It helps avoid unnecessary abstract base classes for default implementations.\nStatic and instance methods can be referenced with the :: operator, constructors with ::new, and they can be passed as functional parameters, e.g. System.out::println.\nStreams: Much More Than Iterations A stream is a sequence of objects and operations. A lot of default methods have been added to the interfaces to support the forEach, filter, map, and reduce constructs of streams. Java libraries that provide collections now support streams, e.g. BufferedReader.lines(). All the collections can be easily converted to streams. Parallel stream operations are also supported, which distribute the operations across multiple CPUs internally.\nRead my earlier article on Introduction to Stream APIs\nIntermediate Operations: the Lazy Operations Intermediate operations are performed lazily; nothing happens until a terminating operation is called.\n map (mapping): Each element is converted one-to-one into another form. filter (predicate): filter elements for which the given predicate is true. peek(), limit(), and sorted() are the other intermediate operations. Terminating Operations: the Resulting Operations forEach (consumer): iterate over each element and consume it reduce (initialValue, accumulator): It starts with initialValue, iterates over each element, and keeps updating a value that is eventually returned. collect (collector): this is a lazily evaluated result that needs to be collected using collectors, such as java.util.stream.Collectors, including toList(), joining(), summarizingX(), averagingX(), groupingBy(), and partitioningBy(). Optional: Get Rid of Null Programming Null-based programming is considered bad, but there was hardly any way to avoid it earlier. Instead of testing for null, we can now test isPresent() on the Optional object. 
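The lazy intermediate operations, terminating operations, and Optional recapped above combine as in this generic illustration (the data and method names are invented for the example):

```java
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;

public class StreamRecap {
    // Lazy intermediate ops (filter, map) followed by a terminating collect.
    static List<String> upperNamesStartingWithA(List<String> names) {
        return names.stream()
                .filter(n -> n.startsWith("a"))  // predicate
                .map(String::toUpperCase)        // one-to-one mapping
                .collect(Collectors.toList());
    }

    // findFirst returns an Optional instead of a possibly-null reference.
    static Optional<String> firstLongName(List<String> names) {
        return names.stream().filter(n -> n.length() > 3).findFirst();
    }

    public static void main(String[] args) {
        List<String> names = List.of("anna", "bob", "alice");
        System.out.println(upperNamesStartingWithA(names)); // [ANNA, ALICE]
        System.out.println(firstLongName(names).orElse("none")); // anna
    }
}
```

Note that nothing runs in the filter/map chain until collect or findFirst is invoked; findFirst can even short-circuit without touching the rest of the list.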
Read about it — there are multiple constructs, and many stream operations also return Optional.\nJVM Changes: PermGen Retired PermGen has been removed completely and replaced by Metaspace. Metaspace is no longer part of the heap memory, but of the native memory allocated to the process. JVM tuning now has different aspects, as monitoring is required not just for the heap, but also for the native memory.\nSome combinations of GCs have been deprecated. The GC is selected automatically based on the environment configuration.\nThere were other changes in NIO, DateTime, Security, compact JDK profiles, and tools like jDeps, jjs, the JavaScript Engine, etc.\nHashMap Improvements Java 8 has improved HashMap collision handling; have a look at the HashMap performance benchmarking\nJava 9: Continue the Tradition Java 9 has been around us for more than a year now. Its key feature, the module system, is still not widely adopted. In my opinion, it will take more time to really adopt such features in the mainstream. It challenges developers in the way they design classes. They now need to think more in terms of application modules than just groups of classes. Anyway, it is a challenge similar to the one a traditional developer faces with microservice-based development. Java 9 continued adding functional programming features to keep Java alive and also improved JVM internals.\nJava Platform Module System: Small Is Big The best-known feature of Java 9 is the Java Platform Module System (JPMS). It is a great step towards real encapsulation, breaking a bigger module into small, clear modules consisting of closely related code and data. It is similar to an OSGi bundle, where each bundle defines the dependencies it consumes and exposes the things on which other modules depend.\nIt introduces an assemble phase between compile and runtime that can build a custom runtime image of the JDK and JRE. 
Now, the JDK itself consists of modules.\n~ java --list-modules java.activation@9.0.2 java.base@9.0.2 java.compiler@9.0.2 java.corba@9.0.2 ... These modules are called system modules. A jar loaded without module information is loaded in an unnamed module. We can define our own application module by providing the following information in the file module-info.java:\n requires — dependencies on other modules exports — export public APIs/interfaces of the packages in the module opens — open a package for reflection access uses — declare a service that the module consumes (looked up via ServiceLoader). To learn more, here is a quick start guide.\nHere are the quick steps in the IntelliJ IDE:\n Create Module in IntelliJ: Go to File \u0026gt; New \u0026gt; Module - \u0026ldquo;first.module\u0026rdquo;\n Create a Java class in /first.module/src\npackage com.test.modules.print; public class Printer { public static void print(String input){ System.out.println(input); } } Add module-info.java : /first.module/src \u0026gt; New \u0026gt; package module first.module { exports com.test.modules.print; // exports public apis of the package. } Similarly, you need to create another module main.module and Main.java:\nmodule main.module { requires first.module; } package com.test.modules.main; import com.test.modules.print.Printer; public class Main { public static void main(String[] args) { Printer.print(\u0026#34;Hello World\u0026#34;); } } IntelliJ automatically compiles it and keeps a record of dependencies and --module-source-path\n To run the Main.java, it needs --module-path or -m:\njava -p /Workspaces/RnD/out/production/main.module:/Workspaces/RnD/out/production/first.module -m main.module/com.test.modules.main.Main Hello World Process finished with exit code 0 This way, we can define the modules. Java 9 comes with many additional features. 
Some of the important ones are listed below.\nCatching up With the Rivals Reactive Programming — Java 9 has introduced reactive streams, which support reactive, asynchronous communication between publishers and consumers. It added the standard interfaces in the Flow class.\n JShell – the Java Shell - Just like any other scripting language, Java can now be used as a scripting language.\n Stream and Collections enhancement: Java 9 added a few APIs related to \u0026ldquo;ordered\u0026rdquo; and \u0026ldquo;optional\u0026rdquo; stream operations. The of() operation was added to ease creating collections, just like in JavaScript.\n Self-Tuning JVM G1 was made the default GC, and there have been improvements in the self-tuning features of GC. CMS has been deprecated.\nAccess to Stack The StackWalker class was added for lazy access to stack frames; we can traverse and filter them.\nMulti-Release JAR Files: MRJAR One JAR may contain classes compatible with multiple Java versions. To be honest, I am not sure how useful this feature might be.\nJava 10: Getting Closer to the Functional Languages Java 10 comes with JavaScript\u0026rsquo;s old favorite, var. You can not only declare variables without explicit types but also construct collections type-free. The following are valid Java:\nvar test = \u0026#34;9\u0026#34;; var set = Set.of(5, \u0026#34;X\u0026#34;, 6.5, new Object()); The code is getting less verbose, and the magic of scripting languages is getting added to Java. It will definitely bring the negatives of these features to Java too, but it has given a lot of power to the developer.\nMore Powerful JVM Parallelism was introduced for full GC in G1 to improve overall performance.\nThe heap can be allocated on an alternative memory device attached to the system. This helps prioritize Java processes on the system. 
Low-priority processes may use slower memory as compared to the important ones.\nJava 10 also improved thread handling with thread-local handshakes. Ahead-Of-Time compilation (experimental) was also added. Bytecode generation enhancement for loops was another interesting feature of Java 10.\nEnhanced Language Java 10 improved the Optional and unmodifiable collections APIs.\nConclusion We have seen the journey from Java 8 to Java 10 and the influence of other functional and scripting languages on Java. Java is a strong object-oriented programming language, and at the same time, it now supports a lot of functional constructs. Java will not only bring top features from other languages, but it will also keep improving its internals. It is evolving at great speed, so stay tuned — before it phases you out! Java 11 and 12 are on the way!\nThis article was originally published on DZone\n","id":40,"tag":"upgrade ; java ; java 11 ; java 8 ; java 9 ; java 10 ; technology","title":"A Quick Catch up Before Java 11","type":"post","url":"https://thinkuldeep.com/post/before_java11/"},{"content":"Change is inevitable and constant; it is a part of our life. Sooner or later, we have to accept change. 13 years ago, I joined a small IT organisation of around 100 people as a developer. I have grown with the organisation through a 40-times growth journey. I really thank the organisation for giving me the opportunity and making me what I am today. Now, the time has come to accept a change and start a new journey. I am writing this post to share my point of view on when to accept the change.\nWhen to change the organisation? The intention behind this post is not to answer this question, but to address what you get and what you lose by staying long in the same organisation. I have also taken inputs from many senior folks who joined us or left us after a long stay. You may have a different opinion. Please ignore this post in case it hurts your sentiments in any way. 
When to change is really an individual’s choice. If you are aware of the pros and cons of staying long, then you can probably work on the cons and stay relevant even longer, or not change at all.\nWhat do you get? When you join a small software organisation and stay long, say more than 10 years, and grow with the organisation\u0026rsquo;s pace, you get the following, which keeps you attached to the same organisation\n You are tried and tested - You become a tried and tested resource of the organisation; many stakeholders may like to take you in their group. You don’t need to prove yourself every time to get a nice opportunity in the organisation. You know the beats - Every organisation grows in its own style and nurtures its own culture. You know the beats of the organisation, you know why a certain thing is done in a certain way, you know the history. At times you become a consultant for many things in the organisation. People believe in your inputs; you become an encyclopaedia of org information. Since you have participated in almost everything, you see yourself in every part of the organisation, be it hiring, training, process or the system. Breadth and Depth - You may get a chance to work in different departments, set up groups, and lead multiple initiatives. This gives you breadth and depth, and rich experience to lead the organisation. Attractive role and appreciations - You may be working in a really attractive role with even lesser experience in terms of years. You may be recognised and appreciated well. Emotional connect - After these many years, your relationship with the organisation is no longer just professional; it becomes an emotional one. The organisation becomes a family, and you will have lots of friends. Comfort zone - You are in your comfort zone, and can work very confidently in the same zone. Stability - Industry may see you as a stable resource, who doesn’t change too often. What do you lose? 
Staying long will accumulate the following, and you may keep building on top of these, unknowingly. You may work to overcome these and stay prepared to go even longer in the journey.\n Lack of diversity - Since you have worked in the same organisation for such a long period, you start thinking that certain things can happen only in a certain way. These certain things could be dealing with processes, people, resources and even technology solutions. You know well which things worked for you and which did not in that organisation. Thinking beyond your area of easy imagination does not come naturally. It may start blocking your path and at times may become frustrating. Resistant to change - After staying so long, you become resistant to change in the processes, people and the system. It becomes hard to accept a change if it is not according to you. Out of comfort zone - It becomes difficult to work beyond your comfort zone. You may start running away from situations which take you out of the comfort zone. Risk-taking capability may also reduce, as you have worked in a really safe and secure zone for long. You may not have learned to deal with failures, and have not been exposed well for long. We learn well when we get exposed from time to time. You are taken for granted - You may be taken for granted by your peers of similar experience and by management; they know all your pluses and minuses, as you have grown in front of them. They may use you the way they want and wherever they want. It will be hard to change the perception of people around you. You may also be a target for newcomers, as they don’t really know the history and achievements that you might have. Energy and satisfaction - After so many years, you may not have the energy to fight against the changes, which may kill the culture you are addicted to. 
Remember that change is necessary, to scale the organisation, to sustain and to go beyond the barriers, but you might not be aligned with the changes. You may not be satisfied with the situations and with what you get. In such a situation, you start creating a negative environment, which is not good for either you or your organisation. Confidence in industry - As the years progress, jobs become limited at your level. The market outside may not be ready to accept you the way you are; you need to adapt to the needs outside. You come to know your value and level only when you try outside. If you keep trying outside from time to time, you gain a lot of confidence, and that can help in your current job as well. Conclusion I have shared the pros and cons of a long stay in the same IT organisation; it may help you identify the losses you are accumulating year by year. You may plan to overcome the losses and stay useful to the same organisation you are in, or take the next step.\nIt was originally published at My LinkedIn Profile\n","id":41,"tag":"general ; change ; nagarro ; thoughtworks ; motivational","title":"Accept the change!","type":"post","url":"https://thinkuldeep.com/post/accept_the_change/"},{"content":"Microservices-based development is happening all around the industry; more than 70% are trying microservices-based software development. Microservices simplify integration of businesses, processes, technology, and people by breaking down the big-bang monolith problem into smaller sets that can be handled independently. However, this also comes with the problem of managing relations between these smaller sets. In the monolithic world, we used to manage fewer independent units, so there was less operation and planning effort. 
We need different processes, tools, training, methodologies, and teams to ease microservices development.\nOur Microservices-Based Project We have been developing a highly complex project on microservice architecture, where we import gigabytes of observation data every day and build statistical models to predict future demand. End users may interact to influence the statistical model and prediction methods. Users may analyze the demand by simulating the impact. There are around 50+ bounded contexts with 100+ independent deployment units communicating over REST and messaging. 200+ process instances are needed to run the whole system. We started this project from scratch with almost no practical experience with microservices, and we faced lots of issues in project planning, training, testing, quality management, deployment, and operations.\nLearnings I am sharing the top five lessons from my experience that helped us overcome those problems.\n1. Align Development Methodology and Project Planning Agile development methodology is considered best for microservice development, but only if aligned well. Monolithic development has one deliverable and one process pipeline, but here, we have multiple deliverables, so unless we align the process pipeline for each deliverable, we won\u0026rsquo;t be able to achieve the desired effectiveness of microservice development.\nWe also faced project planning issues, as we could not plan the product and user stories well enough to produce independent deliverables to which we could apply the process pipeline. For seven sprints, we could not demonstrate business value to the end user, as our product workflow was ready only after that. We used to have very big user stories, which sometimes went beyond multiple sprints and impacted a number of microservices.\nConsider the following aspects of project planning:\n Run parallel sprint pipelines for Requirement Definition, Architecture, Development, DevOps, and Infrastructure. 
Have a Scrum of Scrums for common concerns and integration points. Keep a few initial sprints for Architecture and DevOps, and start the Development sprints only after the first stable version of the architecture and DevOps is set up. Architectural PoCs and decision tasks should be planned a couple of sprints before the actual development sprint. Define metrics for each sprint to measure project quality quantitatively. Clearly call out architectural changes in the backlog and prioritize them well. Consider their adaptation efforts based on the current size of the project and the impact on microservices. Have an infrastructure resource (expert, software, hardware, or tool) plan. Configuration management. Include agile training in the project induction. Include multi-sprint artifact dependencies in the Definition of Ready (DoR) and Definition of Done (DoD). Train the product owner and project planner to plan the scrums for requirement definition, architecture, etc., such that they fulfill the DoR. Have smaller user stories, making sure the stories selected in a sprint are really of a unit size that will impact very few deployment units. If a new microservice is getting added in a particular sprint, then consider the effort for CI/CD, Infrastructure, and DevOps. 2. Define an Infrastructure Management Strategy In the monolithic world, infrastructure management is not that critical at the start of a project, so infra-related tasks may get delayed until stable deliveries start coming; but in microservices development, the deployment units are small, thus they start coming early, and the number of deployment units is also high, so a strong infrastructure management strategy is needed.\nWe delayed defining the infrastructure management strategy and faced a lot of issues getting to know the appropriate capacity of the infrastructure and getting it on time. 
We had not tracked the deployment/usage of infra components well, which caused delays in adapting the infra, and we ended up having less knowledge of the infrastructure. We had to put a lot of effort into streamlining the infra components in the middle of the project, and that had a lot of side effects on the functional scope getting implemented.\nInfrastructure here includes cross-cutting components, supporting tools, and hardware/software needed for running the system. Things like service registry, discovery, API management, configurations, tracing, log management, monitoring, and service health checks may need separate tools. Consider at least the following in infrastructure management:\n Capacity planning – Do capacity planning from the start of the project, and then review/adjust it periodically. Get the required infrastructure (software/hardware/tools) ahead of time and test it well before the team adopts it. Define a hardware/software/service onboarding plan which covers details of the tools in different physical environments, like development testing, QA testing, performance testing, staging, UAT, Prod, etc. Consider multiple extended development testing/integration environments, as multiple developers need to test their artifacts, and their development machines may not be capable of holding the required services. Onboard an infrastructure management expert to accelerate the project setup. Define a deployment strategy and plan its implementation in the early stages of the project. Don’t go for an intermediate deployment methodology. If you want to go for Docker and Kubernetes-based deployment, then do it from the start of the project — don’t wait and delay its implementation. Define access management and resource provisioning policies. Have automated, proactive monitoring of your infrastructure. Track infrastructure development in parallel to the project scope. 3. 
Define Microservices-Based Architecture and Its Evolutions Microservices can be developed and deployed independently, but in the end, it is hard to maintain standards and practices across the services throughout development. Defining a base architecture that all the microservices must follow, and then letting that architecture evolve, may help here.\nWe had defined a very basic architecture with a core platform covering logging, boot, and a few common aspects. However, we left a lot of things, such as messaging, database, caching, folder structures, compression/decompression, etc., to come in later evolutions, and it resulted in the platform being changed heavily in parallel to the functional scope in the microservices. We had not given enough time to the core platform before jumping to the functional scope sprints.\nConsider the following in the base architecture, and implement it well before the functional scope implementation. Don’t rely too much on the statement “Learn from the system and then improvise.” Define the architecture in advance, stay ahead of the situation, and gain knowledge as soon as possible.\n Define a core platform covering cross-cutting concerns and abstractions. The core platform may cover logging, tracing, boot, compression/decompression, encryption/decryption, common aspects, interceptors, request filters, configurations, exceptions, etc. Abstractions of messaging, caching, and database may also be included in the platform.\n Microservice structure – Define a folder and code structure with naming conventions. Don’t delay it. 
Late introduction will cost a lot.\n Build a mechanism for CI/CD – Define a CD strategy, even for the local QA environment, to avoid facing issues directly in the UAT/pre-UAT environment.\n Define an architecture change strategy – how architecture changes will be delivered and how they will be adapted.\n A versioning strategy for Source Code, APIs, Builds, Configurations, and Documents.\n Keep validating the design against NFRs.\n Define a Test Architecture to cover the testing strategy.\n Document the module architecture with clearly defined bounded contexts and data isolation.\n 4. Team Management The microservice world needs a different mindset than the monolithic one. Each microservice may be considered independent, so developers of different microservices are independent. This brings a different kind of challenge: we want our developers to maintain code consistency across units, follow the same coding standards, and build on top of the core platform, and at the same time, we want them not to trust other microservices\u0026rsquo; code, as if it were developed by another company’s developers.\nConsider the following in your team management:\n Assign the responsibility of “Configuration Management” to a few team members who are responsible for maintaining the configuration and dependency information. They are more of an information aggregator, but can be considered a source of truth when it comes to configuration. Define a “Contract Management” team consisting of developers/architects who are responsible for defining the interactions between microservices. Assign module owners and teams based on bounded context. They are responsible for everything related to their assigned module, and for informing the “Configuration Management” team of public concerns. Team seating may be arranged module-wise; developers should talk to each other only via contracts, otherwise they are completely separate teams. 
If any change is needed in a contract, then it should come via “Contract Management.” Define the DevOps team from the development team. One may rotate people so everybody gets knowledge of Ops. Encourage multi-skilling in the team. Build a self-motivated team. Run continuous training programs. 5. Keep Sharing the Knowledge Microservices are evolving day by day, and a lot of new tools and concepts are being introduced. Teams need to be up to date; due to the microservice architecture, you may change the technology stack of a microservice if needed. Since teams are independent, we need to keep sharing learnings and knowledge across teams.\nWe faced issues where the same or similar issues were being replicated by different teams, and they tried to fix them in different ways. Teams faced issues in understanding bounded contexts, data isolation, etc.\nConsider the following:\n Educate teams on domain-driven design, bounded context, data isolation, integration patterns, event design, continuous deployment, etc.\n Create a learning database where each team may submit entries in the sprint retrospective.\n Train teams to follow unit testing, mocking, and integration testing. Most of the time, the definition of a “unit” is misunderstood by developers. “Integration testing” is given the lowest priority. It must be followed; if taken correctly, it should be the simplest thing to adhere to.\n Share knowledge of performance engineering — for example:\n Don’t over-loop Use cache efficiently Use RabbitMQ messaging as a flow, not as data storage Use concurrent consumers and publishers Use database partitioning and clustering Do not repeat Conclusion Microservices are being adopted at a good pace and things are getting more mature with time. I have shared a few of the hard lessons that we experienced in our microservices-based project. 
I hope this will be beneficial for your project and help you avoid these mistakes.\nIt was originally published at DZone and also shared on LinkedIn\n","id":42,"tag":"micro-services ; learnings ; experience ; java ; architecture ; spring boot ; technology","title":"5 Hard Lessons From Microservices Development","type":"post","url":"https://thinkuldeep.com/post/lessons_from_microservices_development/"},{"content":"This article is for the Java developer who wants to learn Apache Spark but doesn\u0026rsquo;t know much about Linux, Python, Scala, R, or Hadoop. Around 50% of developers use the Microsoft Windows environment for development, and they don\u0026rsquo;t need to change their development environment to learn Spark. This is the first article of the series \u0026ldquo;Apache Spark on Windows\u0026rdquo;, which covers a step-by-step guide to start an Apache Spark application on a Windows environment, along with the challenges faced and their resolutions.\nA Spark Application A Spark application can be a spark-shell script, or it can be a custom program written in Java, Scala, Python, or R. You need the Windows executables installed on your system to run these applications. Scala statements can be entered directly on the \u0026ldquo;spark-shell\u0026rdquo; CLI; however, bundled programs need the \u0026ldquo;spark-submit\u0026rdquo; CLI. These CLIs come with the Windows executables.\nDownload and Install Spark Download Spark from https://spark.apache.org/downloads.html and choose \u0026ldquo;Pre-built for Apache Hadoop 2.7 and later\u0026rdquo; Unpack the downloaded artifact - spark-2.3.0-bin-hadoop2.7.tgz - in a directory. Clearing the Startup Hurdles You may follow Spark\u0026rsquo;s quick start guide to start your first program. 
However, it is not that straightforward, and you will face various issues, as listed below along with their resolutions.\nPlease note that you must have administrative permissions for your user, or you need to run the command tool as administrator.\nIssue 1: Failed to Locate winutils Binary Even if you don\u0026rsquo;t use Hadoop, Spark on Windows needs Hadoop to initialize the \u0026ldquo;hive\u0026rdquo; context. You get the following error if Hadoop is not installed.\nC:\\Installations\\spark-2.3.0-bin-hadoop2.7\u0026gt;.\\bin\\spark-shell 2018-05-28 12:32:44 ERROR Shell:397 - Failed to locate the winutils binary in the hadoop binary path java.io.IOException: Could not locate executable null\\bin\\winutils.exe in the Hadoop binaries. at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:379) at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:394) at org.apache.hadoop.util.Shell.\u0026lt;clinit\u0026gt;(Shell.java:387) at org.apache.hadoop.util.StringUtils.\u0026lt;clinit\u0026gt;(StringUtils.java:80) at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611) at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273) at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261) at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:791) at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:761) at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:634) at org.apache.spark.util.Utils$anonfun$getCurrentUserName$1.apply(Utils.scala:2464) at org.apache.spark.util.Utils$anonfun$getCurrentUserName$1.apply(Utils.scala:2464) at scala.Option.getOrElse(Option.scala:121) This can be fixed by adding a dummy Hadoop installation. 
Spark expects winutils.exe in the Hadoop installation \u0026ldquo;\u0026lt;Hadoop Installation Directory\u0026gt;/bin/winutils.exe\u0026rdquo; (note the \u0026ldquo;bin\u0026rdquo; folder).\n Download Hadoop 2.7\u0026rsquo;s winutils.exe and place it in a directory C:\\Installations\\Hadoop\\bin\n Now set the environment variable HADOOP_HOME = C:\\Installations\\Hadoop. Now start the spark-shell; you may get a few warnings, which you may ignore for now.\nC:\\Installations\\spark-2.3.0-bin-hadoop2.7\u0026gt;.\\bin\\spark-shell 2018-05-28 15:05:08 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable Setting default log level to \u0026#34;WARN\u0026#34;. To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). Spark context Web UI available at http://kuldeep.local:4040 Spark context available as \u0026#39;sc\u0026#39; (master = local[*], app id = local-1527500116728). Spark session available as \u0026#39;spark\u0026#39;. Welcome to ____ __ / __/__ ___ _____/ /__ _\\ \\/ _ \\/ _ `/ __/ \u0026#39;_/ /___/ .__/\\_,_/_/ /_/\\_\\ version 2.3.0 /_/ Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_151) Type in expressions to have them evaluated. Type :help for more information. scala\u0026gt; Issue 2: File Permission Issue for /tmp/hive Let\u0026rsquo;s run the first program as suggested by Spark\u0026rsquo;s quick start guide. Don\u0026rsquo;t worry about the Scala syntax for now.\nscala\u0026gt; val textFile = spark.read.textFile(\u0026#34;README.md\u0026#34;) 2018-05-28 15:12:13 WARN General:96 - Plugin (Bundle) \u0026#34;org.datanucleus.api.jdo\u0026#34; is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. 
The URL \u0026#34;file:/C:/Installations/spark-2.3.0-bin-hadoop2.7/jars/datanucleus-api-jdo-3.2.6.jar\u0026#34; is already registered, and you are trying to register an identical plugin located at URL \u0026#34;file:/C:/Installations/spark-2.3.0-bin-hadoop2.7/bin/../jars/datanucleus-api-jdo-3.2.6.jar.\u0026#34; 2018-05-28 15:12:13 WARN General:96 - Plugin (Bundle) \u0026#34;org.datanucleus.store.rdbms\u0026#34; is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL \u0026#34;file:/C:/Installations/spark-2.3.0-bin-hadoop2.7/bin/../jars/datanucleus-rdbms-3.2.9.jar\u0026#34; is already registered, and you are trying to register an identical plugin located at URL \u0026#34;file:/C:/Installations/spark-2.3.0-bin-hadoop2.7/jars/datanucleus-rdbms-3.2.9.jar.\u0026#34; 2018-05-28 15:12:13 WARN General:96 - Plugin (Bundle) \u0026#34;org.datanucleus\u0026#34; is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL \u0026#34;file:/C:/Installations/spark-2.3.0-bin-hadoop2.7/bin/../jars/datanucleus-core-3.2.10.jar\u0026#34; is already registered, and you are trying to register an identical plugin located at URL \u0026#34;file:/C:/Installations/spark-2.3.0-bin-hadoop2.7/jars/datanucleus-core-3.2.10.jar.\u0026#34; 2018-05-28 15:12:17 WARN ObjectStore:6666 - Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0 2018-05-28 15:12:18 WARN ObjectStore:568 - Failed to get database default, returning NoSuchObjectException org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. 
Current permissions are: rw-rw-rw-; at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106) at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:194) at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114) at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102) You may ignore the plugin warnings for now, but \u0026ldquo;/tmp/hive on HDFS should be writable\u0026rdquo; has to be fixed.\nThis can be fixed by changing permissions on the \u0026ldquo;/tmp/hive\u0026rdquo; directory (which is C:/tmp/hive) using winutils.exe as follows. You may run basic Linux commands on Windows using winutils.exe.\nc:\\Installations\\hadoop\\bin\u0026gt;winutils.exe ls \\tmp\\hive drw-rw-rw- 1 TT\\kuldeep\\Domain Users 0 May 28 2018 \\tmp\\hive c:\\Installations\\hadoop\\bin\u0026gt;winutils.exe chmod 777 \\tmp\\hive c:\\Installations\\hadoop\\bin\u0026gt;winutils.exe ls \\tmp\\hive drwxrwxrwx 1 TT\\kuldeep\\Domain Users 0 May 28 2018 \\tmp\\hive c:\\Installations\\hadoop\\bin\u0026gt; Issue 3: Failed to Start Database \u0026ldquo;metastore_db\u0026rdquo; If you run the same command \u0026ldquo;val textFile = spark.read.textFile(\u0026quot;README.md\u0026quot;)\u0026rdquo; again, you may get the following exception:\n2018-05-28 15:23:49 ERROR Schema:125 - Failed initialising database. Unable to open a test connection to the given database. JDBC url = jdbc:derby:;databaseName=metastore_db;create=true, username = APP. Terminating connection pool (set lazyInit to true if you expect to start your database after your app). Original Exception: ------ java.sql.SQLException: Failed to start database \u0026#39;metastore_db\u0026#39; with class loader org.apache.spark.sql.hive.client.IsolatedClientLoader$anon$1@34ac72c3, see the next exception for details. 
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source) at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown Source) This can be fixed just by removing the \u0026ldquo;metastore_db\u0026rdquo; directory from the Spark installation \u0026ldquo;C:/Installations/spark-2.3.0-bin-hadoop2.7\u0026rdquo; and running it again.\nRun Spark Application on spark-shell Run your first program as suggested by Spark\u0026rsquo;s quick start guide.\nDataSet: \u0026lsquo;org.apache.spark.sql.Dataset\u0026rsquo; is the primary abstraction of Spark. A Dataset maintains a distributed collection of items. In the example below, we will create a Dataset from a file and perform operations on it.\nSparkSession: This is the entry point to Spark programming: \u0026lsquo;org.apache.spark.sql.SparkSession\u0026rsquo;.\nStart the spark-shell.\nC:\\Installations\\spark-2.3.0-bin-hadoop2.7\u0026gt;.\\bin\\spark-shell Spark context available as \u0026#39;sc\u0026#39; (master = local[*], app id = local-1527500116728). Spark session available as \u0026#39;spark\u0026#39;. Welcome to ____ __ / __/__ ___ _____/ /__ _\\ \\/ _ \\/ _ `/ __/ \u0026#39;_/ /___/ .__/\\_,_/_/ /_/\\_\\ version 2.3.0 /_/ Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_151) Type in expressions to have them evaluated. Type :help for more information. scala\u0026gt; The spark-shell initializes a Spark context \u0026lsquo;sc\u0026rsquo; and a Spark session named \u0026lsquo;spark\u0026rsquo;. We can get a DataFrameReader from the session, which can read a text file as a Dataset, where each line is read as an item of the dataset. 
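For readers coming from plain Java, the Dataset operations shown next (count(), first(), filter()) behave much like JDK 8 streams over the lines of a file. The following is a minimal, stdlib-only analogy, not Spark code; the sample file contents are hypothetical stand-ins for README.md:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class LineCountAnalogy {
    public static void main(String[] args) throws IOException {
        // A tiny stand-in for README.md (hypothetical contents, for illustration only).
        Path file = Files.createTempFile("readme", ".md");
        Files.write(file, List.of("# Apache Spark", "Spark is fast", "and general"));

        List<String> lines = Files.readAllLines(file);

        // Analogous to textFile.count() and textFile.first() in the shell session.
        System.out.println("count = " + lines.size());            // count = 3
        System.out.println("first = " + lines.get(0));            // first = # Apache Spark

        // Analogous to textFile.filter(line => line.contains("Spark")).count()
        long linesWithSpark = lines.stream()
                .filter(l -> l.contains("Spark"))
                .count();
        System.out.println("linesWithSpark = " + linesWithSpark); // linesWithSpark = 2

        Files.delete(file);
    }
}
```

The key difference, of course, is that a Spark Dataset distributes these operations across a cluster, while JDK streams run inside a single JVM.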
The following Scala commands create a dataset named \u0026ldquo;textFile\u0026rdquo; and then run operations on the dataset, such as count(), first(), and filter().\nscala\u0026gt; val textFile = spark.read.textFile(\u0026#34;README.md\u0026#34;) textFile: org.apache.spark.sql.Dataset[String] = [value: string] scala\u0026gt; textFile.count() res0: Long = 103 scala\u0026gt; textFile.first() res1: String = # Apache Spark scala\u0026gt; val linesWithSpark = textFile.filter(line =\u0026gt; line.contains(\u0026#34;Spark\u0026#34;)) linesWithSpark: org.apache.spark.sql.Dataset[String] = [value: string] scala\u0026gt; textFile.filter(line =\u0026gt; line.contains(\u0026#34;Spark\u0026#34;)).count() res2: Long = 20 Some more operations: map(), reduce(), and collect().\nscala\u0026gt; textFile.map(line =\u0026gt; line.split(\u0026#34; \u0026#34;).size).reduce((a, b) =\u0026gt; if (a \u0026gt; b) a else b) res3: Int = 22 scala\u0026gt; val wordCounts = textFile.flatMap(line =\u0026gt; line.split(\u0026#34; \u0026#34;)).groupByKey(identity).count() wordCounts: org.apache.spark.sql.Dataset[(String, Long)] = [value: string, count(1): bigint] scala\u0026gt; wordCounts.collect() res4: Array[(String, Long)] = Array((online,1), (graphs,1), ([\u0026#34;Parallel,1), ([\u0026#34;Building,1), (thread,1), (documentation,3), (command,,2), (abbreviated,1), (overview,1), (rich,1), (set,2), (-DskipTests,1), (name,1), (page](http://spark.apache.org/documentation.html).,1), ([\u0026#34;Specifying,1), (stream,1), (run:,1), (not,1), (programs,2), (tests,2), (./dev/run-tests,1), (will,1), ([run,1), (particular,2), (option,1), (Alternatively,,1), (by,1), (must,1), (using,5), (you,4), (MLlib,1), (DataFrames,,1), (variable,1), (Note,1), (core,1), (more,1), (protocols,1), (guidance,2), (shell:,2), (can,7), (site,,1), (systems.,1), (Maven,1), ([building,1), (configure,1), (for,12), (README,1), (Interactive,2), (how,3), ([Configuration,1), (Hive,2), (system,1), (provides,1), (Hadoop-supported,1), 
(pre-built,1... scala\u0026gt; Run Spark Application on spark-submit In the last example, we ran a Spark application as a Scala script on \u0026lsquo;spark-shell\u0026rsquo;; now we will run a Spark application built in Java. Unlike spark-shell, we need to first create a SparkSession, and at the end, the SparkSession must be stopped programmatically.\nLook at SparkApp.java below; it reads a text file and then counts the number of lines.\npackage com.korg.spark.test; import org.apache.spark.sql.Dataset; import org.apache.spark.sql.SparkSession; public class SparkApp { public static void main(String[] args) { SparkSession spark = SparkSession.builder().appName(\u0026#34;SparkApp\u0026#34;).config(\u0026#34;spark.master\u0026#34;, \u0026#34;local\u0026#34;).getOrCreate(); Dataset\u0026lt;String\u0026gt; textFile = spark.read().textFile(args[0]); System.out.println(\u0026#34;Number of lines \u0026#34; + textFile.count()); spark.stop(); } } Create the above Java file in a Maven project with the following pom dependencies:\n\u0026lt;project xmlns=\u0026#34;http://maven.apache.org/POM/4.0.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\u0026#34;\u0026gt; \u0026lt;modelVersion\u0026gt;4.0.0\u0026lt;/modelVersion\u0026gt; \u0026lt;groupId\u0026gt;com.korg.spark.test\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spark-test\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;0.0.1-SNAPSHOT\u0026lt;/version\u0026gt; \u0026lt;name\u0026gt;SparkTest\u0026lt;/name\u0026gt; \u0026lt;description\u0026gt;SparkTest\u0026lt;/description\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.spark\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spark-core_2.11\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.3.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; 
\u0026lt;dependency\u0026gt; \u0026lt;!-- Spark dependency --\u0026gt; \u0026lt;groupId\u0026gt;org.apache.spark\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;spark-sql_2.11\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.3.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/project\u0026gt; Build the Maven project; it will generate the jar artifact \u0026ldquo;target/spark-test-0.0.1-SNAPSHOT.jar\u0026rdquo;\nNow submit this Spark application via spark-submit as follows (some logs excluded for clarity):\nC:\\Installations\\spark-2.3.0-bin-hadoop2.7\u0026gt;.\\bin\\spark-submit --class \u0026#34;com.korg.spark.test.SparkApp\u0026#34; E:\\Workspaces\\RnD\\spark-test\\target\\spark-test-0.0.1-SNAPSHOT.jar README.md 2018-05-29 12:59:15 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2018-05-29 12:59:15 INFO SparkContext:54 - Running Spark version 2.3.0 2018-05-29 12:59:15 INFO SparkContext:54 - Submitted application: SparkApp 2018-05-29 12:59:15 INFO SecurityManager:54 - Changing view acls to: kuldeep 2018-05-29 12:59:16 INFO Utils:54 - Successfully started service \u0026#39;sparkDriver\u0026#39; on port 51471. 2018-05-29 12:59:16 INFO SparkEnv:54 - Registering OutputCommitCoordinator 2018-05-29 12:59:16 INFO log:192 - Logging initialized @7244ms 2018-05-29 12:59:16 INFO Server:346 - jetty-9.3.z-SNAPSHOT 2018-05-29 12:59:16 INFO Server:414 - Started @7323ms 2018-05-29 12:59:17 INFO SparkUI:54 - Bound SparkUI to 0.0.0.0, and started at http://kuldeep.local:4040 2018-05-29 12:59:17 INFO SparkContext:54 - Added JAR file:/E:/Workspaces/RnD/spark-test/target/spark-test-0.0.1-SNAPSHOT.jar at spark://kuldeep.local:51471/jars/spark-test-0.0.1-SNAPSHOT.jar with timestamp 1527578957193 2018-05-29 12:59:17 INFO SharedState:54 - Warehouse path is \u0026#39;file:/C:/Installations/spark-2.3.0-bin-hadoop2.7/spark-warehouse\u0026#39;. 
2018-05-29 12:59:18 INFO StateStoreCoordinatorRef:54 - Registered StateStoreCoordinator endpoint 2018-05-29 12:59:21 INFO FileSourceScanExec:54 - Pushed Filters: 2018-05-29 12:59:21 INFO ContextCleaner:54 - Cleaned accumulator 0 2018-05-29 12:59:22 INFO CodeGenerator:54 - Code generated in 264.159592 ms 2018-05-29 12:59:22 INFO BlockManagerInfo:54 - Added broadcast_0_piece0 in memory on kuldeep425.Nagarro.local:51480 (size: 23.3 KB, free: 366.3 MB) 2018-05-29 12:59:22 INFO SparkContext:54 - Created broadcast 0 from count at SparkApp.java:11 2018-05-29 12:59:22 INFO SparkContext:54 - Starting job: count at SparkApp.java:11 2018-05-29 12:59:22 INFO DAGScheduler:54 - Registering RDD 2 (count at SparkApp.java:11) 2018-05-29 12:59:22 INFO DAGScheduler:54 - Got job 0 (count at SparkApp.java:11) with 1 output partitions 2018-05-29 12:59:22 INFO Executor:54 - Fetching spark://kuldeep.local:51471/jars/spark-test-0.0.1-SNAPSHOT.jar with timestamp 1527578957193 2018-05-29 12:59:23 INFO TransportClientFactory:267 - Successfully created connection to kuldeep425.Nagarro.local/10.0.75.1:51471 after 33 ms (0 ms spent in bootstraps) 2018-05-29 12:59:23 INFO Utils:54 - Fetching spark://kuldeep.local:51471/jars/spark-test-0.0.1-SNAPSHOT.jar to C:\\Users\\kuldeep\\AppData\\Local\\Temp\\spark-5a3ddefe-e64c-4730-8800-9442ad72bdd1\\userFiles-0b51b538-6f4d-4ddd-85c1-4595749c09ea\\fetchFileTemp210545020631632357.tmp 2018-05-29 12:59:23 INFO Executor:54 - Adding file:/C:/Users/kuldeep/AppData/Local/Temp/spark-5a3ddefe-e64c-4730-8800-9442ad72bdd1/userFiles-0b51b538-6f4d-4ddd-85c1-4595749c09ea/spark-test-0.0.1-SNAPSHOT.jar to class loader 2018-05-29 12:59:23 INFO FileScanRDD:54 - Reading File path: file:///C:/Installations/spark-2.3.0-bin-hadoop2.7/README.md, range: 0-3809, partition values: [empty row] 2018-05-29 12:59:23 INFO CodeGenerator:54 - Code generated in 10.064959 ms 2018-05-29 12:59:23 INFO Executor:54 - Finished task 0.0 in stage 0.0 (TID 0). 
1643 bytes result sent to driver 2018-05-29 12:59:23 INFO TaskSetManager:54 - Finished task 0.0 in stage 0.0 (TID 0) in 466 ms on localhost (executor driver) (1/1) 2018-05-29 12:59:23 INFO TaskSchedulerImpl:54 - Removed TaskSet 0.0, whose tasks have all completed, from pool 2018-05-29 12:59:23 INFO DAGScheduler:54 - ShuffleMapStage 0 (count at SparkApp.java:11) finished in 0.569 s 2018-05-29 12:59:23 INFO DAGScheduler:54 - ResultStage 1 (count at SparkApp.java:11) finished in 0.127 s 2018-05-29 12:59:23 INFO TaskSchedulerImpl:54 - Removed TaskSet 1.0, whose tasks have all completed, from pool 2018-05-29 12:59:23 INFO DAGScheduler:54 - Job 0 finished: count at SparkApp.java:11, took 0.792077 s Number of lines 103 2018-05-29 12:59:23 INFO AbstractConnector:318 - Stopped Spark@63405b0d{HTTP/1.1,[http/1.1]}{0.0.0.0:4040} 2018-05-29 12:59:23 INFO SparkUI:54 - Stopped Spark web UI at http://kuldeep.local:4040 2018-05-29 12:59:23 INFO MapOutputTrackerMasterEndpoint:54 - MapOutputTrackerMasterEndpoint stopped! 2018-05-29 12:59:23 INFO BlockManager:54 - BlockManager stopped 2018-05-29 12:59:23 INFO BlockManagerMaster:54 - BlockManagerMaster stopped Congratulations! You are done with your first Spark application on a Windows environment.\nThis article was originally published on Dzone\n","id":43,"tag":"spark ; troubleshooting ; windows ; apache ; java ; scala ; technology","title":"Apache Spark on Windows","type":"post","url":"https://thinkuldeep.com/post/apache_spark_on_windows/"},{"content":" IOT 101 A Primer on Internet of Things from Kuldeep Singh The IoT can be described as “Connecting the Things to internet”.\nA comprehensive IoT ecosystem consists of many different parts such as electronic circuitry, sensing and acting capability, embedded systems, edge computing, network protocols, communication networks, cloud computing, big data management and analytics, business rules, etc.
This maze of varied parts can be better classified into 4 broad categories:\nDevices IoT devices are capable of sensing the environment and then acting upon the instructions that they receive.\nThese devices consist of sensors and actuators connected to the machines and electronic environment. The electronic environment of a device may pre-process data sensed from the sensor and then send it to the IoT platform. This electronic environment can often also post-process the data or instructions received from the IoT platform before passing them to actuators for further action.\nConnectivity The second key part of the IoT system is connectivity. This involves connecting the devices to the IoT platform to send data sensed by the devices and receive instructions from the platform.\nThe electronic environment of the device has the capability to connect over the internet directly or via internet gateways. For connecting to the internet directly, there are multiple wired and wireless communication protocols, including some that use low-powered communication networks. The devices which connect via an internet gateway generally communicate over short-range radio frequency protocols or wired protocols. The internet gateway in turn communicates with the IoT platform over long-range radio frequency protocols.\nOne of the key elements \u0026amp; challenges of connectivity is security. An IoT system introduces a lot of data exchange interfaces between IoT devices, gateways, the IoT platform, integrated enterprise solutions, and visualization tools. Connections between these interfaces must safeguard the input and output data in transit.\nPlatform The IoT platform is the brain of an IoT system. It is responsible for efficiently receiving the data ingested from the devices, then analysing that data in real time and storing it for history building and for further processing in future.
It also provides services to remotely monitor, control and manage the devices.\nThe Platform routes the data to other integrated enterprise systems based on the business rules available in the system and provides the services to visualize the data on multiple connected tools such as web interfaces, mobile, and wearables. Finally, it aggregates the information to the context of the users so that the user gets the right information at the right time.\nBusiness Model The advent of IoT has the potential to redefine business models and open opportunities for new sources of revenue, improved margins, and higher customer satisfaction.\nThere are broadly 5 trends in business model innovation: product \u0026amp; service bundling, assured performance, pay as you go, process optimization, predictive and prescriptive maintenance.\nConclusion IoT systems are complex due to the heterogeneity of technology and business needs. We expect this complexity to continue to increase, driven by the continuous proliferation of more IoT platforms that will seek to provide technology- and business-use-case-specific value propositions.\nIrrespective of the nature of the IoT system, we believe that a holistic IoT system can be explained as a sum of 4 parts: Devices, Connectivity, Platform and Business Model.\nIt was originally published at Nagarro Blogs and LinkedIn \n","id":44,"tag":"guide ; iot ; cloud ; fundamentals ; nagarro ; introduction ; technology","title":"IOT 101: A primer on Internet of Things","type":"post","url":"https://thinkuldeep.com/post/iot_simplified/"},{"content":"This article describes the fundamentals of light-weight smart glasses such as Google Glass and Vuzix M100. It also explains the development practices, with pros and cons of their usage, limitations, and the future of these glasses in our lives.\nGoogle Glass Enterprise Edition Google Glass Enterprise Edition is a plain Android-based smart glass that you can wear; it performs operations just like a smartphone and, with the use of a small screen located in front of your right eye, can perform a decent range of tasks. Basically, they are nothing more than a pair of glasses powered by nothing less than Google's cutting-edge technology. The team behind Google Glass Geeks managed to put their hands on two pairs of Google Glass Enterprise Edition.\nThe Google Glass Enterprise Edition is built for enterprise industry and is best suited for hands-free use cases (for example, an Automobile Service Assistant).\nUsage The Google Glass is equipped with a trackpad on the right side near the display. There are 4 main gestures to operate the glass.\n Slide forward/backwards: Slide forward or backwards to slide through various cards (forward for right and backwards for left). Tap: Tap to select any item. Swipe down: Swipe down to return to the previous menu. Swipe down with 2 fingers: Swipe down with 2 fingers to return to home and turn off the display. Specs Glass OS EE (based on Android) Intel-based chipset (OS is 32 bit) 2 GB RAM Sensors - Ambient Light Sensor, Inertial Measurement Units, Barometer, Capacitive Head Sensor, Wink (Eye Gesture) Sensor, Blink (Look-at-Screen) Sensor, Hinge Effect Sensor, Assisted GPS + GLONASS, Magnetometer (Digital Compass) 640 x 360 pixel screen Prism projector (equivalent of a 25 in/64 cm screen from 8 ft/2.4 m away) Wi-Fi and Bluetooth connectivity touchpad/voice commands. Screen can be cast over ADB Price tag at $1500 US Battery - 1.5 hours for heavy use, 3.2 hours for medium usage, and 2.5 hours of charging time Hello World with Google Glass Applications for Google Glass are built using Android Studio.
Follow the steps below to build your first program for Google Glass.\nBefore getting into development, please understand the design principles of Google Glass here\n Open Android Studio Go to File \u0026gt; New Project Provide the project and package name, and go Next Check the Glass option from the check list, uncheck the rest of the options, and go Next Select Immersion Activity or Live Card Activity and click Next Edit the activity name and package and click Finish Android Studio will initialize a project with the mentioned activity. It generates the following MainActivity class\n..... /** * An {@link Activity} showing a tuggable \u0026#34;Hello World!\u0026#34; card. * \u0026lt;p\u0026gt; * The main content view is composed of a one-card {@link CardScrollView} that provides tugging * feedback to the user when swipe gestures are detected. * If your Glassware intends to intercept swipe gestures, you should set the content view directly * and use a {@link com.google.android.glass.touchpad.GestureDetector}. * @see \u0026lt;a href=\u0026#34;https://developers.google.com/glass/develop/gdk/touch\u0026#34;\u0026gt;GDK Developer Guide\u0026lt;/a\u0026gt; */ public class MainActivity extends Activity { /** * {@link CardScrollView} to use as the main content view. */ private CardScrollView mCardScroller; /** * \u0026#34;Hello World!\u0026#34; {@link View} generated by {@link #buildView()}. */ private View mView; @Override protected void onCreate(Bundle bundle) { super.onCreate(bundle); mView = buildView(); mCardScroller = new CardScrollView(this); mCardScroller.setAdapter(new CardScrollAdapter() { @Override public int getCount() { return 1; } @Override public Object getItem(int position) { return mView; } @Override public View getView(int position, View convertView, ViewGroup parent) { return mView; } @Override public int getPosition(Object item) { if (mView.equals(item)) { return 0; } return AdapterView.INVALID_POSITION; } }); // Handle the TAP event.
mCardScroller.setOnItemClickListener(new AdapterView.OnItemClickListener() { @Override public void onItemClick(AdapterView\u0026lt;?\u0026gt; parent, View view, int position, long id) { // Plays disallowed sound to indicate that TAP actions are not supported. AudioManager am = (AudioManager) getSystemService(Context.AUDIO_SERVICE); am.playSoundEffect(Sounds.DISALLOWED); }}); setContentView(mCardScroller); } @Override protected void onResume() { super.onResume(); mCardScroller.activate(); } @Override protected void onPause() { mCardScroller.deactivate(); super.onPause(); } /** * Builds a Glass styled \u0026#34;Hello World!\u0026#34; view using the {@link CardBuilder} class. */ private View buildView() { CardBuilder card = new CardBuilder(this, CardBuilder.Layout.TEXT); card.setText(R.string.hello_world); return card.getView(); } } Here is how it runs. Known issues/constraints Following is the list of issues known to us:\n Glass gets hot while doing resource-intensive tasks and becomes unusable. Battery life is not so great. Takes a long time to charge. Vuzix M100 The Vuzix M100 Smart Glass is an Android-based wearable device; it has a monocular display with a wide variety of enterprise applications on it. The Vuzix M100 offers hands-free use of the glass with most of the Android smartphone features.\nUsage The Vuzix M100 has 4 physical buttons which are used to control the input to the device.\n Forward button: The forward button is placed on the top front portion of the glass and slides through various cards/menus in the forward direction. Backward button: The backward button is placed on the top middle portion of the glass and slides through various cards/menus in the backward direction. Select button: The select button is placed on the top back portion of the glass and helps to select any item, or return to the previous menu if long-pressed. Power button: The power button is placed on the bottom back portion of the glass and helps to turn the device/display on and off.
Specs Android OS 4.0.4 OMAP4430 CPU at 1GHz 1 GB RAM Sensors and actuators - 3 DOF gesture engine (L/R, N/F), Proximity, Ambient light, 3-degree of freedom head tracking, 3 axis gyro, 3 axis accelerometer, 3 axis mag/integrated compass 400 x 240, 16:9, WQVGA, full color display Wi-Fi and Bluetooth connectivity physical button and voice command inputs. ADB connectivity Price tag at $999.99 Battery - 1 hour for heavy use, 3 hours for light usage, and 1 hour of charging time. Hello World with Vuzix M100 Install the Vuzix SDK: Open the Android SDK Manager Go to Tools \u0026gt; Manage Add-on Sites\u0026hellip; \u0026gt; User Defined Sites tab \u0026gt; New\u0026hellip; Enter https://www.vuzix.com/k79g75yXoS/addon.xml and OK Close it and the SDK Manager will refresh with the M100 SDK Add-on in the Android 4.0.3 (API15) and Extras sections. Select the add-on packages you want to install and click Install packages\u0026hellip; Accept the license for each package you want to install and click Install. The SDK Manager will now start downloading and installing the packages. After the SDK Manager finishes downloading and installing the new packages, you will need to restart Android Studio before the Vuzix SDK is available to use for your apps. Steps to use the SDK File \u0026gt; New Project \u0026gt; Select Android \u0026gt; Android Application Project and click Next. Set Minimum Required SDK to an API version no later than API 15, as this is the API version that runs on the M100. Set Target SDK to API 15: Android 4.0.3 (IceCreamSandwich). Select Vuzix M100 Add-On (Vuzix Corporation) (API 15) from the Compile With: drop-down box. Everything else can be configured as you like. After the project has been created you will have access to all the Vuzix API libraries.
Hello World here The following code can be used to create a voice controlled app to create and manipulate circles and squares (graphically).\nMainActivity.java \n// // A simple activity class that allows us to use Vuzix Voice Control // to create and manipulate circles and squares (graphically) // public class MainActivity extends Activity { private ColorDemoSpeechRecognizer mSR; private Canvas mCanvas; private Bitmap mBG; private ArrayList\u0026lt;ShapeInfo\u0026gt; mShapes; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); // set up speech recognizer mSR = new ColorDemoSpeechRecognizer(this); if (mSR == null) { Toast.makeText(this,\u0026#34;Unable to create Speech Recognizer\u0026#34;,Toast.LENGTH_SHORT).show(); return; } loadCustomGrammar(); Toast.makeText(this,\u0026#34;Speech Recognizer CREATED, turning on\u0026#34;,Toast.LENGTH_SHORT).show(); android.util.Log.i(\u0026#34;VUZIX-VoiceDemo\u0026#34;,\u0026#34;Turning on Speech Recognition\u0026#34;); mSR.on(); // Set up drawing canvas mBG = Bitmap.createBitmap(432,200,Bitmap.Config.ARGB_8888); mCanvas = new Canvas(mBG); } @Override public void onDestroy() { if (mSR != null) { mSR.destroy(); } super.onDestroy(); } @Override public void onPause() { android.util.Log.i(\u0026#34;VUZIX-VoiceDemo\u0026#34;,\u0026#34;onPause(), stopping speech recognition\u0026#34;); if (mSR != null) { mSR.off(); } super.onPause(); } @Override public void onResume() { super.onResume(); android.util.Log.i(\u0026#34;VUZIX-VoiceDemo\u0026#34;, \u0026#34;onResume(), restarting speech recognition\u0026#34;); if (mSR != null) { mSR.on(); } } @Override public boolean onCreateOptionsMenu(Menu menu) { getMenuInflater().inflate(R.menu.menu_main, menu); return true; } @Override public boolean onOptionsItemSelected(MenuItem item) { int id = item.getItemId(); //noinspection SimplifiableIfStatement if (id == R.id.action_settings) { return true; } return 
super.onOptionsItemSelected(item); } private void loadCustomGrammar(){ android.util.Log.i(\u0026#34;VUZIX-VoiceDemo\u0026#34;, \u0026#34;copyGrammar()\u0026#34;); final android.content.res.Resources resources = getResources(); final android.content.res.AssetManager assets = resources.getAssets(); try { final java.io.InputStream fi = assets.open(\u0026#34;drawing_custom.lcf\u0026#34;); byte[] buf = new byte[fi.available()]; while (fi.read(buf) \u0026gt; 0){ } fi.close(); mSR.addGrammar(buf,\u0026#34;Color Demo Grammar\u0026#34;); } catch(java.io.IOException ex) { android.util.Log.e(\u0026#34;VUZIX-VoiceDemo\u0026#34;,\u0026#34;\u0026#34;+ex); } android.util.Log.i(\u0026#34;VUZIX-VoiceDemo\u0026#34;, \u0026#34;Done writing grammar files.\\n\u0026#34;); } } ColorDemoSpeechRecognizer.java\n// our custom extension of the com.vuzix.speech.VoiceControl class. This class // will contain the callback (onRecognition()) which is used to handle spoken // commands detected. public class ColorDemoSpeechRecognizer extends VoiceControl { private final Context mContext; // keep the context so we can show a toast from the callback private int mSelectedItem = 0; public ColorDemoSpeechRecognizer(Context context) { super(context); mContext = context; } protected void onRecognition(String result) { android.util.Log.i(\u0026#34;VUZIX-VoiceDemo\u0026#34;,\u0026#34;onRecognition: \u0026#34;+result); Toast.makeText(mContext,\u0026#34;Speech Command Received: \u0026#34;+result,Toast.LENGTH_SHORT).show(); } } This program shows a toast when it recognizes speech.\nKnown issues/constraints Following is the list of known issues:\n Device is not well balanced. Display quality is not good enough. Battery life is very short.
Conclusion The smart glasses are a unique piece of technology with a lot of potential for the future.\nThey can be used in various domains where hands-free operation is the key requirement, and they can be used to improve efficiency and effectiveness.\nThough they suffer from quite a few issues due to the less mature technology available at this time, these are issues that will be overcome with time.\n","id":45,"tag":"xr ; ar ; google ; enterprise ; android ; vuzix ; smart-glass ; tutorial ; technology","title":"Exploring the Smart Glasses","type":"post","url":"https://thinkuldeep.com/post/exploring-google-glass/"},{"content":"Technological evolution causes disruption. It replaces traditional, less optimal, complex, lengthy, and costly technology, people, and processes with economical, innovative, optimal, and simpler alternatives. We have seen disruption by mobile phones, and up next is disruption from IoT, wearables, AR-VR, machine learning, AI, and other innovations. Software development methodology, tools, and people will also face disruption. I expect that the traditional developer will be replaced by multi-skilled or full-stack developers.\nIoT Development Challenges Heterogeneous connectivity IoT doesn\u0026rsquo;t just connect Things but also a variety of Internet-enabled systems, processes, and people. IoT solutions involve integrating hardware with software. IoT development requires knowledge of the entire connected environment.\nHuge ecosystem IoT solutions involve multiple technologies ranging from embedded to front end, wearables, and connected enterprise solutions.
IoT solution implementation needs multiple specialized skills at different levels, such as native developers (C, C++, embedded), gateway developers (Arduino, C, C++, Python, Node.js), network developers, cloud developers, web developers (.NET, Java, PHP, Python), mobile app developers (Android, iOS, Windows), IoT platform developers (Azure, AWS, Xively, Predix), AR-VR developers (Smart Glass, HoloLens, Unity), Big Data developers, enterprise solutions developers (ERP, CRM, SAP, Salesforce), business analysts, graphics designers, build engineers, architects, project managers, etc.\nDevelopment cost The cost of building IoT solutions is high due to the involvement of multiple people from multiple skillsets. Looking at the huge ecosystem, it is a difficult and costly affair to coordinate and maintain consistency among multiple people from different backgrounds and experience levels. With the traditional developer mindset, we would need at least 20 people to build an end-to-end IoT PoC, which is a huge cost. Global development is also difficult due to the unavailability of real environments, hardware, and extreme coordination needs. IoT solutions require stakeholders to be more connected to each other, which results in higher costs as more and more people are needed.\nSolving the Chaos The traditional developer mindset, which binds people to Java or .NET or an otherwise specific technology, will be discouraged in the near future.
People with multiple skills or full-stack knowledge—who work with multiple technologies, tools, and processes—will reduce the cost of IoT development, and thus, they will replace the traditional workers.\nIt was originally published at Dzone and LinkedIn \n","id":46,"tag":"transformation ; iot ; disruption ; multi-skill ; technology","title":"Another IoT Disruption: A Need of Multi-Skilled Devs","type":"post","url":"https://thinkuldeep.com/post/iot-disruption-multi-skilled-people/"},{"content":"Insects are the most diverse species of animals found on earth. While every species is required to maintain the balance of nature, excessive accumulation of any one type can have devastating effects on humans and the environment.\nInsect management relies on the early detection and accuracy of insect population monitoring techniques [sensors-12-15801]. Integrated data gathering about insect population together with the ecological factors like temperature, humidity, light etc., makes it possible to execute appropriate pest control at the right time in the right place.\nA well-known technique to perform pest control monitoring is based on the use of insect traps conveniently spread over the specified control area. Human operators typically perform periodic surveys of the traps disseminated through the field. This is a labour-, time- and cost-consuming activity. This paper proposes an affordable, autonomous monitoring system that uses image processing to detect the insect population at regular intervals and sends that data to a web server via an IoT protocol (MQTT in this case). A web dashboard visualises the insect density at different time instances, with a warning system if the density crosses a set threshold value.\nSolution Concept Due to the rapid development of digital technology, there is an opportunity for image processing technology to be used for the automation of insect detection.
This paper illustrates a system based on a low-cost, low-power imaging and processing device operated through a wireless sensor network that is able to automatically acquire images of the trapping area and process them to count the insect density. The data is then sent to a web server where it gets stored in a database. The web server hosts a web dashboard through which human operators can remotely monitor the insect traps. The insect monitoring system is developed using open source software on a Linux-based machine. The app runs on a small device like the Raspberry Pi 2 with the help of the Raspbian Jessie OS. The workflow is such that the application running on the Raspberry Pi 2 captures a few seconds' worth of video footage at regular intervals. The frames captured are then processed using the OpenCV image processing library to calculate the insect density during that time frame. Once the data is available, it is then sent to a web server using the IoT protocol MQTT (Message Queuing Telemetry Transport). The web server receives the data sent and stores it in the database. We have a web dashboard hosted on the web server. It retrieves the data from the database and visualises the insect density at different time instances, with a warning system if the density crosses a set threshold value.\nHere are the key components of the system: OpenCV Image Processing - OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision and image processing. We use the C/C++ interface to capture images from the webcam and process those images to calculate the insect count.\n Mosquitto MQTT Broker - Mosquitto is an open source message broker that implements the MQTT protocol for carrying out messages using the publisher/subscriber method.
We send out the JSON messages from our application to the web server.\n Paho MQTT client for Python - The Paho MQTT Python client library implements the MQTT protocol and provides an easy-to-use API which enables our Python script to connect to the MQTT broker and receive messages from the Raspberry Pi application.\n SQLite Database - SQLite provides a lightweight disk-based database that doesn’t require a separate server process and allows accessing the database using a nonstandard variant of the SQL query language. We use the SQLite database to store the data received by the Python script.\n Apache web server for the User Web Dashboard - We have used apache2 as our web server to host the web dashboard. The dashboard is written as a Python script which we execute as a CGI program in the Apache web server.\n Implementation Strategy Our webcam will be located inside the trap, fixed at the top so that it faces the bottom of the trap. It will be connected to the Raspberry Pi (located somewhere outside the trap) with a USB cable.\nThe C++ application deployed in the Raspberry Pi accesses the webcam to capture the images of the trap and process them using the OpenCV library to calculate the insect count. When the program is first initialised, it captures initial video frames of the empty insect trap to be used as a reference. After that, it captures the change in the scene at regular intervals. The image frames captured are then processed using the OpenCV functions. In our system, we use its background subtraction algorithm to generate the foreground masks that clearly show the insects trapped. The following figure shows the image captured by the webcam and the foreground mask generated. The foreground masks generated are then processed to calculate the number of insects trapped. A JSON string is created with the current timestamp and the insect count. This string is then sent to the web server by publishing it using the Mosquitto broker and its corresponding mosquitto_pub client.
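The round trip described here (a JSON string carrying a timestamp and insect count, published over MQTT, then parsed and stored on the server) can be sketched with the Python standard library. The field names date_time and density mirror the insect_density table described in this paper; the MQTT hop itself is elided, since on the wire it is just this string handed to mosquitto_pub or a paho-mqtt client:

```python
import json
import sqlite3
from datetime import datetime

def make_payload(count):
    """Build the JSON string sent by the Raspberry Pi application."""
    return json.dumps({
        "date_time": datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
        "density": count,
    })

def store_reading(conn, payload):
    """Parse a received JSON message and store it in the insect_density table."""
    reading = json.loads(payload)
    conn.execute(
        "INSERT INTO insect_density (date_time, density) VALUES (?, ?)",
        (reading["date_time"], reading["density"]),
    )
    conn.commit()

# In-memory database standing in for the server-side SQLite file
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE insect_density (date_time TEXT PRIMARY KEY, density INTEGER)"
)
store_reading(conn, make_payload(7))
```

The topic name and broker details are left out deliberately, as the paper does not fix them; only the payload shape and the table schema are taken from the text.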
A Python script executing on the web server receives this JSON message. It uses the Paho MQTT client to subscribe to the same topic as our C++ application. The JSON data received is parsed and stored in the SQLite database on the server. The SQLite database contains only a single table called insect_density. It contains two columns.\n date_time: It contains datetime data and stores the timestamp received in the JSON string. It also acts as the primary key. density: It contains integer data and stores the number of insects at the time instance stored in the date_time column. Finally, the data stored in the database is consumed by the web dashboard, which visualises the data in a line chart format. The dashboard is written in a Python script that is executed as a CGI program on the Apache server and can be accessed using any browser. The dashboard also shows a warning message if the insect density exceeds a set threshold limit.\nBenefits Monitoring of insect infestation relies on manpower; however, automatic monitoring has been advancing in order to minimize human efforts and errors. The aim of this paper is to establish a fully computerized system that overcomes traditional and classical methods of monitoring and is environmentally and user friendly. The main advantages of our proposal are:\n A more accurate and reliable system for monitoring insects\u0026rsquo; activity, as it mitigates human-made errors Reduces human effort, hence lowering time and cost Image analysis provides a realistic opportunity for the automation of insect pest detection with a high degree of accuracy, at least when tried under controlled environments Effective use of the pest surveillance and monitoring system, which may result in efficient timing of interventions The user interface and the software are friendly and easy to follow; they are understandable by normal users.
Higher scalability, being able to deploy in small monitoring areas (greenhouses) as well as in large plantation extensions Trap monitoring data may be available in real time through an Internet connection Constraints and Challenges Currently the Insect Monitoring system has the following limitations that can be worked upon in the coming future:\n Presently, the configuration for the server IP address, buffer message limit and the density threshold value have to be set manually. We plan to create a configuration page for the user to remotely configure the application running on the Raspberry Pi. We had planned to send the image of the trap along with the timestamp and the insect count in the JSON string via MQTT. We ran into problems while serialising the image and it became difficult to do so within the given time frame. Conclusion The Insect Monitoring System displays a lot of potential and can be extended to implement advanced features and functionalities. The advanced feature list includes, but is not limited to, the following:\n Higher scalability: The system can be scaled to a large number of devices over large areas. Stream Analytics: We can port the system over to the cloud and apply stream analytics and/or machine learning algorithms to get accurate predictions and patterns of insect activity according to ecological factors. Advanced Image Processing: In the future, other image processing techniques may be used to make the detection and extraction of insects more efficient and accurate. As of now, we are only processing the count of insects without identifying what kind of insect it is. Identification of different types of insects can be done. Show Live Feed: We can incorporate the option to show a live feed from the webcam. The web dashboard can contain an option which will enable the user to select a particular device and show the live feed of the insect trap from the webcam.
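As a rough illustration of the counting step described in the implementation strategy (a reference frame of the empty trap, background subtraction to get a foreground mask, then counting the connected blobs), here is a dependency-free sketch on tiny grayscale frames represented as lists of lists. It is an assumption-level stand-in: the actual system uses OpenCV's background subtraction and contour finding on real webcam frames.

```python
def foreground_mask(reference, frame, threshold=30):
    """Background subtraction: mark pixels that differ from the empty-trap reference."""
    return [
        [1 if abs(frame[y][x] - reference[y][x]) > threshold else 0
         for x in range(len(frame[0]))]
        for y in range(len(frame))
    ]

def count_blobs(mask):
    """Count 4-connected foreground regions (each region taken as one insect)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                count += 1
                stack = [(y, x)]  # flood-fill this blob so it is counted once
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy][cx] and not seen[cy][cx]:
                        seen[cy][cx] = True
                        stack.extend([(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)])
    return count

# Empty trap vs. a frame with two separate dark spots standing in for insects
ref = [[200] * 6 for _ in range(4)]
frame = [row[:] for row in ref]
frame[1][1] = frame[1][2] = 40    # blob 1
frame[3][5] = 50                  # blob 2
mask = foreground_mask(ref, frame)
print(count_blobs(mask))          # 2 blobs detected
```

The pixel values, frame size, and threshold here are invented for the illustration; only the subtract-then-count structure follows the paper.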
References http://www.pyimagesearch.com/2016/04/18/install-guide-raspberry-pi-3-raspbian-jessie-opencv-3/ http://opencv-srf.blogspot.in/2011/09/capturing-images-videos.html http://docs.opencv.org/3.1.0/d1/dc5/tutorial_background_subtraction.html http://raspberrywebserver.com/sql-databases/accessing-an-sqlite-database-with-python.html https://pypi.python.org/pypi/paho-mqtt/1.1 https://help.ubuntu.com/lts/serverguide/httpd.html http://raspberrywebserver.com/cgiscripting/rpi-temperature-logger/building-a-web-user-interface-for-the-temperature-monitor.html \u0026ldquo;Pest Detection and Extraction Using Image Processing Techniques” by Johnny L. Miranda, Bobby D. Gerardo, and Bartolome T. Tanguilig III. International Journal of Computer and Communication Engineering, Vol. 3, No. 3, May 2014 “Counting Insects Using Image Processing Approach” by Mu\u0026rsquo;ayad AlRawashda, Thabet AlJuba, Mohammad AlGazzawi, Suzan Sultan+ and Rami Arafeh. Palestine Polytechnic University “Monitoring Of Pest Insect Traps Using Image Sensors \u0026amp; Dspic” by C.Thulasi Priya, K.Praveen, A.Srividya. International Journal of Engineering Trends and Technology-Volume 4 Issue 9-September2013 “Monitoring Pest Insect Traps by Means of Low-Power Image Sensor Technologies” by Otoniel López, Miguel Martinez Rach, Hector Migallon, Manuel P. Malumbres, Alberto Bonastre and Juan J. Serrano. Miguel Hernandez University, Spain This concept was supported by my colleague Ricktam K at IoT Center of Excellence of Nagarro\n","id":47,"tag":"insect monitoring ; IOT ; raspberry pi ; concept ; c++ ; opencv ; python ; MQTT ; nagarro ; technology","title":"IoT based insect monitoring concept","type":"post","url":"https://thinkuldeep.com/post/iot-insect-monitoring/"},{"content":"The ‘Internet of Things’ (IoT) presents tremendous opportunity in all the industry verticals and business domains. 
Specifically, the ‘Industrial Internet of Things’ (IIoT) promises to increase the value proposition by enabling self-optimizing business processes. As discussed in the post “The Driving Forces Behind IoT”, there are several challenges to be addressed to realize the full IoT potential, but the following steps will help organizations systematically make their products participate in the IoT journey and stay in the IoT race. These steps will help modernize existing products or define a strategy to build futuristic products.\n5 Steps of the IoT race 1. Preparation Analyze the products and services for their strengths and weaknesses. Analyze the sensing and acting capabilities of the products; in other words, see what ambient information the product can sense, how it can be actuated, and how it can listen to instructions. Analyze the integrated, embedded services/components of the product and benchmark the product on usability, self-service capability, optimizations, minimal human interference, etc. Research what competing products are offering, and the customer experience they provide.\nDefine a clear IoT vision for your organization: address where you want to stand, what your targets are, and whom you want to compete with, and then define an IoT vision for the products and services aligned to your organization’s IoT vision.\nPlan the ROI in terms of projected financials, reductions in negative ROI (human training, involvement, etc.), and innovation. Consider the complete ecosystem for the products and services provided internally or externally by partners, suppliers and dealers. Plan the roadmap for achieving the IoT vision.\n2. Registration Once preparation is over, you have a clear vision and roadmap. Now it’s time to register for the IoT race by engaging each of the stakeholders (internal as well as external) in the IoT program. Declare your participation in the IoT race by letting your employees, customers, and partners know about the benefits derived from the new vision. 
Collect views from the stakeholders and adjust the IoT program such that it also meets their expectations.\n3. Warm-up During warm-up, you should be able to complete the technology exploration and feasibility studies. Define a common framework to generate consistent output and incorporate feedback. The framework should include processes for building various proofs of concept for the integrated product with potential sensory architectures, cloud platforms, data storage, communication protocols and networks. The framework should also have processes to plan the development of pilot projects and get feedback on them. If needed, new devices (such as mobile phones, smart glasses, wearables or BYOD) may be introduced here to enhance the customer experience.\n4. Participation Once you are done with warm-up, you know the potential approaches to be followed, and the devices to be brought into the game. Now you need to plan the step-by-step development of the product features with a highly agile and flexible methodology, keeping each deployment as small as possible so that you derive the benefits incrementally while incorporating feedback at the same time. The implementation process also needs to be optimized from time to time by building accelerators, monitors and tools which may help speed up the implementation, e.g., reusable components, frameworks, standards, guidelines, best practices and checklists.\n5. Stay ahead While delivering the product features, also plan for their scaling. Develop SDKs to implement extensions and plugins on top of the product features. Call on new and existing partners and service integrators to customize and enrich your product. Let the partner ecosystem exploit the product capabilities.\nHopefully, these five steps will jump-start your IoT journey and keep you in the race as a front runner. 
At every step, feedback is really important; every investment needs to be well validated against ROI, well rehearsed through a pilot implementation, and well exploited by partners. In the end it will be a win-win situation for all the stakeholders.\nIt was originally published at LinkedIn \n","id":48,"tag":"race ; IOT ; disruption ; product ; technology","title":"5 Steps of the IoT race","type":"post","url":"https://thinkuldeep.com/post/5-steps-of-the-iot-race/"},{"content":"The industry has been going through an evolution. Industry 4.0 is the fourth industrial revolution, where the key focus is digital transformation. Industry 4.0 creates what has been called a \u0026ldquo;smart factory\u0026rdquo;. Within the modular structured smart factories, cyber-physical systems monitor physical processes, create a virtual copy of the physical world and make decentralized decisions. Over the Internet of Things, cyber-physical systems communicate and cooperate with each other and with humans in real time, and via the Internet of Services, both internal and cross-organizational services are offered and used by participants of the value chain. The core of Industry 4.0 is technology, with digital trust and data analytics as its foundation, and a focus on people and culture to drive transformation. All this is achieved through the Industrial Internet of Things (IIoT), which falls under the ecosystem of IoT. But industry is complex in itself: there is a need to connect different kinds of heterogeneous systems with high availability and scalability, without compromising on the security aspects of the system. Hence there is a need for a reference architecture specially designed for the IIoT. 
This document is all about the reference architecture for IIoT.\nFoundation Principles and Concepts Agility With the ever-growing challenges and opportunities that face companies today, a company’s IT division is a competitive differentiator that supports business performance, product development, and operations. A leading benefit companies seek when creating an IIoT solution is the ability to efficiently quantify opportunities. These opportunities are derived from reliable sensor data, remote diagnostics, and remote command and control between users and devices. Companies that can effectively collect these metrics open the door to exploring different business hypotheses based on their IIoT data. For example, manufacturers can build predictive analytics solutions to measure, test, and tune the ideal maintenance cycle for their products over time. The IIoT lifecycle comprises multiple stages that are required to procure, manufacture, onboard, test, deploy, and manage large fleets of physical devices.\nInteroperability The ability of machines, devices, sensors, and people to connect and communicate with each other via the Internet of Things (IoT) or the Internet of People (IoP). In order for a company’s IoT strategy to be a competitive advantage, the IT organization relies on having a broad set of tools that promote interoperability throughout the IoT solution and among a heterogeneous mix of devices.\nInformation transparency The ability of information systems to create a virtual copy of the physical world by enriching digital plant models with sensor data. This requires the aggregation of raw sensor data into higher-value context information.\nSecurity Security is an essential quality of an IIoT system, and it is tightly related to specific security features which are often a basic prerequisite for enabling the Trust and Privacy qualities of a system. 
It is the ability of the system to enforce the intended confidentiality, integrity and service access policies and to detect and recover from failures in these security mechanisms.\nPerformance and Scalability This perspective addresses two quality properties that are closely related: performance and scalability. Compared to traditional information systems, both are even harder to cope with in a highly distributed scenario such as IIoT. This is the ability of the system to predictably execute within its mandated performance profile and to handle increased processing volumes in the future if required. It concerns any system with complex, unclear, or ambitious performance requirements; systems whose architecture includes elements whose performance is unknown; and systems where future expansion is likely to be significant.\nArchitecture Components Below is the high-level architecture of an IIoT system.\nProtocols (Device Connectivity) One of the issues encountered in the transition to the IIoT is the fact that different edge-of-network devices have historically used different protocols for sending and receiving data. While there are a number of different communication protocols currently in use, such as OPC-UA, the Message Queuing Telemetry Transport (MQTT) protocol is quickly emerging as the standard for IIoT, due to its lightweight overhead, publish/subscribe model, and bidirectional capabilities.\nService Orchestration Service orchestration happens at this level, where the system interacts with different business adaptors. The services are event-based in nature, following the principles of the Reactive Manifesto: responsive, resilient, elastic and message driven.\nRule Engine The Rules Engine makes it possible to build IIoT applications that gather, process, analyze and act on data generated by connected devices at global scale without having to manage any infrastructure. 
The Rules Engine evaluates inbound messages published into the IIoT system and transforms and delivers them to another device or a cloud service, based on business rules you define. A rule can apply to data from one or many devices, and it can take one or many actions in parallel.\nDevice Provisioning (Data Identity and State Store) Device provisioning uses the identity store to create identities for new devices in the scope of the system or to remove devices from the system. Devices can also be enabled or disabled. When they are disabled, they have no access to the system, but all access rules, keys, and metadata stay in place.\nSecurity Security is one of the fundamentals of the system. The data across every tier goes through proper security mechanisms.\nThis article was also supported by Ram Chauhan\n","id":49,"tag":"architecture ; IOT ; industrial ; Cloud ; smart factory ; Industry 4.0 ; enterprise ; technology","title":"Industrial IoT (IIOT) Reference Architecture","type":"post","url":"https://thinkuldeep.com/post/iiot-reference-architecture/"},{"content":"The rapid growth of the Internet of Things (IoT) and miniature wearable biosensors has generated new opportunities for personalized eHealth and mHealth services. We present a case study of an intelligent cap that can measure the amount of water in the bottle and monitor the opening and closing of the bottle cap. This paper presents a Proof of Concept implementation for such a connected smart bottle that sends measurements to the IoT platform.\nThe number of physical objects connected to the Internet, also known as the Internet of Things or IoT, is growing exponentially and is expected to reach 50 billion devices by 2020 [1]. New applications include smart homes, transportation, healthcare, and industrial automation. 
Smart objects in our environment can provide context awareness that is often missing during the monitoring of activities of daily living [2], [3].\nOne of the most important factors for health and wellbeing is proper hydration. In our busy lives, it is hard to remember to drink enough water, and most of the time we forget to drink enough whether we are at home, in the office or on the go. Around 75% of Americans are chronically dehydrated.\nWater makes up 60% of our body, 75% of our brain, and 83% of our blood. An intelligent hydration monitoring and management platform can provide automatic, accurate, and reliable monitoring of fluid consumption and provide configurable advice for optimum personalized hydration management. A smart water bottle can satisfy the needs of a variety of groups, including athletes who want to enhance performance, dieters who want to achieve their weight goals, as well as the elderly in group homes or living alone, who often suffer from dehydration. While most users who seek to improve their wellness have insufficient water intake, some medical applications require limiting water intake. Typical examples include kidney disease and congestive heart failure (CHF) patients who need to comply with recommended water intake protocols and limit the total amount of water taken during the day while still taking water regularly throughout the day. An integrated hydration management platform enables users to receive alerts and reminders of when to drink, set goals for how much to drink, view current and previous consumption levels, etc. The first intelligent water bottle on the market was the Hydra Coach. The bottle features a display that presents hydration information, but no wireless connectivity [4]. 
The new generation of bottles, such as HydrateMe and Trago, feature Bluetooth wireless interfaces and custom smartphone and smartwatch applications [5].\nA smart water bottle measures the amount of water a person is drinking through sensor technology, usually located in the cap of the bottle. The features of a smart water bottle can also include temperature sensors, freshness timers, drinking reminders, and daily hydration statistics. These additional features can help the drinker know when to fill up their bottle, how often they are drinking and how much more water they need to meet their daily goal.\nSolution Concept The smart bottle integrates an ultrasonic sensor, a push button, a NodeMCU and a LiPo battery, as presented in Fig. 1. The controller (NodeMCU) communicates with the cloud using the ESP8266-12E wireless interface. The controller processes data from the smart bottle and sends the processed information to the ThingSpeak cloud.\nProgramming the NodeMCU is the same as programming an Arduino. Now let’s discuss the implementation of the system in steps:\n The ultrasonic sensor and push button are integrated with the NodeMCU digital I/O pins. The ultrasonic sensor sends the distance of the water level from the cap of the bottle. Based on this distance, the controller calculates the percentage of water and sends it to the ThingSpeak cloud using an HTTP POST request. Similarly, the push button detects the pressed event and sends the event to ThingSpeak. Working Demo here Constraints and Challenges Currently the Smart Bottle system has the following limitations that can be worked upon in the future:\n The ultrasonic sensor is not safe from water, so the bottle can’t be tilted when closed. Conclusion The Smart Bottle System shows a lot of potential and can be extended to implement advanced features and functionalities. The advanced feature list includes, but is not limited to, the following:\n A waterproof ultrasonic sensor to overcome this constraint. 
The same can be implemented using an HX711 weight sensor with a load cell. Instead of using the push button, we can use a Hall-effect sensor to detect the opening of the bottle cap. References [1] A. Al-Fuqaha, M. Guizani, M. Mohammadi, M. Aledhari, and M. Ayyash, “Internet of Things: A Survey on Enabling Technologies, Protocols, and Applications,” IEEE Commun. Surv. Tutor., vol. 17, no. 4, pp. 2347–2376, Fourthquarter 2015. [2] C. Perera, A. Zaslavsky, P. Christen, and D. Georgakopoulos, “Context Aware Computing for The Internet of Things: A Survey,” IEEE Commun. Surv. Tutor., vol. 16, no. 1, pp. 414–454, First 2014. [3] L. Hood, J. C. Lovejoy, and N. D. Price, “Integrating big data and actionable health coaching to optimize wellness,” BMC Med., vol. 13, p. 4, 2015. [4] Emil Jovanov, Vindhya R Nallathimmareddygari, Jonathan E. Pryor, “SmartStuff: A Case Study of a Smart Water Bottle”, 978-1-4577-0220-4/16, 2016. [5] “The future of fitness: Custom Gatorade, smart water bottles, and a patch that analyzes your sweat | Macworld.” http://www.macworld.com/article/3044555/hardware/the-future-of-fitness-custom-gatorade-smart-waterbottles-and-a-patch-that-analyzes-your-sweat.html I worked on this concept with my colleague Ricktam K at IoT Center of Excellence of Nagarro\n","id":50,"tag":"smart bottle ; IOT ; smart cap ; concept ; enterprise ; Node MCU ; Ultrasonic ; thingspeak ; nagarro ; technology","title":"Smart bottle IOT concept","type":"post","url":"https://thinkuldeep.com/post/building-smart-bottle-concept/"},{"content":"On a usual day at the office, if an employee wishes to book a meeting room, he or she first has to open their Outlook account, check the calendar for availability and then send a booking invitation. 
If they directly go to a meeting room, they have no way of knowing whether it is booked without checking the calendar.\nSmart Meeting Room is a step forward in automating the availability check and booking procedure for a meeting room.\nThe main idea behind this app is to avoid the hassle of opening Outlook, checking for room availability and then booking the room the user is already in. The user just has to give a voice command to check the availability of the room and another one to book it. The application recognizes the user’s command and gives a speech output. In order to confirm the identity of the person, a camera is initiated, capturing the user’s image and recognizing it using the face detection API.\nTo fully automate a meeting room, an AC control module and a TV control module have been integrated into the app, which allows the user to control the room’s air conditioning unit and TV unit through simple voice commands.\nSolution Concept The application targets controlling the meeting rooms of the company using a simple natural interface. The solution is divided into four modules. Checking Room Availability - A user can directly go to a meeting room and ask for its availability using simple voice commands. The application recognizes the user’s command and gives a speech output. Room Booking - If the room is available, we initiate a camera, capture the user’s image and recognize it using the face detection API in order to confirm the identity of the person. Once the user is recognised, a meeting request is sent on behalf of the user and the room gets booked. Controlling the AC - The air conditioning unit of the room is controlled using simple voice commands. The application sends the user’s commands to a device attached to the AC, which then transmits the appropriate infrared signals. Controlling the TV - Controlling the television unit of the room using voice commands works much the same way as the AC. 
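The four modules above can be thought of as a simple command dispatcher: the recognized phrase is routed to a handler for availability, booking, AC control, or TV control. A minimal sketch of that routing (handler names, phrases, and replies are illustrative; the real app uses the Microsoft Speech Platform rather than plain string matching):

```python
# Hypothetical dispatcher illustrating how recognized voice commands
# could be routed to the four Smart Meeting Room modules.

def check_availability() -> str:
    return "The room is available."        # would query the room calendar via EWS

def book_room() -> str:
    return "Room booked on your behalf."   # would run face recognition first

def control_ac(action: str) -> str:
    return f"AC turned {action}."          # would send IR codes via BLE/Arduino

def control_tv(action: str) -> str:
    return f"TV turned {action}."          # same path as the AC module

def handle_command(phrase: str) -> str:
    """Map a recognized phrase to one of the four modules."""
    words = phrase.lower().split()
    lowered = phrase.lower()
    if "available" in lowered:
        return check_availability()
    if "book" in lowered:
        return book_room()
    if "ac" in words:
        return control_ac("on" if "on" in words else "off")
    if "tv" in words:
        return control_tv("on" if "on" in words else "off")
    return "Sorry, I did not understand that."
```

In the real application, the speech recognizer's confidence score would gate this dispatch, and the returned string would be spoken back via text-to-speech rather than printed.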
Benefits The Smart Meeting Room project eases the way in which an employee can interact with a meeting room.\n Using the Speech Recognition and Face Recognition APIs, it provides very rich and natural interaction between the user and the application. It provides a single interface through which a user can control all aspects of a meeting room. Whether it is checking for room availability and booking it, or controlling the room’s AC and TV, everything can be done using simple voice commands. It mitigates the need for finding and using multiple remote controls for the AC and the TV. A simple voice command does it all. It removes the hassle of manually checking the calendar on Outlook and then creating a booking. Design and Architecture The architecture of the Smart Meeting Room system was designed to provide a stable foundation, both for current functionality and for future requirements.\n Universal Windows Application: The Universal Windows Platform (UWP) provides a common app platform available on every device that runs Windows 10. The Universal Windows app written for the Smart Meeting Room runs on a Raspberry Pi 3 installed with the Windows IoT Core operating system. UWP provides a large set of APIs to work with, which makes it easy to integrate richer experiences with your devices, such as natural user interfaces, searching, online storage and cloud-based services.\n User Interface: A user interface created using XAML allows users to interact with the system directly. Media Control: The Media Control component in universal apps enables us to use the webcam attached to our system and show a live camera feed to capture the image. Bluetooth Low Energy (BLE): The power efficiency of Bluetooth with low energy functionality makes it perfect for devices that run for long periods on limited power sources. We use the Bluetooth GATT profile to connect to the Arduino board in order to send the commands for the AC and TV operations. 
Microsoft Speech Platform: The Microsoft Speech Platform provides a comprehensive Speech Platform Runtime for voice-enabled applications. It provides the ability to recognize spoken words (speech recognition) and to generate synthesized speech (text-to-speech or TTS) to enhance users\u0026rsquo; interaction.\n Microsoft Exchange Web Service: Microsoft’s Exchange Web Service allows us to connect to the Office 365 account, access the public folders and calendars, and retrieve the free/busy status and availability of the meeting rooms.\n Microsoft Face API: Project Oxford’s Face API for Windows universal apps allows for programmatic access to the cognitive service subscription. We use the API to detect faces in the captured picture, identify the faces and retrieve their stored information.\n Microsoft Office 365 Web API: The Office 365 Managed APIs provide a managed interface for developing applications that allow programmatic access to public folders and calendars.\n Microsoft Project Oxford Face Detection API: The Microsoft Cognitive Services Face API detects one or more human faces in an image, organizes people into groups according to visual similarity, and identifies previously tagged people in images.\n Arduino Application: The application to control the AC and the TV runs on a single-board microcontroller such as the Arduino Uno.\n Smart Meeting Room Web Application: A separate web application running on the Microsoft Azure platform is made for users to register themselves.\n Constraints and Challenges The Arduino application is written for a specific model of air conditioner and television. The Smart Meeting Room application will not work with just any AC or TV. We had to manually retrieve the IR commands to be sent using an IR receiver sensor. Every permutation of the AC operations was generated manually, which was very tedious work. While writing the Arduino application, we constantly had to keep in mind the size of the code. 
Due to limited memory, the number of Serial.Write() calls had to be kept in check, which made logging and debugging a challenge. We had to use the Exchange Web Service directly, writing SOAP requests and parsing SOAP responses manually, because the EWS Managed API generated a popup which cannot be handled on the Raspberry Pi. We implemented the speech interpretation at low confidence. This might create problems in the proper interpretation of commands due to different voice accents. We generated the Bluetooth GATT profile on our own, with its services and characteristics exposed; no particular standard was followed. For the IR emitter sensor to work properly, it had to be positioned in a direct line of sight of both the AC and the TV. This became a challenge depending on how and where the AC and the TV were installed with respect to each other. Conclusion The Smart Meeting Room application is a good experiment; it can be extended to implement more features in the future, such as\n Implementing meeting check-in functionality on a smart display Controlling the room conditions of a specific meeting room using a web-based dashboard Controlling the AC and TV of a specific meeting room remotely using a web dashboard Turning the air conditioner on/off by sensing the presence of a person References Microsoft Cognitive Services Face API Microsoft Azure Blob Storage Universal Windows Platform Speech Recognition https://msdn.microsoft.com/en-us/windows/uwp/input-and-devices/speech-interactions https://blogs.msdn.microsoft.com/cdndevs/2015/05/25/universal-windows-platform-speech-recognition-part-2/ Universal Windows Platform Bluetooth GATT https://developer.microsoft.com/en-us/windows/iot/samples/blegatt2 Microsoft Exchange Web Service Office 365 OAuth https://github.com/OfficeDev/O365-WebApp-OAuthController This concept was supported by my colleagues Ricktam K and Harsh Godha at IoT Center of Excellence of Nagarro\n","id":51,"tag":"smart meeting room ; IOT ; 
azure ; cloud ; concept ; enterprise ; Arduino ; Raspberry PI ; BLE ; Microsoft ; technology","title":"Building a smart meeting room concept","type":"post","url":"https://thinkuldeep.com/post/building-smart-meeting-room-concept/"},{"content":"Android Things, an Android-based embedded operating system, is the new “it” thing in the Internet of Things (IoT) space. Developed by Google under the codename “Brillo” (a.k.a. Project Brillo), it takes the usual Android development stack (Android Studio, the official SDK, and Google Play Services) and applies it to the IoT.\nGoogle formally took the veil off Brillo at its I/O 2015 conference. It is aimed at low-power and memory-constrained IoT devices, allowing developers to build a smart device using Android APIs and Google services. Google released the first developer preview version in December 2016 with an aim to roll out updates every 6-8 weeks.\nAndroid Things provides a turnkey solution to start building on development boards like the Intel Edison, Intel Joule, NXP Pico, NXP Argon i.MX6UL, and Raspberry Pi 3. At its core, it supports a System-on-Module (SoM) architecture, where a core computing module can be initially used with development boards and then easily scaled to large production runs with custom designs, while continuing to use the same Board Support Package (BSP) from Google.\nIt provides support for Bluetooth Low Energy and Wi-Fi. Like the OTA updates for Android phones, developers can push Google-provided OS updates and custom app updates using the same OTA infrastructure that Google uses for its own products and services. It takes advantage of regular best-in-class security updates by building on top of the Android OS. 
Support for Weave, Google’s IoT communications platform, is on the roadmap and will come in a later developer preview.\nAndroid Things makes developing connected embedded devices easy by providing the same Android development tools, best-in-class Android framework, and Google APIs that make developers successful on mobile. It extends the core Android framework with additional APIs provided by the Things Support Library.\nDifference between Android and Android Things Development Here are the major differences between core Android development and Android Things development.\n The Android Things platform is streamlined for single-application use. It doesn\u0026rsquo;t include the standard suite of system apps and content providers. The application uploaded by the developer is automatically launched on startup.\n Although Android Things supports graphical user interfaces using the same UI toolkit available to traditional Android applications, it doesn’t require a display. On devices where a graphical display is not present, activities are still a primary component of the Android Things app. All input events are delivered to the foreground activity.\n Android Things expects one application to expose a \u0026ldquo;home activity\u0026rdquo; in its manifest as the main entry point for the system to automatically launch on boot. This activity must contain an intent filter that includes both CATEGORY_DEFAULT and IOT_LAUNCHER.\n Android Things supports a subset of the Google APIs for Android. As a general rule, APIs that require user input or authentication credentials aren\u0026rsquo;t available to apps.\n Requesting permissions at runtime is not supported because embedded devices aren\u0026rsquo;t guaranteed to have a UI to accept the runtime dialog. 
All normal and dangerous permissions declared in the app\u0026rsquo;s manifest are granted at install time.\n Since there is no system-wide status bar in Android Things, notifications are not supported.\n Let’s start with Android Things Android Things provides turnkey hardware solutions for certified development boards. As of now, it provides support for the Intel Edison, Intel Joule, Raspberry Pi 3, and NXP Pico i.MX6UL.\nFlash the image Binary system image files are available on the official Android Things page. Download the image file and install it on the development board of your choice from the approved list of developer kits. Prerequisites and install instructions for each of the development boards are provided on the official website. Once the Android Things OS has been installed on the device, it can be accessed like any other Android device using the Android Debug Bridge (adb) tool.\n$ adb devices List of devices attached Edion61aa device AP0C52890C device Boot your device. When the boot process finishes, the result is shown below (if your device is attached to a display): If you see this screen, it means that Android Things is running successfully. The Android Things Launcher shows information about the board on the display, including the IP address. After flashing your board, it is strongly recommended to connect it to the internet. This allows your device to deliver crash reports and receive updates.\nTo connect your board to Wi-Fi using adb:\n Send an intent to the Wi-Fi service that includes the SSID and passcode of your local network:\n$ adb shell am startservice \\ -n com.google.wifisetup/.WifiSetupService \\ -a WifiSetupService.Connect \\ -e ssid \u0026lt;Network_SSID\u0026gt; \\ -e passphrase \u0026lt;Network_Passcode\u0026gt; Note: You can remove the passphrase argument if your network doesn\u0026rsquo;t require a passcode.\n Verify that the connection was successful through logcat:\n$ adb logcat -d | grep Wifi ... 
V WifiWatcher: Network state changed to CONNECTED V WifiWatcher: SSID changed: ... I WifiConfigurator: Successfully connected to ... Test that you can access a remote IP address:\n$ adb shell ping 8.8.8.8 Building First Android Things App Things apps use the same structure as those designed for phones and tablets. This similarity means you can modify your existing apps to also run on embedded things or create new apps based on what you already know about building apps for Android. Let’s start building our very first app for Android Things. Use a development board of your choice with Android Things OS image installed and the device booted up.\nPre-requisites Before building apps for Things, these requirements must be fulfilled:\n Download Android Studio Update the SDK tools to version 24 or higher Update the SDK with Android 7.0 (API 24) or higher Create or update the app project to target Android 7.0 (API level 24) or higher in order to access new APIs for Things Add the library Android Things devices expose APIs through support libraries that are not part of the Android SDK.\n Add the dependency artifact to the app-level build.gradle file:\ndependencies { ... provided \u0026#39;com.google.android.things:androidthings:0.2-devpreview\u0026#39; } Add the things shared library entry to the app\u0026rsquo;s manifest file:\n\u0026lt;application ...\u0026gt; \u0026lt;uses-library android:name=\u0026#34;com.google.android.things\u0026#34;/\u0026gt; ... \u0026lt;/application\u0026gt; Declare a home activity An application intending to run on an embedded device must declare an activity in its manifest as the main entry point after the device boots. 
Apply an intent filter containing the following attributes:\n\u0026lt;application android:label=\u0026#34;@string/app_name\u0026#34;\u0026gt; \u0026lt;uses-library android:name=\u0026#34;com.google.android.things\u0026#34;/\u0026gt; \u0026lt;activity android:name=\u0026#34;.MainActivity\u0026#34;\u0026gt; \u0026lt;!-- Launch activity as default from Android Studio --\u0026gt; \u0026lt;intent-filter\u0026gt; \u0026lt;action android:name=\u0026#34;android.intent.action.MAIN\u0026#34;/\u0026gt; \u0026lt;category android:name=\u0026#34;android.intent.category.LAUNCHER\u0026#34;/\u0026gt; \u0026lt;/intent-filter\u0026gt; \u0026lt;!-- Launch activity automatically on boot --\u0026gt; \u0026lt;intent-filter\u0026gt; \u0026lt;action android:name=\u0026#34;android.intent.action.MAIN\u0026#34;/\u0026gt; \u0026lt;category android:name=\u0026#34;android.intent.category.IOT_LAUNCHER\u0026#34;/\u0026gt; \u0026lt;category android:name=\u0026#34;android.intent.category.DEFAULT\u0026#34;/\u0026gt; \u0026lt;/intent-filter\u0026gt; \u0026lt;/activity\u0026gt; \u0026lt;/application\u0026gt; Connect the Hardware We will develop a push button application. A button sensor will be connected to the development board, which will generate an event every time it is pushed.\n Connect the GND pin of the button sensor to the GND of the device Connect the VCC pin of the button to the 3.3V pin of the device Connect the SIG pin of the button to any GPIO input pin. Interact with Peripherals The Android Things Peripheral I/O APIs provide a way to discover and communicate with General Purpose Input Output (GPIO) ports.\n The system service responsible for managing peripheral connections is PeripheralManagerService. Use this service to list the available ports for all known peripheral types. 
public class HomeActivity extends Activity { private static final String TAG = \u0026#34;HomeActivity\u0026#34;; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); PeripheralManagerService service = new PeripheralManagerService(); Log.d(TAG, \u0026#34;Available GPIO: \u0026#34; + service.getGpioList()); } } To receive events when a button connected to GPIO is pressed:\n Use PeripheralManagerService to open a connection with the GPIO port wired to the button. Configure the port as input with DIRECTION_IN. Configure which state transitions will generate callback events with setEdgeTriggerType(). Register a GpioCallback to receive edge trigger events. Return true within onGpioEdge() to continue receiving future edge trigger events. When the application no longer needs the GPIO connection, close the Gpio resource. public class HomeActivity extends Activity { private static final String TAG = HomeActivity.class.getSimpleName(); private static final String GPIO_PIN_NAME = \u0026#34;BCM23\u0026#34;; private Gpio mButtonGpio; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); PeripheralManagerService service = new PeripheralManagerService(); Log.d(TAG, \u0026#34;Available GPIO: \u0026#34; + service.getGpioList()); try { // Step 1. Create GPIO connection. mButtonGpio = service.openGpio(GPIO_PIN_NAME); // Step 2. Configure as an input. mButtonGpio.setDirection(Gpio.DIRECTION_IN); // Step 3. Enable edge trigger events. mButtonGpio.setEdgeTriggerType(Gpio.EDGE_FALLING); // Step 4. Register an event callback. mButtonGpio.registerGpioCallback(mCallback); } catch (IOException e) { Log.e(TAG, \u0026#34;Error on PeripheralIO API\u0026#34;, e); } } // Step 4. Register an event callback. private GpioCallback mCallback = new GpioCallback() { @Override public boolean onGpioEdge(Gpio gpio) { Log.i(TAG, \u0026#34;GPIO changed, button pressed\u0026#34;); // Step 5. Return true to keep callback active.
return true; } }; @Override protected void onDestroy() { super.onDestroy(); // Step 6. Close the resource if (mButtonGpio != null) { mButtonGpio.unregisterGpioCallback(mCallback); try { mButtonGpio.close(); } catch (IOException e) { Log.e(TAG, \u0026#34;Error on PeripheralIO API\u0026#34;, e); } } } } Deploy the app It’s time to deploy the app to the device. Connect to the device using adb. Deploy the app to the device by clicking the Run button in Android Studio and selecting the device. Press the button on the button sensor. The result can be seen in the Logcat tab in Android Studio. This app can be extended to include an LED. Connect the LED to an output GPIO pin. Whenever the button press event is received, call the PeripheralManagerService to toggle the state of the LED.\npublic class BlinkActivity extends Activity { private static final String TAG = \u0026#34;BlinkActivity\u0026#34;; private static final int INTERVAL_BETWEEN_BLINKS_MS = 1000; private static final String GPIO_PIN_NAME = ...; private Handler mHandler = new Handler(); private Gpio mLedGpio; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); // Step 1. Create GPIO connection. PeripheralManagerService service = new PeripheralManagerService(); try { mLedGpio = service.openGpio(GPIO_PIN_NAME); // Step 2. Configure as an output. mLedGpio.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW); // Step 4. Repeat using a handler. mHandler.post(mBlinkRunnable); } catch (IOException e) { Log.e(TAG, \u0026#34;Error on PeripheralIO API\u0026#34;, e); } } @Override protected void onDestroy() { super.onDestroy(); // Step 4. Remove handler events on close. mHandler.removeCallbacks(mBlinkRunnable); // Step 5. Close the resource.
if (mLedGpio != null) { try { mLedGpio.close(); } catch (IOException e) { Log.e(TAG, \u0026#34;Error on PeripheralIO API\u0026#34;, e); } } } private Runnable mBlinkRunnable = new Runnable() { @Override public void run() { // Exit if the GPIO is already closed if (mLedGpio == null) { return; } try { // Step 3. Toggle the LED state mLedGpio.setValue(!mLedGpio.getValue()); // Step 4. Schedule another event after delay. mHandler.postDelayed(mBlinkRunnable, INTERVAL_BETWEEN_BLINKS_MS); } catch (IOException e) { Log.e(TAG, \u0026#34;Error on PeripheralIO API\u0026#34;, e); } } }; } Conclusion This completes the hello-world app and our first try of the Raspberry Pi with Android Things. We can use standard Android concepts and still enjoy the IoT.\n One thing to remember is that, at the time of writing, Android Things is in the third iteration of its developer preview. This means that, because it\u0026rsquo;s an early release for testing and feedback, some features are currently unavailable or may be buggy as the platform is tested and built out. Currently, support for simple analog sensors is not included in the Android Things general-purpose input/output (GPIO) classes—though there is technical reasoning for this, and SPI and I2C can still be used. As this platform is still new, there are not many drivers for sensors or other hardware, so developers using the platform will need to either create their own drivers or make do with what is currently available or open-sourced by other developers in the Android Things community.
I explored this while working at IoT Center of Excellence of Nagarro\n","id":52,"tag":"androidthings ; IOT ; android ; os ; tutorial ; nagarro ; raspberry pi ; technology","title":"Exploring AndroidThings IoT","type":"post","url":"https://thinkuldeep.com/post/exploring-android-things/"},{"content":" Your house will remain clean for a longer duration if you strictly follow certain rules, such as putting things in the right place after use, clearing any mess just after making it, repairing/upgrading things on time, daily dusting, vacuum cleaning, etc. We may avoid a massive cleanup if we apply a series of small efforts daily.\n Let’s do the same while writing code – Your code will remain maintainable for a longer duration if you strictly follow certain rules, such as following the coding standards and guidelines, cleaning the code just after writing it, fixing/refactoring on time, and periodic design review and refactoring.
We may avoid a complete rewrite if we apply a series of small efforts daily.\n This article was originally published on My LinkedIn Profile\n","id":53,"tag":"clean-code ; practice ; maintainability ; technology","title":"Writing maintainable code is like house cleaning","type":"post","url":"https://thinkuldeep.com/post/clean-code/"},{"content":"Everybody in the software industry is talking about IoT in terms of billions of devices, their growth in coming years, changing business models, and adoption of IoT in the market. Connecting things has become so prevalent nowadays that every software company wants to expand in IoT by building IoT-centric products and services.\nThere are many industrial applications which connect to sensors and actuators, and can be controlled remotely. They have been around for a long time.\nWhy Is There so Much Hype About IoT? Yes, the connected things were already there, but with time, the technology used in connecting the things has been enhanced. IoT is about more than just connecting ordinary things and making them accessible remotely. “Connected Things” become the “Internet of Things” when they are connected to people and their processes. Connected things produce huge amounts of data which can be monitored and visualized remotely, but in the IoT era, this data is analyzed to generate useful insights, build patterns, predict situations, prescribe solutions, and instruct ordinary things to make them self-optimizing.\nThe IoT is a package of:\n The things with the capability to sense and act. Communication protocols to connect the things. A platform to collect and store the data from the things, then analyze the data and instruct the connected things to perform optimally. A network to connect all the things with the platform. An interactive user interface to visualize the data and monitor/control/track the things.
Supporting Forces The \u0026ldquo;Connected Things\u0026rdquo; of the past was converted to the \u0026ldquo;Internet of Things\u0026rdquo; due to the following supporting forces:\n Reliance on software: Software is increasingly used to make hardware better. Software plays a critical role in a lot of mission-critical applications. Many hardware vendors are building embedded software pieces which can be upgraded/enriched later, without really changing the hardware. Technology commoditization: Technology is reaching the hands of common people. We are now living with technology; our basic devices such as phones, watches, TVs, washing machines, shoes, etc. are getting smarter every day. Technology is becoming more affordable. Hardware is getting cheaper due to mass production and process enhancement. Connection everywhere: Network speed and coverage are increasing. The world is getting closer every day by connecting to the Internet. The Internet is getting cheaper and more accessible. Communication overhead is also getting reduced by using better protocols; thus data access speed is also increasing. More data and more science: Necessity is the mother of invention. The focus on data analytics and science is increasing, as more data is getting generated. Data storage capacity is also increasing with time. Rapid development: Companies are investing in frameworks and platforms to build, deploy, and produce software faster. Cloud services are getting cheaper and more easily accessible. Increase revenue by reducing cost: Enterprises are targeting increased revenue and reduced cost through predictive maintenance and the use of self-optimizing machines. Stay in the race: Based on the predictions of IoT growth by 2020 or so, everybody wants to stay in the race. Obstacles for IoT Growth Lack of standards: Current IoT development relies more on consensus-based standards.
Sets of companies come together and define their own standards, and this leads to interoperability issues. Security tradeoff: IoT devices are smaller, less capable, and may not perform well when high-security standards are required. Investment: A lot of companies are talking about IoT but very few are really investing. Most companies are playing it safe by waiting and watching. Huge ecosystem: IoT’s technical ecosystem is huge. It involves everything including hardware, network, protocol, software, data science, data storage, mobility, IoT platform, cloud infrastructure, maintenance and integration services. It is really difficult for companies to build such a huge ecosystem. Companies need to partner with specialists and share their customers and revenue. Integration with existing infrastructure: Integrating the IoT tech stack with traditional communication infrastructure is challenging. For example, connecting over HTTPS, SSL, SOAP, RDBMS, or SFTP to CRM, ERP, or ECM portals. Conclusion IoT is not new; it has been there for a long time. The “Connected Things/M2M\u0026rdquo; of the past is today’s IoT with a new set of communication protocols, improved hardware, better data collection, and analytics capability with nice visualization to monitor and interact with IoT devices. There are a lot of forces to keep IoT growing in the future, but there are some obstacles too, which need to be overcome by the IoT solution providers.\nIt was originally published at Dzone and LinkedIn \n","id":54,"tag":"guide ; iot ; fundamentals ; connected things ; forces ; technology","title":"The Driving Forces Behind IoT","type":"post","url":"https://thinkuldeep.com/post/the-driving-forces-iot/"},{"content":"As we understand, an “IoT Platform” is an essential building block of the IoT Ecosystem. A platform essentially decouples the business application from low-level details of the technology stack and required services.
Thus, it makes more sense to go for an off-the-shelf platform that provides all the relevant features and required flexibility, instead of developing the whole IoT stack from scratch. Selection of an IoT Platform is key to developing a scalable and robust IoT solution. Choice of IoT Platform is also important in reducing the solution’s time to market.\nChallenges Since IoT is not a single technology but a complex mix of varying technologies, its relevance to business verticals is not uniform. Choosing a platform that suits specific needs, or one that suits most generic IoT needs, is a cumbersome task. A simple web search on Google for IoT Platforms results in more than 200 IoT Platforms, ranging from market leaders like Microsoft Azure and Amazon Web Services to open-source platforms like SiteWhere \u0026amp; Kaa and established ones like PTC Thingworx and Bosch IoT Suite. These platforms not only vary in the features and services they provide, but also in the terminology they use for similar features.\nTo tackle these challenges of selecting an IoT Platform, we have devised a strategic approach that compares and analyzes multiple platforms by mapping them against a Common Framework.\nThe Common Framework The image below shows an IoT reference architecture and depicts a Common Framework for an IoT Platform. The IoT reference architecture shows three major components:\n IoT Nodes: Devices composed of sensors and/or actuators on one part, a communication module on another. Data sensed by these devices is either utilized in computation at the edge gateways or exchanged with the IoT Platform for further analysis. IoT Platform: Facilitates communication with devices, data-pipelining, device management, analytics and application development. Business and external systems: Such systems leverage IoT in business functions like SAP ERP etc. and visualization systems like Augmented Reality, Virtual Reality, Mobile, Web Browsers etc.
The Common Framework defines features of an IoT Platform under the following categories, each of which is further divided into factors:\n Connectivity: Connectivity enables easy and reliable integration of devices with the platform by providing standard protocols. Connectivity can be evaluated on the following factors:\n Data Ingestion - How reliable and scalable is the consumption of device data in the platform? Integration SDKs - How good is the support for multiple languages, and how does the platform give more control to application developers? Key Protocols - Does the platform support key protocols and ensure decoupling of the business applications from the underlying communication? Device Management: Provides an easy and reliable way for operators to manage or monitor a collection of devices. The following factors impact Device Management:\n Data Identity \u0026amp; Registry - The platform should have a mechanism to maintain a device registry to store the Device Identity and security credentials. Device State Storage - Easy asynchronous update of the device's latest state \u0026amp; capabilities in a device twin or shadow. Device Security - Authenticate connected devices using security credentials Remote Software Upgrade - Remote update to a collection of IoT Nodes or an individual IoT Node. Analytics \u0026amp; Visualization: Device data analysis to gain insights for business profitability, and visualizing it in a user-friendly way. The following analytics features are expected in the platform:\n Batch Analytics - An efficient way of processing high volumes of historical data Stream Processing - Analyze events and data on the fly. Predictive/ Preventive Maintenance - Deep machine learning, which leads to predictive or preventive analysis. Visualization - Displaying results of analysis (historical or real-time) on multiple visualization means Data Routing: Ease of defining data routing rules to execute complex use cases that consume device and sensor data.
Rule engine capabilities are factored as:\n GUI Based Rules - Ease for non-technical users to create or modify rules. Nested Conditions - Support for complex use cases Routing Control - Control over routing of data to other services. Rule Repository - Version and maintain multiple rules. Storage: Valuable data generated by the devices shall be securely persisted and backed up regularly to prevent loss from any disaster. The following are important decision factors for storage and backup:\n Data Store Types - Multiple types of data stores provide flexibility in designing applications Backup \u0026amp; Recovery - The platform should provide means to back up the data and perform recovery actions if needed Disaster Recovery - Disaster recovery requires that data redundancy is maintained at multiple sites Security: Security is the biggest concern in any IoT solution and needs to be taken care of at all layers. The following highlights the key security considerations in a platform:\n IT compliances - Deploying services in a hybrid cloud behind the organization firewall \u0026amp; within organization IT compliance. Identity \u0026amp; Access Mgmt - Ensure only the right individuals access the right resources at the right times for the right reasons Key Management - Encrypt keys and small secrets like passwords using keys stored in hardware security modules MARS: Maintainability, Availability, Reliability and Scalability are important aspects for cost optimization and to prevent business losses from downtime. Factors:\n Administration - Service and application logging and monitoring Elasticity - On-demand scale-up and scale-down. Enterprise Messaging - Prevent leakage or queue overflows when data flows through services. SLA - Prevent downtime losses Enterprise Integration, API Gateway, Application Development: The platform’s integration capabilities are critical to leverage IoT in existing business systems.
Support for application development and the concept of an API Gateway allow easy integration with external visualization systems like Google Glass, HoloLens, Mobile \u0026amp; Web.\n Conclusion The Common Framework makes it easier to evaluate multiple IoT Platforms. A simple process of evaluation starts with breaking down an IoT Platform’s features and then mapping these features to the Common Framework (categories and factors within categories). After doing a similar mapping for all platforms, an apples-to-apples comparison of features in different platforms is possible. One can remove and add the categories or factors as per relevance to their evaluation requirements. Read our next white paper on IoT Platform evaluation in detail to understand how to score the features and complete the evaluation exercise to find the best fit for your IoT Platform needs.\nContributors : This paper was also supported by Umang Garg and Manish Asija\n","id":55,"tag":"framework ; IOT ; platform ; comparison ; analysis ; cloud ; security ; technology","title":"Framework for choosing an IoT Platform","type":"post","url":"https://thinkuldeep.com/post/choosing_iot_platform/"},{"content":"SIGFOX provides a communication solution dedicated to the Internet of Things\nDedicated to the IoT means:\n Simplicity: No configuration, no pairing, no signaling Autonomy: Very low energy consumption, allowing years of autonomy on battery without maintenance Small messages: No large assets or multimedia, only small messages It is an LPWA (Low-Power Wide-Area) network, currently operating in 20 countries in Europe, the Americas, and Asia/Pacific. Communications over the SIGFOX network are bi-directional: uplink from the device \u0026amp; downlink to the device. Specifically,\nCharacteristics of SigFox Network SigFox sets up antennas on towers (like a cell phone company), and receives data transmissions from devices like parking sensors or water meters. These transmissions use frequencies that are unlicensed.
SigFox deploys Low-Power Wide Area Networks (LPWAN) that work in concert with hardware that manufacturers can integrate into their products. Any device with integrated SigFox hardware can connect to the internet – in regions where a SigFox network has been deployed – without any external hardware, like a Wi-Fi or ZigBee router. SigFox wireless systems send very small amounts of data (12 bytes) very slowly, at just 100 bits per second, so they can reach farther distances with fewer base stations. Up to 140 messages per object per day. Low battery consumption, as sensors can \u0026ldquo;go to sleep\u0026rdquo; when not transmitting data, consuming energy only when they need to. The focus on low-power, low-bandwidth communications also makes the SigFox network relatively easy to deploy. When adding a new SigFox node, reconfiguration of existing nodes is not required. Data transportation becomes very long range (distances up to 40km in open field) and communication with buried, underground equipment becomes possible, all this being achieved with high reliability and minimal power consumption. Furthermore, the narrow throughput transmission combined with sophisticated signal processing provides effective protection against interference. This also ensures that the integrity of the data transmitted is respected. This technology is a good fit for any application that needs to send small, infrequent bursts of data. Frequencies The SIGFOX network relies on Ultra-Narrow Band (UNB) modulation, and operates in unlicensed sub-GHz frequency bands. This protocol offers great resistance to jamming \u0026amp; standard interferers, as well as great capacity per receiving base station. SIGFOX complies with local regulations, adjusting central frequency, power output and duty cycles. SIGFOX Requirements A SIGFOX-ready module or transceiver (SIGFOX solution), e.g. SI446x A valid subscription.
Development kits \u0026amp; evaluation boards come with an included one-year subscription. Be in a covered area (in the SIGFOX network). SIGFOX Network SIGFOX’s network is designed around a hierarchical structure:\n UNB modems communicate with base stations, or cells, covering large areas of several hundred square kilometers. UNB (Ultra Narrow Band) technology uses free frequency radio bands (no license needed) to transmit data over a very narrow spectrum to and from connected objects. Base stations route messages to servers. Servers check data integrity and route the messages to your information system. Messaging Sending a message through SIGFOX –\n There is no signaling, nor negotiation between a device and a receiving station. The device decides when to send its message, picking a pseudo-random frequency. It\u0026rsquo;s up to the network to detect the incoming messages, as well as validating \u0026amp; de-duplicating them. The message is then available in the SIGFOX cloud, and forwarded to any 3rd-party cloud platform chosen by the user. Security Data security provided by SIGFOX –\n Every message is signed, with information specific to the device (incl. a unique private key) and to the message itself. This prevents spoofing, alteration or replay of genuine messages. Encryption \u0026amp; scrambling of the data are supported, with the choice of the most appropriate solution left to the consumer. Hardware solutions: SIGFOX makes its IP available at no cost to silicon and module vendors. This strategy has encouraged leading silicon and module vendors to integrate SIGFOX into their existing assets, giving solution enablers a wide choice of very competitively priced components. SIGFOX enables bi-directional as well as mono-directional communications and can co-exist with other communication protocols, both short and long range.
High energy consumption has been a major obstacle with regard to unleashing the potential of the Internet of Things, which is why one of the key elements of the SIGFOX strategy is to continue to push the boundaries of energy consumption lower and lower. SIGFOX Web Application The web application provided by the SIGFOX system allows users to access the application and perform their desired tasks for devices. There are a number of REST endpoints exposed by the server.\nThis research was supported by my colleagues at the IoT Center of Excellence of Nagarro, and by SIGFOX\n","id":56,"tag":"sigfox ; IOT ; platform ; introduction ; guide ; nagarro ; technology","title":"SIGFOX - An Introduction","type":"post","url":"https://thinkuldeep.com/post/sigfox-an-introduction/"},{"content":"We have recently migrated source code from a Hibernate ORM to a JDBC (Spring JDBC template) based implementation. Performance improved 10 times.\nUse case (Oracle 11g, JBoss 6, JDK 6, Hibernate, Spring 3): A tree structure in the database is populated from a deep file system (directory structure) having around 75000 nodes. Each node (directory) contains text files, which get parsed based on business rules and then populate the database (BRANCHs representing a node, tables referring to branch, tree_nodes). The database tables were well mapped to Hibernate JPA entities, and on saving a Branch object, its relevant entities were automatically saved. The whole operation was performed recursively by traversing the directory tree. Each table’s primary key is auto-generated from a separate sequence.\nAs per development-level performance testing, it was estimated that loading the whole initial tree would take 30 hours.
This was not acceptable, as UAT could not be started without this migration.\nWe tried batching and flushing in Hibernate and saw some improvement, but Spring JDBC is 10 times faster than the Hibernate-based approaches.\nWhile migrating to the Spring JDBC based implementation, the biggest issue was resolving the generated IDs referred to in other queries, since the IDs are generated by a sequence and Spring JDBC template’s ‘batchUpdate’ does not provide a way to fetch generated IDs. Hibernate was doing this automatically by updating the ID field of the entity object.\n We fetched IDs from the sequence in bulk, in a single query (select customer_id_seq.nextval from (select level from dual connect by LEVEL \u0026lt;=?size)). We set the IDs in the entity objects while iterating in the BatchPreparedStatementSetter.setValues method. This way the entity objects get populated the same way as is done in Hibernate, without any special iteration/processing. Once the entity objects are populated, all the dependent batches can be fired so that entity.getId returns the correct value.\nThis way we were able to migrate the Hibernate-based codebase to Spring JDBC with minimal changes in source code.\nThis article was originally published on My LinkedIn Profile\n","id":57,"tag":"orm ; java ; hibernate ; spring ; technology","title":"Choose ORM carefully!","type":"post","url":"https://thinkuldeep.com/post/choose-orm-carefully/"},{"content":"JavaScript has been regarded as a functional programming language by most developers. This book targets all JavaScript developers, to shift their thinking from functional to object-oriented programming.\nThe course requires basic knowledge of JavaScript syntax.
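Stepping back to the Hibernate-to-Spring-JDBC migration described above: the bulk-ID technique can be sketched in plain Java. This is a minimal sketch under stated assumptions; the `Branch` class, `bulkIdQuery`, and `assignIds` names are illustrative and not from the original codebase, the `DataSource`/`JdbcTemplate` wiring is omitted, and the sequence values are inlined here rather than bound as a parameter.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the bulk-ID technique: fetch a block of sequence
// values in one query, assign them to the entities, then fire the batches.
public class BulkIdSketch {

    // Minimal stand-in for the persisted entity.
    static class Branch {
        Long id;
        String name;
        Branch(String name) { this.name = name; }
    }

    // Builds the single Oracle query quoted in the article that returns
    // `size` sequence values (the real code used a bind parameter for size).
    static String bulkIdQuery(int size) {
        return "select customer_id_seq.nextval from "
             + "(select level from dual connect by LEVEL <= " + size + ")";
    }

    // In the real code the IDs would come back from something like
    // jdbcTemplate.queryForList(bulkIdQuery(n), Long.class); here we just
    // assign a pre-fetched block of IDs, which is what the
    // BatchPreparedStatementSetter.setValues iteration effectively did.
    static void assignIds(List<Branch> branches, List<Long> ids) {
        for (int i = 0; i < branches.size(); i++) {
            branches.get(i).id = ids.get(i);
        }
    }

    public static void main(String[] args) {
        List<Branch> branches = Arrays.asList(new Branch("root"), new Branch("child"));
        List<Long> ids = Arrays.asList(101L, 102L); // pretend these came from the sequence
        assignIds(branches, ids);
        // Dependent batches can now safely reference branch.id.
        System.out.println(branches.get(0).id + "," + branches.get(1).id); // prints 101,102
    }
}
```

Once every entity carries its ID, the dependent insert batches can be fired in order, exactly as the article describes, without any per-row round trip to the sequence.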
If you already know some JavaScript, you will find a lot of eye-openers as you go along and learn what more JavaScript can do for you.\nThis course starts with the basics of the JavaScript programming language and object-oriented programming (OOP) concepts, and how to achieve the same using JavaScript. Additional JavaScript concepts like AMD, JSON, and popular frameworks and libraries are also included in the document.\nThis course covers the JavaScript OOP concepts in comparison with the well-known object-oriented programming language Java. You will have comparative code in Java and JavaScript, and the use of the famous IDE Eclipse.\nThe usage of JavaScript has changed a lot since its invention. Earlier it was used only as the language of the web browser for client-side scripting, but now it is also being used for server-side scripting, desktop applications, mobile apps, gaming, graphics, charting-reporting, etc. The following diagram depicts its usages: Programming in JavaScript Let’s recall the JavaScript language. This section covers the very basics of JavaScript, but it may be an eye-opener for some developers, so try the examples given below and learn the flexibility of the JavaScript programming language. I hope you know how to run the examples given below; just add them in a \u0026lt;script\u0026gt; tag on an HTML page.\nGeneral Statements Following are the different types of statements in the JavaScript language.\n// Single Line Comment /** * * Multi-Line * Comment */ //Declare var variable1; //Declare and define var variable1 = 10; //Assign variable1 = \u0026#34;New value\u0026#34;; //Check type of a variable alert (typeof variable1); // string - dynamic type language. console.log (\u0026#34;variable1=\u0026#34; + variable1); //log to console. I hope you know ‘console’, if not //press F12 in Chrome or IE and see the tab ‘Console’.
Operators Equality Checking ==\tequal to ===\tequal value and equal type !=\tnot equal !==\tnot equal value or not equal type \u0026gt;\tgreater than \u0026lt;\tless than \u0026gt;=\tgreater than or equal to \u0026lt;=\tless than or equal to Guard/Default Operator \u0026amp;\u0026amp; - and || - or The logical operators are and, or, and not. The \u0026amp;\u0026amp; and || operators accept two operands and provide their logical result. \u0026amp;\u0026amp; and || are short-circuit operators. If the result is guaranteed after the evaluation of the first operand, it skips the second operand.\nTechnically, the exact return value of these two operators is also equal to the final operand that is evaluated. This is why the \u0026amp;\u0026amp; operator and the || operator are also known as the guard operator and the default operator respectively.\nArray and Map See the beauty of JavaScript in the flexible use of arrays.\n//Array var arrayVar = []; arrayVar [0] = \u0026#34;Text\u0026#34;; arrayVar [1] = 10; //Run time memory allocation. arrayVar [arrayVar.length] = \u0026#34;Next\u0026#34;; console.log (arrayVar.length); // 3 console.log (arrayVar [0]); // Text //Map Usage of same Array. arrayVar [\u0026#34;Name\u0026#34;] = \u0026#34;Kuldeep\u0026#34;; arrayVar [\u0026#34;Group\u0026#34;] = \u0026#34;JSAG\u0026#34;; console.log (arrayVar [\u0026#34;Name\u0026#34;]); // Kuldeep //You can also access a String key with the DOT operator. console.log (arrayVar.Name); // Kuldeep //You can also access an index the map way. console.log (arrayVar [\u0026#34;0\u0026#34;]); // Text console.log (arrayVar.length); //3 – Please note the length of the array increases only for numeric indexes arrayVar [\u0026#34;3\u0026#34;] = \u0026#34;JSAG\u0026#34;; console.log (arrayVar.length) //4 – Numeric values in strings are also treated as numeric indexes.
Auto cast arrayVar [500] = \u0026#34;JSAG\u0026#34;; console.log (arrayVar.length) //501 Object Literals Object literals can be defined using JSON syntax.\nvar person1 = { name: \u0026#34;My Name\u0026#34;, age: 30, cars: [1, 3], addresses: [{ street1: \u0026#34;\u0026#34;, pin: \u0026#34;\u0026#34; }, { street1: \u0026#34;\u0026#34;, pin : \u0026#34;\u0026#34; } ] }; console.log (person1.name); console.log (person1.addresses [0].street1); Control Statements If-else-if chain Same as any other language.\nvar value; var message = \u0026#34;\u0026#34;; if (value) { message = \u0026#34;Some valid value\u0026#34;; if (value == \u0026#34;O\u0026#34;) { message = \u0026#34;Old Value\u0026#34;; } else if (value == \u0026#34;N\u0026#34;) { message = \u0026#34;New Value\u0026#34;; } else { message = \u0026#34;Other Value\u0026#34;; } } else if (typeof (value) == \u0026#34;undefined\u0026#34;) { message = \u0026#34;Undefined Value\u0026#34;; } else { message = value; } console.log (\u0026#34;Message : \u0026#34; + message); Switch Switch accepts both numeric and string in same block.\nvar value = \u0026#34;O\u0026#34;; var message = \u0026#34;\u0026#34;; switch (value) { case 1: message = \u0026#34;One\u0026#34;; break; case \u0026#34;O\u0026#34;: message = \u0026#34;Old Value\u0026#34;; break; case \u0026#34;N\u0026#34;: message = \u0026#34;New Value\u0026#34;; break; case 2: message = \u0026#34;Two\u0026#34;; break; default: message = \u0026#34;Unknown\u0026#34;; } console.log (\u0026#34;Message : \u0026#34; + message); Loop The while loop, continue, break statements same as Java.\n//The while loop while (value \u0026gt; 5) { value --; if (value == 100) { console.log (\u0026#34;Break Condition\u0026#34;); break; } else if (value \u0026gt; 10) { console.log (\u0026#34;Value more than 10\u0026#34;); continue; } console.log (\u0026#34;Value : \u0026#34; + value); } The for loop //For loop on array for (var index = 0; index \u0026lt; arrayVar.length; index++) { console.log 
(\u0026#34;For Index - \u0026#34;, index, arrayVar [index]); // Note that it will only print the array with index numeric indexes (because of ++operation). } //For each loop (similar to Java) for (var key in arrayVar) { console.log (\u0026#34;For Key - \u0026#34;, key, arrayVar [key]); // Note that it will print all the values of array/map w.r.t keys. } The do-while loop // Do while loop. do { console.log (\u0026#34;Do while\u0026#34;); if (value) { value = false; } } while (value) Function Similar to the other programming language a function is a set of statements to perform some functionality.\nDefine The functions are defined as follows:\n// Declarative way /** * Perform the given action * * @param {String} actionName - Name of the action * @param {Object} payload - payload for the action to be performed. * @return {boolean} result of action */ function performAction (actionName, payload) { console.log (\u0026#34;perform action:” actionName,” payload:” payload); return true; } Since JS is dynamic language, we don’t need to specify the type of parameters and type of return value.\nInvoke We can invoke a function as follows\nvar returnValue = performAction (\u0026#34;delete\u0026#34;, \u0026#34;userData\u0026#34;); console.log (returnValue); The ‘arguments’ variable One can access/pass arguments in a function even if they are not defined in the original function. 
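Before looking at the `arguments` object in detail, here is a minimal sketch of how forgiving JavaScript call sites are (the `greet` function is an illustrative name, not part of the examples above); it also shows the default (`||`) operator at work:

```javascript
// Illustrative helper, not part of the article's examples.
function greet(name, punctuation) {
  // A missing parameter simply arrives as undefined; "||" supplies a default.
  return "Hello " + name + (punctuation || "!");
}

var withDefault = greet("Kuldeep");          // punctuation is undefined -> "!" is used
var withExtra = greet("Kuldeep", "?", 42);   // the extra 42 is silently ignored
console.log(withDefault); // Hello Kuldeep!
console.log(withExtra);   // Hello Kuldeep?
```

No error is raised in either direction: too few arguments yield `undefined` parameters, too many are ignored.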
The `arguments` variable is an array-like object; you can perform most array operations on it.

```javascript
/**
 * Perform an action
 *
 * @return {boolean} result of the action
 */
function performAction() {
  console.log(arguments.length);
  console.log("perform action:", arguments[0], "payload:", arguments[1]);
  return true;
}
var returnValue = performAction("delete", "userData");
console.log(returnValue);
```

#### Function as Object

We can assign a function definition to a variable and invoke the function through it.

```javascript
/**
 * Perform an action
 *
 * @return {boolean} result of the action
 */
var myFunction = function performAction() {
  console.log(arguments.length);
  console.log("perform action:", arguments[0], "payload:", arguments[1]);
  return true;
};
console.log(typeof myFunction); // function
var returnValue = myFunction("delete", "userData");
```

#### Function inside Function

In JavaScript, you can define functions inside a function.

```javascript
/**
 * Perform an action
 *
 * @return {boolean} result of the action
 */
var myFunction = function performAction() {
  console.log(arguments.length);
  function doInternalAction(actionName, payload) {
    console.log("perform action:", actionName, "payload:", payload);
    return true;
  }
  return doInternalAction(arguments[0], arguments[1]);
};
```

#### Exception Handling

JavaScript supports exception handling similar to other languages.

```javascript
var obj;
try {
  console.log("Value = " + obj.check);
} catch (e) {
  console.log(e.toString());
  console.log("Error occurred of type : " + e.name);
  console.log("Error Details: " + e.message);
}
```

Output:

```
TypeError: Cannot read property 'check' of undefined
Error occurred of type: TypeError
Error Details: Cannot read property 'check' of undefined
```

The Error object has a name and a message. You can also throw your own errors with a name and a message.

```javascript
function check(obj) {
  if (!obj.hasOwnProperty("check")) {
    throw { name: "MyCheck", message: "Check property does not exist" };
    // Also try... throw new Error("message");
  }
}
var obj = {};
try {
  check(obj);
} catch (e) {
  console.log(e.toString());
  console.log("Error occurred of type: " + e.name);
  console.log("Error Details: " + e.message);
}
```

Output:

```
[object Object]
Error occurred of type: MyCheck
Error Details: Check property does not exist
```

#### Context: this

The context of a function is accessed via the `this` keyword. It is a reference to the object through which the function is invoked. If we call a function directly, `this` refers to the window object.

```javascript
function log(message) {
  console.log(this, message);
}
log("hi");

var Logger = {};
Logger.info = function info(message) {
  console.log(this, message);
};
Logger.info("hi");
```

Output:

```
Window {top: Window, window: Window, location: Location, external: Object, chrome: Object…} "hi"
Object {info: function} "hi"
```

Invoking functions with context - you can change the context of a function at invocation time using the `call` and `apply` syntax.

```javascript
// [function object].call([context], arg1, arg2, arg3, …);
// [function object].apply([context], [argument array]);
function info(message) {
  console.log(this, "INFO: " + this.name + ": " + this.message + " input: " + message);
}
var iObj = {
  name: "My Name",
  message: "This is a message"
};
info.call(iObj, "hi");
info.apply(iObj, ["hi"]);
```

Output:

```
Object {name: "My Name", message: "This is a message"} "INFO: My Name: This is a message input: hi"
Object {name: "My Name", message: "This is a message"} "INFO: My Name: This is a message input: hi"
```

#### Scope

The scope of a variable is limited to its function definition. The variable remains alive as long as the function exists.

```javascript
function script_js() {
  var info = function info(message) {
    console.log("INFO: " + message);
  };
  var message = "This is a message";
}
info(message); // Uncaught ReferenceError: neither info nor message exists outside script_js
```

If a variable is not defined with the `var` keyword, it becomes a global variable attached to the window context.

```javascript
function script_js() {
  info = function info(message) {
    console.log("INFO: " + message);
  };
  message = "This is a message";
}
script_js();
info(message); // INFO: This is a message
```

But to access such global variables, the scoping method must be called once so that they get initialized. We can limit a variable's scope to a single js file, or to a block, by defining it inside a function.

Such scoping functions need to be called immediately after their definition. Instead of calling the scoping function explicitly, we can invoke it along with its definition; this is called an IIFE (Immediately Invoked Function Expression).

```javascript
(function script_js() {
  info = function info(message) {
    console.log("INFO: " + message);
  };
  message = "This is a message";
})();
info(message); // INFO: This is a message
```

An IIFE can be used to limit the scope of variables to a file, a block, etc., and whatever needs to be exposed outside that scope can be assigned to global variables.

#### Reflection

Reflection is examining the structure of a program and its data. The language of the program is called the programming language (PL), and the language in which the examination is done is called the meta-programming language (MPL). PL and MPL can be the same language. For example:

```javascript
function getCarData() {
  var myCar = {
    name: "Toyota",
    model: "Corolla",
    power_hp: 1800,
    getColor: function () {
      // ...
    }
  };
  alert(myCar.hasOwnProperty('constructor'));           // false
  alert(myCar.hasOwnProperty('getColor'));              // true
  alert(myCar.hasOwnProperty('megaFooBarNonExisting')); // false
  if (myCar.hasOwnProperty('power_hp')) {
    alert(typeof (myCar.power_hp)); // number
  }
}
getCarData();
```

Eval - the eval() method evaluates JavaScript code represented as a string. For example:

```javascript
var x = 10;
var y = 20;
var a = eval("x * y");
var b = eval("2 + 3");
var c = eval("x + 11");
alert('a=' + a); // a=200
alert('b=' + b); // b=5
alert('c=' + c); // c=21
```

#### Timers and Intervals

With JavaScript we can execute code at specified time intervals; these are called timing events, and they are very easy to use. The two key methods are:

- setInterval() - waits a specified number of milliseconds, then executes the specified function, and keeps executing it once per interval.

```javascript
// window.setInterval("javascript function", milliseconds);
// Alert "Hello" every 2 seconds
setInterval(function () { alert("Hello"); }, 2000);
```

- setTimeout() - executes a function once, after waiting a specified number of milliseconds.

```javascript
// window.setTimeout("javascript function", milliseconds);
```

#### Object Oriented Programming in JavaScript

As the HTML5 standards mature, JavaScript has become the most popular programming language of the web on the client side, and it is rapidly gaining ground on the server side as well. Despite this popularity, a major concern with JavaScript is code quality: as JavaScript code bases grow larger and more complex, the object-oriented capabilities of the language become increasingly crucial.
Although JavaScript's object-oriented capabilities are not as powerful as those of statically typed server-side languages like Java, its current object-oriented model can be used to write better code.

#### The Class and Object Construction

A class consists of data and operations on that data, along with construction and initialization mechanisms. Let's discuss how a class can be defined in JavaScript. There is no special keyword in JavaScript to define a class; a class is a set of the following:

Constructor - a constructor is defined as a normal function.

```javascript
/**
 * This is the constructor.
 */
function MyClass() {
}
```

Let's make it better by preventing it from being called as a plain function.

```javascript
/**
 * This is the constructor.
 */
function MyClass() {
  if (!(this instanceof MyClass)) {
    throw new Error("Constructor can't be called as function");
  }
}
MyClass(); // Uncaught Error: Constructor can't be called as function
```

Every object has a prototype associated with it. For the above class, the prototype is:

```javascript
MyClass.prototype
{
  constructor: MyClass() {
    if (!(this instanceof MyClass)) {
      throw new Error("Constructor can't be called as function");
    }
  },
  __proto__: Object {...}
}
```

Instance Fields - you can define instance fields by attaching variables to `this` or to the class's prototype, inside the constructor or after creating the object. The class's prototype can also be defined using JSON syntax.

```javascript
/**
 * This is the constructor.
 * @constructor
 */
function MyClass() {
  if (!(this instanceof MyClass)) {
    throw new Error("Constructor can't be called as function");
  }
  this.instanceField1 = "instanceField1";
  this.instanceField2 = "instanceField2";
}
MyClass.prototype.name = "test";
```

Let's initialize the fields through the constructor.

```javascript
/**
 * This is the constructor.
 * @param {String} arg1
 * @param {int} arg2
 * @constructor
 */
function MyClass(arg1, arg2) {
  if (!(this instanceof MyClass)) {
    throw new Error("Constructor can't be called as function");
  }
  this.instanceField1 = arg1;
  this.instanceField2 = arg2;
}
```

Instance Methods - we define instance methods the same way.

```javascript
/**
 * This is the constructor.
 * @param {String} arg1
 * @param {int} arg2
 * @constructor
 */
function MyClass(arg1, arg2) {
  if (!(this instanceof MyClass)) {
    throw new Error("Constructor can't be called as function");
  }
  this.instanceField1 = arg1;
  this.instanceField2 = arg2;

  /**
   * Print the fields
   */
  this.printFields = function printFields() {
    console.log("instanceField1: " + this.instanceField1 + " instanceField2: " + this.instanceField2);
  };
}
MyClass.prototype.printName = function printName() {
  console.log(this.name);
};
```

Instance Creation - an instance is created using the `new` keyword.

```javascript
var myInstance = new MyClass("Kuldeep", 31);
console.log(myInstance.instanceField1);
console.log(myInstance.instanceField2);
myInstance.printName();
myInstance.printFields();

var myInstance1 = new MyClass("Sanjay", 30);
console.log(myInstance1.instanceField1);
console.log(myInstance1.instanceField2);
myInstance1.printFields();
myInstance1.printName();
```

Output:

```
Kuldeep
31
test
instanceField1: Kuldeep instanceField2: 31
Sanjay
30
instanceField1: Sanjay instanceField2: 30
test
```

You can also add/update/delete instance fields through an instance variable; any change made through an instance variable is only effective in that instance.

```javascript
myInstance.instanceField1 = "Kuldeep Singh";
myInstance1.instanceField3 = "Rajasthan";
myInstance1.name = "New test";
MyClass.prototype.name = 100;
myInstance.printFields = function () {
  console.log("1: " + this.instanceField1 + " 2: " + this.instanceField2 + " 3: " + this.instanceField3);
};
myInstance.printFields();
myInstance1.printFields();
myInstance.printFields.call(myInstance1);
myInstance1.printName();
```

Output:

```
1: Kuldeep Singh 2: 31 3: undefined
instanceField1: Sanjay instanceField2: 30
1: Sanjay 2: 30 3: Rajasthan
New test
```

Note that the prototype is shared by reference rather than copied on `new`: a change in the prototype after instance creation remains visible through existing instances, unless an instance shadows the member with its own property, as `myInstance1.name = "New test"` does above. That is why `myInstance1.printName()` prints the own value instead of the updated prototype value of 100.

Static Fields and Methods - static fields are variables associated with the class itself, not with an instance. In JavaScript, static members are only accessible through the class reference; unlike Java, they are not accessible through an instance variable. There is no `static` keyword in JavaScript.

```javascript
MyClass.staticField1 = 100;
MyClass.staticField2 = "Static";
MyClass.printStaticFields = function () {
  console.log("staticField1: " + this.staticField1 + " staticField2: " + this.staticField2);
};
var myInstance = new MyClass("Kuldeep", 31);
MyClass.printStaticFields();
MyClass.printOtherFields = function () {
  console.log("1: " + this.instanceField1 + " 2: " + this.instanceField2);
};
MyClass.printOtherFields();
myInstance.printStaticFields();
```

Output:

```
staticField1: 100 staticField2: Static
1: undefined 2: undefined
Uncaught TypeError: Object #<MyClass> has no method 'printStaticFields'
```

Singleton - if you need only one instance of an object, use a singleton: either define the class with static methods only, or define the object as a literal using JSON syntax.

#### Encapsulation

In the above examples we clubbed the data and the operations together in one class structure, but the data can also be accessed directly from outside the class, which violates the encapsulation concept.
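As a quick sketch of that leak (the `Account` class here is an illustrative name, not one of the classes above): nothing prevents outside code from rewriting a field attached to `this`.

```javascript
// Illustrative class: a field attached to "this" is public by default.
function Account(balance) {
  if (!(this instanceof Account)) {
    throw new Error("Constructor can't be called as function");
  }
  this.balance = balance;
}

var acc = new Account(100);
acc.balance = -5000; // outside code can freely corrupt the state
console.log(acc.balance); // -5000
```

No validation hook runs; the object silently holds an invalid state.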
In this section we will see how to protect the data inside a class, in other words, how to define access rules for the data and the operations. JavaScript has no access modifiers such as public, private, protected, or package, but as discussed in the Scope section, function scoping can limit access to data. Going forward, let's write all the code inside IIFE scoping functions to control file-level access to the variables.

Public - variables and methods attached to `this` or to Class.prototype are public by default.

person.js

```javascript
(function () {
  /**
   * This is the constructor.
   *
   * @param {String} name
   * @param {int} age
   * @constructor
   */
  function Person(name, age) {
    if (!(this instanceof Person)) {
      throw new Error("Constructor can't be called as function");
    }
    /** @public Name */
    this.name = name;
    /** @public Age */
    this.age = age;
    /**
     * @public
     * @return String representation
     */
    this.toString = function toString() {
      return "Person [name: " + this.name + ", age: " + this.age + "]";
    };
  }

  /** @public Full name of the person. */
  Person.prototype.fullName = "Kuldeep Singh";

  var person = new Person("Kuldeep", 31);
  console.log("Person: " + person);
  console.log(person.name);
  console.log(person.fullName);
})();
```

Output:

```
Person: Person [name: Kuldeep, age: 31]
Kuldeep
Kuldeep Singh
```

Private - anything defined with `var` inside a function remains private to that function. So variables and methods defined with the `var` keyword inside a constructor are private to the constructor, and logically to the class; declaratively defined inner methods are likewise in the private scope.

```javascript
(function () {
  /**
   * This is the constructor.
   *
   * @param {String} name
   * @param {int} age
   * @constructor
   */
  function Person(name, age) {
    if (!(this instanceof Person)) {
      throw new Error("Constructor can't be called as function");
    }
    /** @private Name */
    var _name = name;
    /** @private Age */
    var _age = age;
    /**
     * @public String representation
     * @return
     */
    this.toString = function toString() {
      return "Person [name: " + _name + ", age: " + _age + "]";
    };
  }

  var person = new Person("Kuldeep", 31);
  console.log("Person: " + person);
  console.log(person._name);
})();
```

Output:

```
Person: Person [name: Kuldeep, age: 31]
undefined
```

Expose the data using getters and setters wherever required; this kind of access to private members is called privileged access. The private members are not accessible from a prototypical definition (Class.prototype) of the public methods.

```javascript
(function () {
  /**
   * This is the constructor.
   *
   * @param {String} name
   * @param {int} age
   * @constructor
   */
  function Person(name, age) {
    if (!(this instanceof Person)) {
      throw new Error("Constructor can't be called as function");
    }
    /** @private Name */
    var _name = name;
    /** @private Age */
    var _age = age;
    /** @public */
    this.getName = function getName() {
      return _name;
    };
    /** @public */
    this.getAge = function getAge() {
      return _age;
    };
    /**
     * @public String representation
     * @return
     */
    this.toString = function toString() {
      return "Person [name: " + _name + ", age: " + _age + "]";
    };
  }

  var person = new Person("Kuldeep Singh", 31);
  console.log(person.getAge());
  console.log(person.getName());
})();
```

It is good practice to define the public API on the prototype of a class and to access private variables through the privileged methods.

#### Inheritance

In JavaScript there are two ways to implement inheritance.

Prototypical Inheritance - this type of inheritance is implemented by:

- copying the prototype of the base class to the subclass, and
- changing the constructor property of that prototype.

We already created a Person class; now let's expose it on the window context.

person.js

```javascript
(function () {
  /**
   * This is the constructor.
   *
   * @param {String} name
   * @param {int} age
   * @constructor
   */
  function Person(name, age) {
    if (!(this instanceof Person)) {
      throw new Error("Constructor can't be called as function");
    }
    var _name = name;
    var _age = age;
    this.getName = function getName() {
      return _name;
    };
    this.getAge = function getAge() {
      return _age;
    };
    /**
     * @public String representation
     * @return
     */
    this.toStr = function toString() {
      return "name: " + _name + ", age: " + _age;
    };
  }
  window.Person = Person;
})();
```

Let's extend the Person class and create an Employee class.

```javascript
Employee.prototype = new Person();
Employee.prototype.constructor = Employee;
```

JavaScript has no `super` keyword, so we need to call the super constructor explicitly, using call or apply.

employee.js

```javascript
(function () {
  /**
   * This is the constructor.
   *
   * @constructor
   */
  function Employee(code, firstName, lastName, age) {
    if (!(this instanceof Employee)) {
      throw new Error("Constructor can't be called as function");
    }
    // Calling the super constructor
    Person.call(this, firstName, age);
    // Private fields
    var _code = code;
    var _firstName = firstName;
    var _lastName = lastName;
    // Public getters
    this.getCode = function getCode() {
      return _code;
    };
    this.getFirstName = function getFirstName() {
      return _firstName;
    };
    this.getLastName = function getLastName() {
      return _lastName;
    };
    this.toStr = function toString() {
      return "Code: " + _code + ", Name: " + _firstName + " " + _lastName;
    };
  }
  Employee.prototype = new Person();
  Employee.prototype.constructor = Employee;

  var emp = new Employee(425, "Kuldeep", "Singh", 31);
  console.log("Employee: " + emp.toStr());
  console.log(emp.getAge());
  console.log(emp.getName());
})();
```

Output:

```
Employee: Code: 425, Name: Kuldeep Singh
31
Kuldeep
```

All the methods of the Person class are available on the Employee class. You can also check instanceof and isPrototypeOf:

```javascript
console.log(emp instanceof Employee);
console.log(emp instanceof Person);
console.log(Employee.prototype.isPrototypeOf(emp));
console.log(Person.prototype.isPrototypeOf(emp));
```

The complete prototype chain is maintained. When a member of a child object is accessed, it is first looked up among the object's own members; if it is not found there, it is looked up in the object's prototype; if it is still not found, the search continues through the members and prototype of the parent object, and so on up the chain.

Prototypical inheritance supports only multi-level (single-parent) inheritance.
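The lookup order described above can be observed directly. This is a minimal sketch with illustrative `Base`/`Child` names, following the same prototype-copying pattern as the Person/Employee example:

```javascript
function Base() {}
Base.prototype.greet = function () { return "base"; };

function Child() {}
// Prototypical inheritance as above: copy the prototype, fix the constructor.
Child.prototype = new Base();
Child.prototype.constructor = Child;

var c = new Child();
// "greet" is not an own member; it is resolved by walking the prototype chain.
var isOwn = c.hasOwnProperty("greet"); // false
var result = c.greet();                // "base", found on Base.prototype
console.log(isOwn, result);
```

`hasOwnProperty` checks only the object's own members, so it returns false even though `c.greet()` succeeds via the chain.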
If you change the base classes or their prototypes after creating an object instance, the change takes effect according to the traversal rule explained above. In the above example, if you add a method to the Person class after creating the emp object, that method also becomes available on the emp object, even though it did not exist when emp was created.

In the code above we have also overridden the toStr method and called the super constructor from the constructor.

Improvement: we created a new Person object just to obtain a copy of its prototype, which performs the extra operation of calling the constructor with no arguments. We can avoid this extra operation by using Object.create, which also creates a cleaner prototype chain for the created object. Instead of `new Person()`, do `Object.create(Person.prototype)`.

With new:

```javascript
Employee.prototype = new Person();
Employee.prototype.constructor = Employee;
```

With Object.create():

```javascript
Employee.prototype = Object.create(Person.prototype);
Employee.prototype.constructor = Employee;
```

Mixin-Based Inheritance - when we need to inherit functionality from multiple objects, we use this approach to implement inheritance. It can be achieved with external libraries that copy all the properties of the source objects onto the destination object, such as `_.extend` or `$.extend`. Let's try it with jQuery; include "jquery-x.y.z.min.js" in the HTML page.
Change the Person class as a JSON object:\nperson.js\n(function () { Person = { constructor: function Person (name, age) { var _name = name; /** * @private Age */ var _age = age; /** * @public */ this.getName = function getName () { return _name; }; /** * @public */ this.getAge = function getAge () { return _age; }; /** * @public String representation * @return */ this.toStr = function toString () { return \u0026#34;name:\u0026#34; + _name + \u0026#34;, age:\u0026#34; + _age; }; } }; window.Person = Person; })(); Similarly, Employee class as a JSON object:\nemployee.js\n(function () { Employee = { constructor: function Employee (code, firstName, lastName, age) { // Call super Person.constructor.call (this, firstName, age); // Private fields var _code = code; var _firstName = firstName; var _lastName = lastName; // Public getters this.getCode = function getCode () { return _code; }; this.getFirstName = function getFirstName () { return _firstName; }; this.getLastName = function getLastName () { return _lastName; }; this.toStr = function toString () { return \u0026#34;Code :\u0026#34; + _code + \u0026#34;, Name :\u0026#34; + _firstName + \u0026#34; \u0026#34; + _lastName; ; }; } }; var emp = {}; $.extend (emp, Person, Employee).constructor (425, \u0026#34;Kuldeep\u0026#34;, \u0026#34;Singh\u0026#34;, 31); console.log (emp); var emp2 = {}; $.extend (emp2, Person, Employee).constructor (425, \u0026#34;Test1\u0026#34;, \u0026#34;Test1\u0026#34;, 30); console.log (\u0026#34;Employee : \u0026#34; + emp.toStr ()); console.log (emp.getAge ()); console.log (emp2.getName ()); })(); Employee {constructor: function, getName: function, getAge: function, toStr: function, getCode: function…} Employee: Code: 425, Name: Kuldeep Singh 31 Test1 With this approach you cannot check instanceof or isPrototypeOf console.log (emp instanceof Employee);\nconsole.log (emp instanceof Person); console.log (Employee.prototype.isPrototypeOf (emp)); console.log (Person.prototype.isPrototypeOf 
(emp));\nBecause all the properties are copied in the object and they remain as its own property.\nPlease be informed that if you make changes in the structure (adding/changing/deleting property) of the original base classes after creating the object then that change will not be propagated to the instance of the child object. So, the mixing-based inheritance does not keep the parent-child relationship.\n Abstraction Due to the flexibility of JavaScript, abstraction is not much needed, as we can pass/use the data without defining their types. But, yes we can implement the abstract types and interfaces to enforce some common functionality.\n Abstract Type - The abstract types are the data types (classes) which cannot be instantiated. Such data types are used to contain common functionality as well as abstraction of the functionality that a sub-class should implement. For example, the human class - human.js\n(function () { function Human () { if (! (this instanceof Human)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } throw new Error (\u0026#34;Can’t instantiate Human class\u0026#34;); }; Human.prototype.getClassName = function _getClassName () { return \u0026#34;class: Human\u0026#34;; }; /** * @abstract */ Human.prototype.getBirthDate = function _getBirthDate () { throw new Error (\u0026#34;Not implemented\u0026#34;); }; window.Human = Human; })(); var h1 = new Human (); Uncaught Error: Can’t instantiate Human class Let’s extend this class in Person. person.js\n(function () { /** * This is the constructor. * * @param {String} name * @param {int} age * @constructor */ function Person (name, age) { if (! 
(this instanceof Person)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } var _name = name; var _age = age; this.getName = function getName () { return _name; }; this.getAge = function getAge () { return _age; }; /** * @public String representation * @return */ this.toStr = function toString () { return \u0026#34;name:\u0026#34; + _name + \u0026#34;, age:\u0026#34; + _age; }; }; Person.prototype = Object.create (Human.prototype); Person.prototype.constructor = Person; window.Person = Person; var p1 = new Person (\u0026#34;Kuldeep\u0026#34;, 31); console.log (p1.getName ()); console.log (p1.getClassName ()); console.log (p1.getBirthDate ()); })(); Kuldeep class: Human Uncaught Error: Not implemented So implementation needs to be provided for the abstract methods.\n Interface - Interface is similar to an abstract class but it enforces that implementing class must implement all the methods declared in the interface. There is no direct interface or a virtual class in JavaScript, but this can be implemented by checking whether a method is implemented in the class or not. Example: human.js\n(function () { var IFactory = function (name, methods) { this.name = name; // copies array this.methods = methods.slice (0); }; /** * @static * Method to validate if methods of interface are available in \u0026#39;this_obj\u0026#39; * * @param {object} this_obj - object of implementing class. * @param {object} interface - object of interface. */ IFactory.validate = function (this_obj, interface) { for (var i = 0; i \u0026lt; interface.methods.length; i++) { var method = interface.methods[i]; if (! this_obj [method] || typeof this_obj [method]! 
== \u0026#34;function\u0026#34;) { throw new Error (\u0026#34;Class must implement Method: \u0026#34; + method); } } }; window.IFactory = IFactory; /** * @interface */ var Human = new IFactory(\u0026#34;Human\u0026#34;, [\u0026#34;getClassName\u0026#34;, \u0026#34;getBirthDate\u0026#34;]); window.Human = Human; })(); Let’s have a class which must comply with the Human interface definition. person.js\n(function () { /** * This is the constructor. * * @param {String} name * @param {int} age * @constructor */ function Person (name, age) { if (! (this instanceof Person)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } var _name = name; var _age = age; this.getName = function getName () { return _name; }; this.getAge = function getAge () { return _age; }; /** * @public String representation * @return */ this.toStr = function toString () { return \u0026#34;name:\u0026#34; + _name + \u0026#34;, age:\u0026#34; + _age; }; IFactory.validate (this, Human); }; window.Person = Person; var p1 = new Person (\u0026#34;Kuldeep\u0026#34;, 31); console.log (p1.getName ()); })(); Uncaught Error: Class must implement Method: getClassName Let’s implement the unimplemented methods: person.js\n(function () { /** * This is the constructor. * * @param {String} name * @param {int} age * @constructor */ function Person (name, age, dob) { if (! 
(this instanceof Person)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } var _name = name; var _age = age; var _dob = dob; this.getName = function _getName () { return _name; }; this.getAge = function _getAge () { return _age; }; this.getBirthDate = function _getDob () { return _dob; }; /** * @public String representation * @return */ this.toStr = function toString () { return \u0026#34;name:\u0026#34; + _name + \u0026#34;, age:\u0026#34; + _age; }; IFactory.validate (this, Human); }; Person.prototype.getClassName = function _getClassName () { return \u0026#34;class: Person\u0026#34;; }; window.Person = Person; var p1 = new Person (\u0026#34;Kuldeep\u0026#34;, 31, \u0026#34;05-Aug-1982\u0026#34;); console.log (p1.getName ()); console.log (p1.getBirthDate ()); console.log (p1.getClassName ()); })(); Kuldeep 05-Aug-1982 class: Person Polymorphism Polymorphism is referred to as multiple forms associated with one name. In JavaScript polymorphism is supported by overloading and overriding the data and the functionality.\n Overriding - Overriding is supported in JavaScript, if you define the function/variable with same name again in child class it will override the definition which was inherited from the parent class.\n Function overriding - The example of overriding can be found in Person-Employee example of Inheritance Variable overriding - You can override the variables in the same way as you can override the functions. Calling super.function() - Now let’s see how we can inherit the functionality of the base class while overriding, in other words, calling super.method(). Since there is no “super” keyword in JavaScript, we need to implement it using call/apply. Let’s consider person.js created in the last section. person.js\n(function () { /** * This is the constructor. * * @param {String} name * @param {int} age * @constructor */ function Person (name, age, dob) { if (! 
(this instanceof Person)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } var _name = name; var _age = age; var _dob = dob; this.getName = function _getName () { return _name; }; this.getAge = function _getAge () { return _age; }; this.getBirthDate = function _getDob () { return _dob; }; /** * @public String representation * @return */ this.toStr = function toString () { return \u0026#34;name:\u0026#34; + _name + \u0026#34;, age:\u0026#34; + _age; }; IFactory.validate (this, Human); }; Person.prototype.getClassName = function _getClassName () { return \u0026#34;class: Person\u0026#34;; }; window.Person = Person; })(); And have employee.js as follows: employee.js\n(function () { /** * This is the constructor. * * @param {String} type * @constructor */ function Employee (code, firstName, lastName, age) { if (! (this instanceof Employee)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } //Calling super constructor Person.call (this, firstName, age); // Private fields var _code = code; var _firstName = firstName; var _lastName = lastName; // Public getters this.getCode = function _getCode () { return _code; }; this.getFirstName = function _getFirstName () { return _firstName; }; this.getLastName = function _getLastName () { return _lastName; }; this.toStr = function _toString () { return \u0026#34;Code:\u0026#34; + _code + \u0026#34;, Name :\u0026#34; + _firstName +\u0026#34; \u0026#34;+ _lastName; }; }; Employee.prototype = Object.create (Person.prototype); Employee.prototype.constructor = Employee; Employee.prototype.getClassName = function _getClassName () { //Calling super.getClassName () var baseClassName = Person.prototype.getClassName.call (this); return \u0026#34;class: Employee extending \u0026#34; + baseClassName; }; var emp = new Employee (425, \u0026#34;Kuldeep\u0026#34;, \u0026#34;Singh\u0026#34;, 31); console.log (\u0026#34;Employee: \u0026#34; + emp.toStr ()); 
console.log (emp.getClassName ()); })(); Employee: Code: 425, Name: Kuldeep Singh class: Employee extending class: Person Privileged Function vs. Public Function - Now let’s try to call the privileged function in the child class. Change the Employee.toStr method as follows\nthis.toStr = function toString () { var baseStr = Person.prototype.toStr.call (this); return \u0026#34;Code:\u0026#34; + _code + \u0026#34;, Name:\u0026#34; + _firstName +\u0026#34; \u0026#34;+ _lastName + \u0026#34; baseStr:\u0026#34; + baseStr; }; And you will get Uncaught TypeError: Cannot call method \u0026lsquo;call\u0026rsquo; of undefined The reason is that you cannot call a privileged method in the child class, but you can override it. Please note this difference between the privileged methods (not linked with the prototype) and the public methods (linked with the prototype).\n Overloading\n Operator Overloading - JavaScript has inbuilt operator overloading for the \u0026#34;+\u0026#34; operator. For the numeric data types it acts as the addition operator, but for the String data type it acts as the concatenation operator. No custom overloading is supported for operators. Functional Overloading - JavaScript functions are flexible in terms of the number of arguments, the types of arguments, and the return types; only the method name is considered as the signature of the method. You can pass any number of arguments, of any type, to a function; it does not matter whether the function definition has included those parameters or not. Please refer to the section JavaScript Functions. Due to this flexibility, functional overloading is not supported in JavaScript. If you define another function with the same name but different parameters, it will simply overwrite the original function. Modularization Once we have many custom object types, classes, and files, we need to group them and manage their visibility, dependencies, and loading. 
As of now JavaScript does not directly support the management of the visibility, the dependencies and the loading of the objects or the classes. Generally these things are managed by a namespace/package or modules in the other language. However, future version of JavaScript (ECMAScript 6) will come up with the full-fledged support for a module and a namespace. Till then let’s see how it can be implemented in JavaScript.\n Package or Namespace - There is no inbuilt package or namespace in JavaScript, but we can implement it JSON style objects. namespaces.js\n(function () { // create namespace /** * @namespace com.tk.training.oojs */ var com = { tk: { training: { oojs: {} } } }; /** * @namespace com.tk.training.oojs.interface */ com.tk.training.oojs.interface = {}; /** * @namespace com.tk.training.oojs.manager */ com.tk.training.oojs.manager = {}; //Register namespace with global. window.com = com; })(); Let’s use these namespaces in the classes we have created so far. human.js\n(function () { ... ... /** * @interface */ var Human = new IFactory (\u0026#34;Human\u0026#34;, [\u0026#34;getClassName\u0026#34;, \u0026#34;getBirthDate\u0026#34;]); //register with namespace com.tk.training.oojs.interface.IFactory = IFactory; com.tk.training.oojs.interface.Human = Human; })(); // person.js (function () { function Person (name, age, dob) { ... ... com.tk.training.oojs.interface.IFactory.validate (this, com.tk.training.oojs.interface.Human); }; Person.prototype.getClassName = function _getClassName () { return \u0026#34;class: Person\u0026#34;; }; //Register with namespace. com.tk.training.oojs.manager.Person = Person; })(); employee.js\n(function () { function Employee (code, firstName, lastName, age) { ... //Calling super constructor com.tk.training.oojs.manager.Person.call(this, firstName, age); ... 
}; Employee.prototype = Object.create (com.tk.training.oojs.manager.Person.prototype); Employee.prototype.constructor = Employee; Employee.prototype.getClassName = function _getClassName () { //Calling super.getClassName () var baseClassName = com.tk.training.oojs.manager.Person.prototype.getClassName.call(this); return \u0026#34;class: Employee extending \u0026#34; + baseClassName; }; com.tk.training.oojs.manager.Employee = Employee; })(); var emp = new com.tk.training.oojs.manager.Employee(425, \u0026#34;Kuldeep\u0026#34;, \u0026#34;Singh\u0026#34;, 31); console.log (emp); console.log (emp.getClassName ()); This approach is quite verbose. You can shorten it by creating a local variable for the package, such as\n var manager = com.tk.training.oojs.manager; console.log (manager.Person); Modules - A module is a way of dividing JavaScript into packages; with it we define what a module exports and what its dependencies are.\nLet’s define a simple module:\n/** * @param {Array} dependencies * @param {Array} parameters */ function module (dependencies, parameters) { var localVariable1 = parameters [0]; var localVariable2 = parameters [1]; /** * @private function */ function localFunction () { return localVariable1 = localVariable1 + localVariable2; } /** * @exports */ return { getVariable1: function _getV1 () { return localVariable1; }, getVariable2: function _getV2 () { return localVariable2; }, add: localFunction, print: function _print () { console.log (\u0026#34;1=\u0026#34;, localVariable1, \u0026#34;2=\u0026#34;, localVariable2); } }; } var moduleInstance = module ([], [5, 6]); console.log (moduleInstance.getVariable1 ()); console.log (moduleInstance.getVariable2 ()); console.log (moduleInstance.add ()); moduleInstance.print (); moduleInstance = module ([], [11, 12]); console.log (moduleInstance.getVariable1 ()); console.log (moduleInstance.getVariable2 ()); console.log (moduleInstance.add ()); moduleInstance.print (); 5 6 11 1= 11, 2= 6 11 12 23 1= 23, 2= 12 A module 
encapsulates the internal data and exports the functionality, so we can create a module for creating the objects of the classes. Every time a module is called, it returns a new instance of the returned JSON object. All the exported properties and APIs are accessed from the returned object. Let’s change the classes created so far to follow the module pattern.\nnamespaces.js\nvar com = (function () { // create namespace /** * @namespace com.tk.training.oojs */ var com = { tk: {training: {oojs: {}}}}; /** * @namespace com.tk.training.oojs.interface */ com.tk.training.oojs.interface = {}; /** * @namespace com.tk.training.oojs.manager */ com.tk.training.oojs.manager = {}; return com; })(); ifactory.js\ncom.tk.training.oojs.interface.IFactory = (function () { /** * @static * Method to validate if methods of interface are available in \u0026#39;this_obj\u0026#39; * * @param {object} this_obj - object of implementing class. * @param {object} interface - object of interface. */ function validate (this_obj, interface) { for (var i = 0; i \u0026lt; interface.methods.length; i++) { var method = interface.methods[i]; if (! this_obj [method] || typeof this_obj [method] !== \u0026#34;function\u0026#34;) { throw new Error (\u0026#34;Class must implement Method: \u0026#34; + method); } } }; return {validate: validate}; })(); human.js\ncom.tk.training.oojs.interface.Human = (function() { return {name: \u0026#34;Human\u0026#34;, methods: [\u0026#34;getClassName\u0026#34;, \u0026#34;getBirthDate\u0026#34;]}; })(); person.js\ncom.tk.training.oojs.manager.Person = (function (dependencies) { //dependencies var IFactory = dependencies [0]; var Human = dependencies [1]; function Person (name, age, dob) { if (! 
(this instanceof Person)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } var _name = name; var _age = age; var _dob = dob; this.getName = function _getName () { return _name; }; this.getAge = function _getAge () { return _age; }; this.getBirthDate = function _getDob () { return _dob; }; /** * @public String representation * @return */ this.toStr = function toString () { return \u0026#34;name:\u0026#34; + _name + \u0026#34;, age:\u0026#34; + _age; }; IFactory.validate (this, Human); }; Person.prototype.getClassName = function _getClassName () { return \u0026#34;class: Person\u0026#34;; }; //export the person class; return Person; }) ([com.tk.training.oojs.interface.IFactory, com.tk.training.oojs.interface.Human]); employee.js\ncom.tk.training.oojs.manager.Employee = (function (dependencies) { //dependencies var Person = dependencies [0]; /** * This is the constructor. * * @param {String} type * @constructor */ function Employee (code, firstName, lastName, age) { if (! 
(this instanceof Employee)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } //Calling super constructor Person.call (this, firstName, age); // Private fields var _code = code; var _firstName = firstName; var _lastName = lastName; // Public getters this.getCode = function _getCode () { return _code; }; this.getFirstName = function _getFirstName () { return _firstName; }; this.getLastName = function _getLastName () { return _lastName; }; this.toStr = function toString () { return \u0026#34;Code:\u0026#34; + _code + \u0026#34;, Name:\u0026#34; + _firstName +\u0026#34; \u0026#34;+ _lastName; }; }; Employee.prototype = Object.create (Person.prototype); Employee.prototype.constructor = Employee; Employee.prototype.getClassName = function _getClassName () { //Calling super.getClassName () var baseClassName = Person.prototype.getClassName.call (this); return \u0026#34;class: Employee extending \u0026#34; + baseClassName; }; //export the employee class; return Employee; }) ([com.tk.training.oojs.manager.Person]); var emp = new com.tk.training.oojs.manager.Employee (425, \u0026#34;Kuldeep\u0026#34;, \u0026#34;Singh\u0026#34;, 31); console.log (emp); console.log (emp.getClassName ()); You can also encapsulate the data in a module, so instead of having a lot of classes we may have modules returning the exposed APIs.\nBy the above approach we can clearly state the dependencies in each module. But we still need to define a global variable to hold the dependencies. 
Also, in an HTML page we need to include the JavaScript files in a specific order such that all the dependencies get resolved correctly.\n\u0026lt;script src=\u0026#34;resources/jquery-1.9.0.min.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script src=\u0026#34;resources/namespaces.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script src=\u0026#34;resources/ifactory.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script src=\u0026#34;resources/human.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script src=\u0026#34;resources/person.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script src=\u0026#34;resources/employee.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; As the number of classes, modules, and files increases, it becomes difficult to remember the order of dependencies, so maintaining such code is still hard. In the next section we will discuss how to resolve this issue.\n Dependency Management - Dependency management in JavaScript can be done using the AMD (Asynchronous Module Definition) approach, which is based upon the module pattern discussed in the last section. The modules are loaded as and when required, and only once. In general, JavaScript files are loaded in the sequence in which they are included in the HTML page, but with AMD they are loaded asynchronously and in the order they are used.\nThere are JavaScript libraries which support the AMD approach, one of which is require.js. We can also group modules into multiple folders, which eliminates the need for namespaces. See the following updated code using the AMD approach. 
Download require.js from here, include it in the HTML, and remove the other script tags.\n\u0026lt;script data-main=\u0026#34;resources/main.js\u0026#34; src=\u0026#34;resources/require.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; We have refactored the JavaScript code into the following structure. The syntax for defining a module is\n define (\u0026lt;array of dependencies\u0026gt;, function (Dependency1, Dependency2 …) { }); A similar syntax is used for require. Let’s define the modules created so far in AMD syntax.\nhuman.js\ndefine (function () { console.log (\u0026#34;Loaded Human\u0026#34;); return {name: \u0026#34;Human\u0026#34;, methods: [\u0026#34;getClassName\u0026#34;, \u0026#34;getBirthDate\u0026#34;]}; }); ifactory.js\ndefine (function () { console.log (\u0026#34;Loaded IFactory\u0026#34;); /** * @static * Method to validate if methods of interface are available in \u0026#39;this_obj\u0026#39; * * @param {object} this_obj - object of implementing class. * @param {object} interface - object of interface. */ function validate (this_obj, interface) { for (var i = 0; i \u0026lt; interface.methods.length; i++) { var method = interface.methods[i]; if (! this_obj [method] || typeof this_obj [method] !== \u0026#34;function\u0026#34;) { throw new Error (\u0026#34;Class must implement Method: \u0026#34; + method); } } }; return {validate: validate}; }); person.js\ndefine ([\u0026#34;interface/human\u0026#34;, \u0026#34;interface/ifactory\u0026#34;], function (Human, IFactory) { console.log (\u0026#34;Loaded Person with dependencies: \u0026#34;, Human, IFactory); function Person (name, age, dob) { if (! 
(this instanceof Person)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } var _name = name; var _age = age; var _dob = dob; this.getName = function _getName () { return _name; }; this.getAge = function _getAge () { return _age; }; this.getBirthDate = function _getDob () { return _dob; }; /** * @public String representation * @return */ this.toStr = function toString () { return \u0026#34;name:\u0026#34; + _name + \u0026#34;, age:\u0026#34; + _age; }; IFactory.validate (this, Human); }; Person.prototype.getClassName = function _getClassName () { return \u0026#34;class: Person\u0026#34;; }; //export the person class; return Person; }); employee.js\ndefine ([\u0026#34;impl/person\u0026#34;], function (Person) { console.log (\u0026#34;Loaded Employee\u0026#34;); function Employee (code, firstName, lastName, age) { if (! (this instanceof Employee)) { throw new Error (\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;); } //Calling super constructor Person.call (this, firstName, age); // Private fields var _code = code; var _firstName = firstName; var _lastName = lastName; // Public getters this.getCode = function _getCode () { return _code; }; this.getFirstName = function _getFirstName () { return _firstName; }; this.getLastName = function _getLastName () { return _lastName; }; this.toStr = function toString () { return \u0026#34;Code:\u0026#34; + _code + \u0026#34;, Name:\u0026#34; + _firstName +\u0026#34; \u0026#34;+ _lastName; }; }; Employee.prototype = Object.create (Person.prototype); Employee.prototype.constructor = Employee; Employee.prototype.getClassName = function _getClassName () { //Calling super.getClassName () var baseClassName = Person.prototype.getClassName.call (this); return \u0026#34;class: Employee extending \u0026#34; + baseClassName; }; //export the employee class; return Employee; }); main.js\nrequire ([\u0026#34;impl/employee\u0026#34;], function (Employee) { console.log 
(\u0026#34;Loaded Main\u0026#34;); var emp = new Employee (425, \u0026#34;Kuldeep\u0026#34;, \u0026#34;Singh\u0026#34;, 31); console.log (emp); console.log (emp.getClassName ()); }); Loaded Human Loaded IFactory Loaded Person with dependencies: Object {name: \u0026quot;Human\u0026quot;, methods: Array [2]} Object {validate: function} Loaded Employee Loaded Main Employee {getName: function, getAge: function, getBirthDate: function, toStr: function, getCode: function…} class: Employee extending class: Person In the script-loading timeline, human.js and ifactory.js are loaded almost in parallel. Documentation This section describes JavaScript documentation using JSDoc, which is a markup language used to annotate JavaScript source code files. The syntax and the semantics of JSDoc are similar to those of the Javadoc scheme, which is used for documenting code written in Java. JSDoc differs from Javadoc in that it is specialized to handle the dynamic behavior of JavaScript. Jsdoc_toolkit-2.4.0\nGenerate Documentation Download the JSDoc toolkit and unzip the archive.\n Running the JsDoc Toolkit requires you to have Java installed on your computer. For more information see http://www.java.com/getjava/.\n Before running the JsDoc Toolkit app you should change your current working directory to the jsdoc-toolkit folder. Then follow the examples given below or as shown on the project wiki.\n Command Line - On a computer running Windows, a valid command line to run the JsDoc Toolkit might look like this:\njava -jar jsrun.jar app\\run.js -a -t=templates\\jsdoc mycode.js You can use the -d switch to specify an output directory.\n Reports - It generates reports similar to those of Javadoc.\n Deployment We have discussed modularization and documentation, which result in a lot of files and a lot of lines for documentation. There will be a lot of round trips to the server for loading the JavaScript, which is not recommended in production. 
We should minify, obfuscate and concatenate the JavaScript files to reduce the round trips and the size of the JavaScript code to be executed in browsers.\nDuring development we should use the AMD approach, and for production we can optimize the scripts with the following steps.\nInstall r.js Optimizer Download node.js and install it. Then install the optimizer from the command prompt: C:\\\u0026gt;npm install -g requirejs Generate build profile resources/build-profile.js\n({ baseUrl: \u0026#34;.\u0026#34;, name: \u0026#34;main\u0026#34;, out: \u0026#34;main-build.js\u0026#34; }) Build, Package and Optimize Run the following command in the command prompt\nr.js.cmd -o build-profile.js It will generate a file main-build.js as follows, which is a minified, obfuscated and concatenated file.\ndefine(\u0026#34;interface/human\u0026#34;,[],function(){return console.log(\u0026#34;Loaded Human\u0026#34;),{name:\u0026#34;Human\u0026#34;,methods:[\u0026#34;getClassName\u0026#34;,\u0026#34;getBirthDate\u0026#34;]}}),define(\u0026#34;interface/ifactory\u0026#34;,[],function(){function e(e,t){for(var n=0;n\u0026lt;t.methods.length;n++){var r=t.methods[n];if(!e[r]||typeof e[r]!=\u0026#34;function\u0026#34;)throw new Error(\u0026#34;Class must implement Method: \u0026#34;+r)}}return console.log(\u0026#34;Loaded IFactory\u0026#34;),{validate:e}}),define(\u0026#34;impl/person\u0026#34;,[\u0026#34;interface/human\u0026#34;,\u0026#34;interface/ifactory\u0026#34;],function(e,t){function n(r,i,s){if(!(this instanceof n))throw new Error(\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;);var o=r,u=i,a=s;this.getName=function(){return o},this.getAge=function(){return u},this.getBirthDate=function(){return a},this.toStr=function f(){return \u0026#34;name:\u0026#34;+o+\u0026#34;, age:\u0026#34;+u},t.validate(this,e)}return console.log(\u0026#34;Loaded Person with dependencies: \u0026#34;,e,t),n.prototype.getClassName=function(){return \u0026#34;class: 
Person\u0026#34;},n}),define(\u0026#34;impl/employee\u0026#34;,[\u0026#34;impl/person\u0026#34;],function(e){function t(n,r,i,s){if(!(this instanceof t))throw new Error(\u0026#34;Constructor can\u0026#39;t be called as function\u0026#34;);e.call(this,r,s);var o=n,u=r,a=i;this.getCode=function(){return o},this.getFirstName=function(){return u},this.getLastName=function(){return a},this.toStr=function f(){return \u0026#34;Code:\u0026#34;+o+\u0026#34;, Name:\u0026#34;+u+\u0026#34; \u0026#34;+a}}return console.log(\u0026#34;Loaded Employee\u0026#34;),t.prototype=Object.create(e.prototype),t.prototype.constructor=t,t.prototype.getClassName=function(){var n=e.prototype.getClassName.call(this);return \u0026#34;class: Employee extending \u0026#34;+n},t}),require([\u0026#34;impl/employee\u0026#34;],function(e){console.log(\u0026#34;Loaded Main\u0026#34;);var t=new e(425,\u0026#34;Kuldeep\u0026#34;,\u0026#34;Singh\u0026#34;,31);console.log(t),console.log(t.getClassName())}),define(\u0026#34;main\u0026#34;,function(){}); Include Include the optimized script for the production release.\n\u0026lt;script data-main=\u0026#34;resources/main-build.js\u0026#34; src=\u0026#34;resources/require.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; This completes the course. Happy reading!\n","id":58,"tag":"oops ; object oriented ; javascript ; js ; course ; practice ; language ; technology","title":"Object Oriented JavaScript","type":"post","url":"https://thinkuldeep.com/post/oojs/"},{"content":"I define software effectiveness as doing the objective effectively, that is, correctly. 
Efficiency can be defined as using the resources optimally, where resources could be memory, CPU, time, files, connections, databases, etc.\nFrom my experience, in most (should I say many) software projects, efficiency/performance is not emphasized much during system design and the earlier phases (requirements and estimation) compared to the emphasis given late in the game: coding, testing, and mostly maintenance.\nStressing efficiency during the early SDLC phases can eliminate a lot of problems. If we consider efficiency in, say, the coding phase, we can probably develop an optimal system in which 90% of the code uses just 1% of the CPU at peak load, while the remaining 10% uses the other 99%. If we worry about performance only after the system is built, we are in the worst situation; we probably cannot reach that 90% optimal-code level.\nComing back to effectiveness, this is usually the most emphasized topic in all the SDLC phases; we always try to make the system understandable, testable, and maintainable.\nEfficiency generally works against the code-quality measures meant to improve effectiveness: more efficient code is usually more difficult to understand, harder to maintain, and sometimes very hard to test.\nSo, based on the project, we should benchmark and strengthen the SDLC process to balance out efficiency and effectiveness in each phase. Keep in mind that there are always some modules where efficiency matters more than understandability and maintainability. We should set our expectations accordingly: a highly efficient module will require more effort to understand, maintain, and test.\nThis post is not meant to allow you to write less effective code on the excuse that efficient code will be less understandable, testable, or maintainable. 
Use better design patterns, prepare the approach and discuss it, and design the efficient module very carefully.\nThis article was originally published on My LinkedIn Profile\n","id":59,"tag":"effectiveness ; software ; efficiency ; general ; technology","title":"Software Effectiveness vs Software Efficiency","type":"post","url":"https://thinkuldeep.com/post/software-effectiveness-vs-efficiency/"},{"content":"Hash collisions degrade the performance of HashMap significantly. Java 8 has introduced a new strategy to deal with hash collisions, thus improving the performance of HashMap. Considering this improvement, existing applications that use HashMaps with a large number of elements can expect performance improvements simply by upgrading to Java 8.\nEarlier, when multiple keys ended up in the same bucket, the values along with their keys were placed in a linked list, and on retrieval the linked list had to be traversed to find the entry. In the worst-case scenario, when all keys are mapped to the same bucket, the lookup time of HashMap increases from O(1) to O(n).\nImprovement/Changes in Java 8 Java 8 has come with the following improvements for HashMap in case of high collisions.\n The alternative String hash function added in Java 7 has been removed. Buckets containing a large number of colliding keys store their entries in a balanced tree instead of a LinkedList after a certain threshold is reached. The above changes ensure performance of O(log(n)) in worst-case scenarios (when the hash function does not distribute keys properly) and O(1) with a proper hashCode().\nHow is the linked list replaced with a binary tree? In Java 8, HashMap replaces the linked list with a binary tree when the number of elements in a bucket reaches a certain threshold (8 by default). While converting the list to a binary tree, the hash code is used as a branching variable. 
If there are two different hash codes in the same bucket, one is considered bigger and goes to the right of the tree, and the other to the left. When both hash codes are equal, HashMap assumes that the keys are Comparable and compares the keys to determine the direction, so that some order can be maintained. It is a good practice to make the keys of a HashMap comparable.\nThis JDK 8 change applies only to HashMap, LinkedHashMap and ConcurrentHashMap.\nPerformance Benchmarking Based on a simple experiment of creating HashMaps of different sizes and performing put and get operations by key, the following results have been recorded.\n HashMap.get() operation with proper hashCode() logic HashMap.get() operation with broken (hashCode is the same for all keys) hashCode() logic HashMap.put() operation with proper hashCode() logic HashMap.put() operation with broken (hashCode is the same for all keys) hashCode() logic There is a great improvement for maps with a high number of elements: even when there are collisions, the HashMap of Java 8 works much faster than the older versions.\n","id":60,"tag":"performance ; java 8 ; java ; whatsnew ; hashmap ; benchmarking ; technology","title":"HashMap Performance Improvement in Java 8","type":"post","url":"https://thinkuldeep.com/post/java8-hashmap-performance-improvement/"},{"content":"With Java 8, developers can apply functional programming constructs in a pure object-oriented programming language through lambda expressions. Using lambda expressions, sequential and parallel execution can be achieved by passing behavior into methods. In the Java world, lambdas can be thought of as anonymous methods with a more compact syntax; compact here means it is not mandatory to specify access modifiers, the return type, and parameter types while defining the expression.\nThere are various reasons for the addition of lambda expressions to the Java platform, but the most beneficial is that we can easily distribute the processing of a collection over multiple threads. 
Prior to Java 8, if the processing of elements in a collection had to be done in parallel, the client code was supposed to perform the necessary steps, not the collection. In Java 8, using lambda expressions and the Stream API, we can pass the processing logic for elements into methods provided by collections, and now the collection is responsible for parallel processing of the elements, not the client. Parallel processing also effectively utilizes the multicore CPUs common nowadays.\nLambda Expression Syntax: (parameters) -\u0026gt; expression or (parameters) -\u0026gt; { statements; } Java 8 provides support for lambda expressions only with functional interfaces. FunctionalInterface is also a new feature introduced in Java 8: any interface with a single abstract method is called a functional interface. Since there is only one abstract method, there is no confusion in applying the lambda expression to that method.\nBenefits of Lambda Expression Fewer Lines of Code - One of the benefits of using lambda expressions is a reduced amount of code; see the example below. // Using Anonymous class Runnable r1 = new Runnable() { @Override public void run() { System.out.println(\u0026#34;Prior to Java 8\u0026#34;); } }; // Runnable using lambda expression Runnable r2 = () -\u0026gt; { System.out.println(\u0026#34;From Java 8\u0026#34;); }; As we know, in Java a lambda can be used only with functional interfaces. In the above example Runnable is a functional interface, so we can easily apply a lambda expression here. In this case we are not passing any parameter in the lambda expression because the run() method of the functional interface (Runnable) takes no argument. The syntax of lambda expressions also lets us omit the curly braces ({}) in the case of a single statement in the method body. In case of multiple statements, we should use curly braces as done in the above example. 
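The same reduction applies to any functional interface, not just Runnable. As a minimal standalone sketch (the class name and list contents here are illustrative, not from the original article), the before/after contrast with java.util.Comparator looks like this:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class ComparatorLambdaDemo {
    public static void main(String[] args) {
        // Illustrative data; wrapped in ArrayList so the list is sortable
        List<String> names = new ArrayList<>(Arrays.asList("Singh", "Kuldeep", "Java"));

        // Prior to Java 8: an anonymous class implementing Comparator
        Collections.sort(names, new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return a.compareTo(b);
            }
        });

        // Java 8: the same behavior as a lambda expression; the parameter
        // types are inferred from the target type Comparator<String>
        names.sort((a, b) -> a.compareTo(b));

        System.out.println(names); // prints [Java, Kuldeep, Singh]
    }
}
```

Because Comparator has a single abstract method (compare), the compiler can map the lambda onto it directly; methods matching those of Object, such as equals, do not count toward the single-abstract-method rule.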
Sequential and Parallel Execution Support by Passing Behavior into Methods - Prior to Java 8, processing the elements of any collection was done by obtaining an iterator from the collection and then iterating over the elements, processing each one. If the requirement was to process the elements in parallel, it had to be done by the client code. With the introduction of the Stream API in Java 8, functions can be passed to collection methods, and it is now the collection’s responsibility to process the elements in either a sequential or a parallel manner.\n// Using for loop for iterating a list public static void printUsingForLoop() { List\u0026lt;String\u0026gt; stringList = new ArrayList\u0026lt;String\u0026gt;(); stringList.add(\u0026#34;Hello\u0026#34;); for (String string : stringList) { System.out.println(\u0026#34;Content of List is: \u0026#34; + string); } } // Using forEach and passing lambda expression for iteration public static void printUsingForEach() { List\u0026lt;String\u0026gt; stringList = new ArrayList\u0026lt;String\u0026gt;(); stringList.add(\u0026#34;Hello\u0026#34;); stringList.stream().forEach((string) -\u0026gt; { System.out.println(\u0026#34;Content of List is: \u0026#34; + string); }); } Higher Efficiency (Utilizing Multicore CPUs) - Using the Stream API and lambda expressions we can achieve higher efficiency (parallel execution) in case of bulk operations on collections. Lambda expressions also help in achieving internal iteration of collections rather than external iteration, as shown in the above example. Nowadays we have multicore CPUs, and we can take advantage of them through parallel processing of collections using lambdas.\nLambda Expression and Objects - In Java, any lambda expression is an object, as it is an instance of a functional interface. We can assign a lambda expression to any variable and pass it like any other object. 
See the example below for how a lambda expression is assigned to a variable and how it is invoked.\n// Functional Interface @FunctionalInterface public interface TaskComparator { public boolean compareTasks(int a1, int a2); } public class TaskComparatorImpl { public static void main(String[] args) { TaskComparator myTaskComparator = (int a1, int a2) -\u0026gt; {return a1 \u0026gt; a2;}; boolean result = myTaskComparator.compareTasks(5, 2); System.out.println(result); } } Where can lambdas be applied? Lambda expressions can be used anywhere in Java that provides a target type. Java provides a target type in the following contexts:\n Variable declarations and assignments Return statements Method or constructor arguments Ref : https://www.javaworld.com/article/3452018/get-started-with-lambda-expressions.html\n","id":61,"tag":"lambda-expression ; java 8 ; java ; whatsnew ; introduction ; tutorial ; language ; technology","title":"Introduction to Java Lambda Expression","type":"post","url":"https://thinkuldeep.com/post/java8-lambda-expression/"},{"content":"Prior to JDK 8, collections could only be traversed through iterators, with for, foreach or while loops; that is, we instructed the computer to execute the algorithm steps.\nint sum(List\u0026lt;Integer\u0026gt; list) { Iterator\u0026lt;Integer\u0026gt; intIterator = list.iterator(); int sum = 0; while (intIterator.hasNext()) { int number = intIterator.next(); if (number \u0026gt; 5) { sum += number; } } return sum; } The above approach has the following drawbacks:\n We have to express how the iteration takes place; this external iteration leaves the client program to handle the traversal. The program is sequential; there is no way to run it in parallel. The code is verbose. Java 8 - Stream APIs Java 8 introduces the Stream API to address the aforementioned shortcomings. 
It allows us to leverage other changes, e.g., lambda expression, method reference, functional interface and internal iteration introduced via forEach() method.\nThe above logic of the program can be rewritten in a single line:\nint sum(List\u0026lt;Integer\u0026gt; list) { return list.stream().filter(i -\u0026gt; i \u0026gt; 5).mapToInt(i -\u0026gt; i).sum(); } The streamed content is filtered with the filter method, which takes a predicate as its argument. The filter method returns a new instance of Stream\u0026lt;T\u0026gt;. The map function is used to transform each element of the stream into a new element. The map returns a Stream\u0026lt;R\u0026gt;, in the above example, we used mapToInt to get the IntStream. Finally, IntStream has a sum operator, which calculates the sum of all elements in the stream, and we got the expected result so cleanly. Sample Examples: public class Java8Streams{ public static void main(String args[]) { List\u0026lt;String\u0026gt; strList = Arrays.asList(\u0026#34;java\u0026#34;, \u0026#34;\u0026#34;, \u0026#34;j2ee\u0026#34;, \u0026#34;\u0026#34;, \u0026#34;spring\u0026#34;, \u0026#34;hibernate\u0026#34;, \u0026#34;webservices\u0026#34;); // Count number of String which starts with \u0026#34;j\u0026#34; long count = strList.stream().filter(x -\u0026gt; x.startsWith(\u0026#34;j\u0026#34;)).count(); System.out.printf(\u0026#34;List %s has %d strings which startsWith \u0026#39;j\u0026#39; %n\u0026#34;, strList, count); // Count String with length more than 4 count = strList.stream().filter(x -\u0026gt; x.length()\u0026gt; 4).count(); System.out.printf(\u0026#34;List %s has %d strings of length more than 4 %n\u0026#34;, strList, count); // Count the empty strings count = strList.stream().filter(x -\u0026gt; x.isEmpty()).count(); System.out.printf(\u0026#34;List %s has %d empty strings %n\u0026#34;, strList, count); // Remove all empty Strings from List List\u0026lt;String\u0026gt; emptyStrings = strList.stream().filter(x -\u0026gt; 
!x.isEmpty()).collect(Collectors.toList()); System.out.printf(\u0026#34;Original List : %s, List without Empty Strings : %s %n\u0026#34;, strList, emptyStrings); // Create a List with String more than 4 characters List\u0026lt;String\u0026gt; lstFilteredString = strList.stream().filter(x -\u0026gt; x.length()\u0026gt; 4).collect(Collectors.toList()); System.out.printf(\u0026#34;Original List : %s, filtered list with more than 4 chars: %s %n\u0026#34;, strList, lstFilteredString); // Convert String to Uppercase and join them using comma List\u0026lt;String\u0026gt; SAARC = Arrays.asList(\u0026#34;India\u0026#34;, \u0026#34;Japan\u0026#34;, \u0026#34;China\u0026#34;, \u0026#34;Bangladesh\u0026#34;, \u0026#34;Nepal\u0026#34;, \u0026#34;Bhutan\u0026#34;); String SAARCCountries = SAARC.stream().map(x -\u0026gt; x.toUpperCase()).collect(Collectors.joining(\u0026#34;, \u0026#34;)); System.out.println(\u0026#34;SAARC Countries: \u0026#34;+SAARCCountries); // Create List of square of all distinct numbers List\u0026lt;Integer\u0026gt; numbers = Arrays.asList(1, 3, 5, 7, 3, 6); List\u0026lt;Integer\u0026gt; distinct = numbers.stream().map( i -\u0026gt; i*i).distinct().collect(Collectors.toList()); System.out.printf(\u0026#34;Original List : %s, Square Without duplicates : %s %n\u0026#34;, numbers, distinct); //Get count, min, max, sum, and average for numbers List\u0026lt;Integer\u0026gt; lstPrimes = Arrays.asList(3, 5, 7, 11, 13, 17, 19, 21, 23); IntSummaryStatistics stats = lstPrimes.stream().mapToInt((x) -\u0026gt; x).summaryStatistics(); System.out.println(\u0026#34;Highest prime number in List : \u0026#34; + stats.getMax()); System.out.println(\u0026#34;Lowest prime number in List : \u0026#34; + stats.getMin()); System.out.println(\u0026#34;Sum of all prime numbers : \u0026#34; + stats.getSum()); System.out.println(\u0026#34;Average of all prime numbers : \u0026#34; + stats.getAverage()); } } List [java, , j2ee, , spring, hibernate, webservices] has 2 strings which 
startsWith 'j' List [java, , j2ee, , spring, hibernate, webservices] has 3 strings of length more than 4 List [java, , j2ee, , spring, hibernate, webservices] has 2 empty strings Original List : [java, , j2ee, , spring, hibernate, webservices], List without Empty Strings : [java, j2ee, spring, hibernate, webservices] Original List : [java, , j2ee, , spring, hibernate, webservices], filtered list with more than 4 chars: [spring, hibernate, webservices] SAARC Countries: INDIA, JAPAN, CHINA, BANGLADESH, NEPAL, BHUTAN Original List : [1, 3, 5, 7, 3, 6], Square Without duplicates : [1, 9, 25, 49, 36] Highest prime number in List: 23 Lowest prime number in List: 3 Sum of all prime numbers: 119 Average of all prime numbers: 13.222222222222221 API overview The main type in the JDK 8 Stream API package java.util.stream is the Stream interface, a stream of object references; the types IntStream, LongStream, and DoubleStream are streams over the primitive int, long and double types.\nDifferences between Stream and Collection:\n A stream does not store data. An operation on a stream does not modify its source, but simply produces a result. Collections have a finite size, but streams need not. Like an Iterator, a new stream must be generated to revisit the same elements of the source. We can obtain Streams in the following ways:\n From a Collection via the stream() and parallelStream() methods; From an array via Arrays.stream(Object[]); From static factory methods on the stream classes, such as Stream.of(Object[]), IntStream.range(int, int) or Stream.iterate(Object, UnaryOperator); The lines of a file from BufferedReader.lines(); Streams of file paths from methods in Files; Streams of random numbers from Random.ints(); Other stream-bearing methods: BitSet.stream(), Pattern.splitAsStream(java.lang.CharSequence), and JarFile.stream(). 
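A few of the stream sources listed above can be exercised together in one small sketch (the class and method names are mine, for illustration):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import java.util.stream.Stream;

public class StreamSources {

    // From a Collection via stream()
    static long countNonEmpty(List<String> values) {
        return values.stream().filter(s -> !s.isEmpty()).count();
    }

    // From an array via Arrays.stream(int[])
    static int sumArray(int[] values) {
        return Arrays.stream(values).sum();
    }

    // From a static factory: IntStream.range(int, int)
    static List<Integer> rangeList(int from, int to) {
        return IntStream.range(from, to).boxed().collect(Collectors.toList());
    }

    // From a static factory: Stream.iterate(Object, UnaryOperator)
    static List<Integer> powersOfTwo(int count) {
        return Stream.iterate(1, n -> n * 2).limit(count).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(countNonEmpty(Arrays.asList("a", "", "b"))); // 2
        System.out.println(sumArray(new int[] {1, 2, 3}));              // 6
        System.out.println(rangeList(0, 4));                            // [0, 1, 2, 3]
        System.out.println(powersOfTwo(4));                             // [1, 2, 4, 8]
    }
}
```

Note that Stream.iterate produces an infinite stream, so it must be bounded with limit() before a terminal operation is applied.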
Stream operations Intermediate operations Stateless operations, such as filter and map, retain no state from previously seen elements.\nStateful operations, such as distinct and sorted, may incorporate state from previously seen elements.\nTerminal operations Operations such as Stream.forEach or IntStream.sum traverse the stream to produce a result or a side-effect. After a terminal operation is performed, the stream pipeline is considered consumed and can no longer be used.\nReduction Operations The general reduction operations are reduce() and collect(), while sum(), max(), and count() are specialized reduction operations.\nParallelism All stream operations can execute either serially or in parallel. For example, Collection has the methods Collection.stream() and Collection.parallelStream(), which produce sequential and parallel streams respectively.\nReferences •\thttps://docs.oracle.com/javase/8/docs/api/java/util/stream/package-summary.html\n","id":62,"tag":"stream-api ; java 8 ; java ; whatsnew ; introduction ; tutorial ; language ; technology","title":"Introduction to Java Stream API","type":"post","url":"https://thinkuldeep.com/post/java8-stream-api/"},{"content":"The vision of IoT can be seen from two perspectives: ‘Internet’ centric and ‘Thing’ centric. In the Internet centric architecture, internet services are the main focus while data is contributed by the objects; in the object centric architecture, the smart objects take center stage.\nIn order to realize the full potential of cloud computing as well as ubiquitous sensing, a combined framework with the cloud at the center seems most viable. This not only gives the flexibility of dividing the associated costs in the most logical manner but is also highly scalable. 
Sensing service providers can join the network and offer their data using a storage cloud; analytic tool developers can provide their software tools; artificial intelligence experts can provide their data mining and machine learning tools, useful in converting information to knowledge; and finally, computer graphics designers can offer a variety of visualization tools.\nCloud computing can offer these capabilities as infrastructure, platforms or software, tapping the full potential of human creativity by consuming them as services. The data generated, the tools used and the visualizations created disappear into the background, unlocking the full potential of the Internet of Things in various application domains. The cloud integrates all ends by providing scalable storage, computation time and other tools on which new businesses can be built.\nMajor Platforms – A Quick Snapshot Below is a quick snapshot of the major IoT cloud platforms and their key features:\nAWS IoT Supports HTTP, WebSockets, and MQTT Rules Engine: A rule can apply to data from one or many devices, and it can take one or many actions in parallel. It can route messages to AWS endpoints including AWS Lambda, Amazon Kinesis, Amazon S3, Amazon Machine Learning, Amazon DynamoDB, Amazon CloudWatch, and Amazon Elasticsearch Service with built-in Kibana integration (Kibana is an open source data visualization plugin for Elasticsearch.) Device Shadows: Create a persistent, virtual version, or “shadow,” of each device that includes the device’s latest state. https://developer.amazon.com/blogs/post/Tx3828JHC7O9GZ9/Using-Alexa-Skills-Kit-and-AWS-IoT-to-Voice-Control-Connected-Devices\nAzure IoT Suite Easily integrate Azure IoT Suite with your systems and applications, including Salesforce, SAP, Oracle Database, and Microsoft Dynamics Azure IoT Suite packages together Azure IoT services with preconfigured solutions. Supports HTTP, Advanced Message Queuing Protocol (AMQP), and MQ Telemetry Transport (MQTT). 
Gateway SDK – a framework to create extensible gateway solutions, code which sends data from multiple devices over gateway to cloud connection https://azure.microsoft.com/en-in/blog/microsoft-azure-iot-suite-connecting-your-things-to-the-cloud/\nGoogle Cloud IoT Take advantage of Google’s heritage of web-scale processing, analytics, and machine intelligence. Utilizes Google\u0026rsquo;s global fiber network (70 points of presence across 33 countries) for ultra-low latency https://cloud.google.com/iot-core/ IBM Watson IoT Machine Learning - Automate data processing and rank data based on learned priorities. Raspberry Pi Support - Develop IoT apps that leverage Raspberry Pi, cognitive capabilities and APIs Real-Time Insights - Contextualize and analyze real-time IoT data https://cloud.ibm.com/catalog/services/internet-of-things-platform\nGE Predix Designed mainly for IIoT platforms. Supports over 60 regulatory frameworks worldwide. Based on Pivotal Cloud Foundry (Cloud Foundry is an open source cloud computing platform as a service (PaaS)) https://www.ge.com/digital/iiot-platform\nBosch IoT Suite Bosch IoT Analytics: Our analytics services make analyzing field data much simpler. The anomaly detection service helps investigate problems that occur in connected devices. The usage profiling service can determine typical usage patterns within a group of devices. Bosch IoT Hub: Messaging backbone for device related communication as attach point for various protocol connectors Bosch IoT Integrations: Integration with third-party services and systems https://www.bosch-iot-suite.com/capabilities-bosch-iot-suite/\nCisco Jasper IoT Focus on services. Control Center: Providing a vast array of flexible ways to automate your IoT services. https://www.cisco.com/c/en/us/solutions/internet-of-things/iot-control-center.html\nXively The messaging broker supports connections using native MQTT and WebSockets MQTT. 
Xively provides a C client library for use on devices Xively provides an application for integrating connected products into the Salesforce Service Cloud. https://xively.com/\nComparative Analysis We did an extensive study and trials; here are our findings so far. Below is a matrix of parameters versus the different cloud platforms.\nThis paper was also supported by Ram Chauhan\n","id":63,"tag":"framework ; IOT ; platform ; comparison ; Analysis ; Cloud ; Azure ; Google ; IBM ; GE ; Bosch ; technology","title":"IOT Cloud Platforms - A Comparative Study","type":"post","url":"https://thinkuldeep.com/post/iot_cloud_platform_comparision/"},{"content":"This article explains HTTP integration using Apache Camel: creating a configurable router to any HTTP URL with placeholders.\n Prerequisite Windows7, Eclipse Juno Java 1.7 Creating HTTP Router Please follow the steps below to build a POC for HTTP routing using Apache Camel.\nCreate Sample Camel Example Create an Eclipse project using the archetype “camel-archetype-spring”\nmvn archetype:generate -DarchetypeGroupId=com.tk.poc.camel -DarchetypeArtifactId=camel-archetype-spring -DarchetypeVersion=2.11.0 -DarchetypeRepository=https://repository.apache.org/content/groups/snapshots-group Create the following Java file\npackage com.tk.poc.camel; import org.apache.camel.Endpoint; import org.apache.camel.Exchange; import org.apache.camel.Processor; import org.apache.camel.Producer; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.spring.Main; public class MyRouteBuilder extends RouteBuilder { /** * Allow this route to be run as an application */ public static void main(String[] args) throws Exception { new Main().run(); } public void configure() { } } pom.xml – add the following dependencies\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.camel\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;camel-http\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.11.0\u0026lt;/version\u0026gt; 
\u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.camel\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;camel-stream\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.11.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; Clean up src/main/resources/META-INF/spring/camel-context.xml\n\u0026lt;camel:camelContext xmlns=\u0026#34;http://camel.apache.org/schema/spring\u0026#34;\u0026gt; \u0026lt;camel:package\u0026gt;com.tk.poc.camel\u0026lt;/camel:package\u0026gt; \u0026lt;/camel:camelContext\u0026gt; Run mvn eclipse:eclipse to include the dependencies in the Eclipse classpath.\nmvn eclipse:eclipse Create Command Configurations Commands are defined in a text file, where every line of the file represents a command in the following format:\n\u0026lt;command-key\u0026gt;###\u0026lt;number of placeholders\u0026gt;###\u0026lt;URL with placeholders if any in format “{index of placeholder starting from 0}”\u0026gt; commands.txt\n start###0###http://localhost:8081/home-page/poc.jsp search###1###http://localhost:8081/home-page/poc.jsp?search={0} clean###0###http://localhost:8081/home-page/logout.jsp Change the main method of MyRouteBuilder.java\nprivate static Map\u0026lt;String, String\u0026gt; commandMap = new HashMap\u0026lt;String, String\u0026gt;(); private static StringBuilder commandText = new StringBuilder(\u0026#34;Commands:\\n\u0026#34;); public static void main(String[] args) throws Exception { String filePath = \u0026#34;./commands.txt\u0026#34;; if(args.length\u0026gt;0){ filePath = args[0]; } try (FileReader fileReader = new FileReader(filePath); BufferedReader reader = new BufferedReader(fileReader);) { String line = reader.readLine(); while (line != null) { String[] keyVal = line.split(\u0026#34;###\u0026#34;); commandMap.put(keyVal[0], keyVal[2]); commandText.append(\u0026#34;\\t\u0026#34;).append(keyVal[0]); 
if(!\u0026#34;0\u0026#34;.equals(keyVal[1])){ commandText.append(\u0026#34; (require \u0026#34;).append(keyVal[1]).append(\u0026#34; parameter)\u0026#34;); } commandText.append(\u0026#34;\\n\u0026#34;); line = reader.readLine(); } commandText.append(\u0026#34;\\texit\u0026#34;); commandText.append(\u0026#34;\\nEnter command and required parameters:\u0026#34;); } new Main().run(); } Configure the Camel Routes We will configure routes such as we take input (command) from console and then fetch the http URL associated to the command and then route to http URL, route the response to console as well as a log file. Define configure method as follows:\npublic void configure() { from(\u0026#34;stream:in?promptMessage=\u0026#34; + URLEncoder.encode(commandText.toString())).process(new Processor() { @Override public void process(Exchange exchange) throws Exception { String input = exchange.getIn().getBody(String.class); boolean success = true; if (input.trim().length() \u0026gt; 0) { if(\u0026#34;exit\u0026#34;.equals(input)){ System.exit(0); } String[] commandAndParams = input.split(\u0026#34; \u0026#34;); url = commandMap.get(commandAndParams[0]); if (url != null) { if (commandAndParams.length \u0026gt; 1) { String [] params = Arrays.copyOfRange(commandAndParams, 1, commandAndParams.length); url = MessageFormat.format(url, (Object[])params); }\texchange.getOut().setHeader(Exchange.HTTP_URI, url); } else { success = false; } } else { success = false; } if (!success) { throw new Exception(\u0026#34;Error in input\u0026#34;); } } }).to(\u0026#34;http://localhost:99/\u0026#34;).process(new Processor() { @Override public void process(Exchange exchange) throws Exception { StringBuffer stringBuffer = new StringBuffer(\u0026#34;\\n\\nURL :\u0026#34; + url); stringBuffer.append(\u0026#34;\\n\\nHeader :\\n\u0026#34;); Map\u0026lt;String, Object\u0026gt; headers = exchange.getIn().getHeaders(); for (String key : headers.keySet()) { if (!key.startsWith(\u0026#34;camel\u0026#34;)) { 
stringBuffer.append(key).append(\u0026#34;=\u0026#34;) .append(headers.get(key)).append(\u0026#34;\\n\u0026#34;); } } String body = exchange.getIn().getBody(String.class); stringBuffer.append(\u0026#34;\\nBody:\\n\u0026#34; + body); exchange.getOut().setBody(stringBuffer.toString()); exchange.getOut().setHeader(Exchange.FILE_NAME, constant(\u0026#34;response.txt\u0026#34;)); } }).to(\u0026#34;file:.?fileExist=Append\u0026#34;).to(\u0026#34;stream:out\u0026#34;); } Export as runnable jar Right click on the project and export an runnable jar. name it http.jar\nRun the example Run command “java -jar http.jar \u0026lt;commands file path\u0026gt;” , if you don’t specify commands files will automatically pick the “commands.txt” from same folder and display following D:\\HTTPClient\u0026gt;java -jar http.jar Commands: start menu (require 1 parameter) clean exit Enter command and required parameters: Enter command “start” Enter command and required parameters:start URL : http://localhost:8081/home-page/poc.jsp Header : … Body: .. 
Commands: start search (require 1 parameter) clean exit Enter command and required parameters: Enter command “Menu with choice” Enter command and required parameters:search IBM URL : http://localhost:8081/home-page/poc.jsp?search=IBM Header : … Body: … Commands: start menu (require 1 parameter) clean exit Enter command and required parameters: You may continue seeing the menu or may call clean or exit Enter command and required parameters:clean URL : http://localhost:8081/home-page/logout.jsp … Enter command and required parameters:exit D:\\Ericsson\\HTTPClient\u0026gt; Conclusion We have developed the Router using camel and same can directly be utilized in the project.\n","id":64,"tag":"java ; Apache Camel ; enterprise ; integration ; technology","title":"HTTP Router with Apache Camel ","type":"post","url":"https://thinkuldeep.com/post/apache-camel-http-route/"},{"content":"This article is about troubleshooting issues we have faced while using apache camel’s SSH routes. It also covers step wise guide to setup apache camel routes.\nPrerequisites Windows7 Eclipse Juno Java 1.7 Problem Statement We were getting following issues in the logs when connecting to one of the SSH server using Apache Camel-SSH. This was happening in one of the instance in production environment.\nHere are few logs :\nServer A :\nSession created... 2013-11-28 18:12:44:855 | DEBUG [MSC service thread 1-15] Connected to 2013-11-28 18:12:44:855 | DEBUG [MSC service thread 1-15] Attempting to authenticate username ******* using Password... 
2013-11-28 18:12:44:857 | INFO [NioProcessor-2] Server version string: SSH-2.0-SSHD-CORE-0.6.0 2013-11-28 18:12:45:165 | DEBUG [ClientInputStreamPump] Send SSH_MSG_CHANNEL_DATA on channel 0 2013-11-28 18:12:45:166 | DEBUG [ClientInputStreamPump] Send SSH_MSG_CHANNEL_EOF on channel 0 2013-11-28 18:12:45:178 | DEBUG [NioProcessor-2] Received packet SSH_MSG_CHANNEL_FAILURE 2013-11-28 18:12:45:178 | DEBUG [NioProcessor-2] Received SSH_MSG_CHANNEL_FAILURE on channel 0 2013-11-28 18:12:45:193 | DEBUG [NioProcessor-2] Closing session 2013-11-28 18:12:45:193 | DEBUG [NioProcessor-2] Closing channel 0 2013-11-28 18:12:45:193 | DEBUG [NioProcessor-2] Closing channel 0 immediately 2013-11-28 18:12:45:194 | DEBUG [NioProcessor-2] Closing IoSession 2013-11-28 18:12:45:194 | DEBUG [NioProcessor-2] IoSession closed 2013-11-28 18:12:45:195 | INFO [NioProcessor-2] Session ***@/***:** closed Server B :\n2013-11-29 13:53:09:280 | INFO [NioProcessor-2] Session created... 2013-11-29 13:53:09:297 | DEBUG [MSC service thread 1-13] Connected to 2013-11-29 13:53:09:297 | DEBUG [MSC service thread 1-13] Attempting to authenticate username ***** using Password... 
2013-11-29 13:53:09:300 | INFO [NioProcessor-2] Server version string: SSH-2.0-OpenSSH_5.3 2013-11-29 13:53:09:472 | DEBUG [ClientInputStreamPump] Space available for client remote window 2013-11-29 13:53:09:472 | DEBUG [ClientInputStreamPump] Send SSH_MSG_CHANNEL_DATA on channel 0 2013-11-29 13:53:09:472 | DEBUG [ClientInputStreamPump] Send SSH_MSG_CHANNEL_EOF on channel 0 2013-11-29 13:53:09:494 | DEBUG [NioProcessor-2] Received packet SSH_MSG_CHANNEL_DATA 2013-11-29 13:53:09:494 | DEBUG [NioProcessor-2] Received SSH_MSG_CHANNEL_DATA on channel 0 2013-11-29 13:53:09:494 | DEBUG [NioProcessor-2] Received packet SSH_MSG_CHANNEL_EOF 2013-11-29 13:53:09:494 | DEBUG [NioProcessor-2] Received SSH_MSG_CHANNEL_EOF on channel 0 2013-11-29 13:53:09:495 | DEBUG [NioProcessor-2] Received packet SSH_MSG_CHANNEL_REQUEST 2013-11-29 13:53:09:495 | INFO [NioProcessor-2] Received SSH_MSG_CHANNEL_REQUEST on channel 0 2013-11-29 13:53:09:495 | DEBUG [NioProcessor-2] Received packet SSH_MSG_CHANNEL_CLOSE 2013-11-29 13:53:09:495 | DEBUG [NioProcessor-2] Received SSH_MSG_CHANNEL_CLOSE on channel 0 2013-11-29 13:53:09:495 | DEBUG [NioProcessor-2] Send SSH_MSG_CHANNEL_CLOSE on channel 0 Solving the Chaos! We took the following steps to resolve the issue\n Validate SSH - Checked the SSH connection with PuTTY; it worked fine and we were able to execute commands.\n Analyze Logs - The SSH_MSG_CHANNEL_FAILURE error occurred on server A, while the connection to server B worked fine. 
We have noticed following logs\nServer A:\n2013-11-28 18:12:44:857 | INFO [NioProcessor-2] Server version string: SSH-2.0-SSHD-CORE-0.6.0 Server B:\n2013-11-29 13:53:09:300 | INFO [NioProcessor-2] Server version string: SSH-2.0-OpenSSH_5.3 So different SSH servers are being used.\nResolution 1 – Use same version of SSH Server Install SSHD-CORE-0.6.0 on Server A /local system\n Download http://mina.apache.org/sshd-project/download_0.6.0.html Extract download zip\n Go to /bin\n Run Server : Run sshd.bat(windows) or sshd.sh(linux)\n Run Client: ssh.bat -l [loginuser] -p [port] [hostname] [command]\nssh.bat -l login -p 8000 localhost Enter password same as login name “login” It will open up the OS shell where you can run commands.\n You can also try to connect ssh using Putty.exe on host and port.\n So now our new SSH server is ready to use, but if we try to connect SSH as execution channel ssh.sh or ssh.bat with some command It always returns with\nssh.bat -l login -p 8000 localhost dir Exception in thread \u0026#34;main\u0026#34; java.lang.IllegalArgumentException: command must not be null at org.apache.sshd.client.channel.ChannelExec.\u0026lt;init\u0026gt;(ChannelExec.java:35) at org.apache.sshd.client.session.ClientSessionImpl.createExecChannel(ClientSessionImpl.java:175) at org.apache.sshd.client.session.ClientSessionImpl.createChannel(ClientSessionImpl.java:160) at org.apache.sshd.client.session.ClientSessionImpl.createChannel(ClientSessionImpl.java:153) at org.apache.sshd.SshClient.main(SshClient.java:402) 53352 [NioProcessor-2] INFO org.apache.sshd.client.session.ClientSessionImpl - Closing session This is because the execution channel at client side is not well implemented. Only SCP command has been implemented. 
For now we can ignore this client error and try the camel example with new SSH server.\n Try Camel-SSH with new SSH Server (SSHD-CORE) Create Sample Camel-SSH Example Create eclipse project using archetype “camel-archetype-spring\u0026rdquo; mvn archetype:generate DarchetypeGroupId=com.tk.poc.camel -DarchetypeArtifactId=camel-archetype-spring - DarchetypeVersion=2.11.0 -DarchetypeRepository=https://repository.apache.org/content/groups/snapshots-group Create following Java file\npackage com.tk.poc.camel; import org.apache.camel.Endpoint; import org.apache.camel.Exchange; import org.apache.camel.Processor; import org.apache.camel.Producer; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.spring.Main; /** * A simple example router from a file system to an ActiveMQ queue and then to a file system * * @version */ public class MyRouteBuilder extends RouteBuilder { private static String source = \u0026#34;\u0026#34;; private static String url = \u0026#34;\u0026#34;; /** * Allow this route to be run as an application */ public static void main(String[] args) throws Exception { if (args.length \u0026gt; 0) { source = args[0]; url = args[1]; } String[] ar = {}; new Main().run(ar); } public void configure() { from(\u0026#34;file:\u0026#34; + source + \u0026#34;?noop=true\u0026amp;consumer.useFixedDelay=true\u0026amp;consumer.delay=60000\u0026#34;).process(new Processor() { public void process(Exchange exchange) throws Exception { Endpoint ePoint = exchange.getContext().getEndpoint(url); Producer producer = ePoint.createProducer(); producer.process(exchange); } }).bean(new SomeBean()); } public static class SomeBean { public void someMethod(String body) { System.out.println(\u0026#34;Received: \u0026#34; + body); } } } Pom.xml - dependencies\n\u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.camel\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;camel-ssh\u0026lt;/artifactId\u0026gt; 
\u0026lt;version\u0026gt;2.11.0\u0026lt;/version\u0026gt; \u0026lt;!-- use the same version as your Camel core version --\u0026gt; \u0026lt;/dependency\u0026gt; Clean up src/main/resources/META-INF/spring/camel-context.xml\n\u0026lt;camel:camelContext xmlns=\u0026#34;http://camel.apache.org/schema/spring\u0026#34;\u0026gt; \u0026lt;camel:package\u0026gt;com.tk.poc.camel\u0026lt;/camel:package\u0026gt; \u0026lt;/camel:camelContext\u0026gt; Run mvn eclipse:eclipse to include dependencies in class path of eclipse.\nmvn eclipse:eclipse You can now directly run the MyRouteBuilder.java in eclipse with required argument\n Source – a folder path, where folder is containing a text file having command to execute on server.\n Camel URL for SSH component\n ssh://[username]@[host]?password=[password]\u0026amp;port=[port] Let’s export the project as executable jars keeping the dependencies in separate folder\n Lets create a run.bat\njava -jar ssh.jar \u0026#34;D:/TK/Testing/data\u0026#34; \u0026#34;ssh://login@localhost?password=login\u0026amp;port=8000\u0026#34; pause Place the run.bat in the directory where jar has been exported.\n Run the camel SSH example Execute run.bat We get same error as we see on production So problem is now reproducible locally. 
Resolution 2: Implement Exec channel support in SSH component After looking at the component code, we found that only SCP command has been implemented, EXEC channel is not working correctly.\nExtract as eclipse project Download the camel-ssh source and import it as eclipse project\n Update its pom.xml as follows\n \u0026lt;project xmlns=\u0026#34;http://maven.apache.org/POM/4.0.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\u0026#34;\u0026gt; \u0026lt;modelVersion\u0026gt;4.0.0\u0026lt;/modelVersion\u0026gt; \u0026lt;groupId\u0026gt;org.apache.camel\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;camel-ssh\u0026lt;/artifactId\u0026gt; \u0026lt;packaging\u0026gt;jar\u0026lt;/packaging\u0026gt; \u0026lt;version\u0026gt;2.11.0\u0026lt;/version\u0026gt; \u0026lt;name\u0026gt;Camel :: SSH\u0026lt;/name\u0026gt; \u0026lt;description\u0026gt;Camel SSH support\u0026lt;/description\u0026gt; \u0026lt;properties\u0026gt; \u0026lt;camel.osgi.export.pkg\u0026gt;org.apache.camel.component.ssh.*\u0026lt;/camel.osgi.export.pkg\u0026gt; \u0026lt;camel.osgi.export.service\u0026gt;org.apache.camel.spi.ComponentResolver;component=ssh\u0026lt;/camel.osgi.export.service\u0026gt; \u0026lt;/properties\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.camel\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;camel-core\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.11.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.mina\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;mina-core\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.0.7\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.sshd\u0026lt;/groupId\u0026gt; 
\u0026lt;artifactId\u0026gt;sshd-core\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;0.6.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;!-- test dependencies --\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;bouncycastle\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;bcprov-jdk15\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;140\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;compile\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.slf4j\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;slf4j-log4j12\u0026lt;/artifactId\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;version\u0026gt;1.6.1\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;/project\u0026gt; Update the component - Change the “SshEndpoint.java”\nFrom :\npublic SshResult sendExecCommand(String command) throws Exception { SshResult result = null; ... ... ClientChannel channel = session.createChannel(ClientChannel.CHANNEL_EXEC, command); ByteArrayInputStream in = new ByteArrayInputStream(new byte[]{0}); channel.setIn(in); ... ... } To:\npublic SshResult sendExecCommand(String command) throws Exception { SshResult result = null; ... ... ClientChannel channel = session.createChannel(ClientChannel.CHANNEL_SHELL); log.debug(\u0026#34;Command : \u0026#34; + command); InputStream inputStream = new ByteArrayInputStream(command.getBytes()); channel.setIn(inputStream); ... ... 
} Build Again - Build the jar\nmvn install Try Camel-SSH Again Use updated jar - Rebuild the Camel SSH example or place the camel-ssh-2.11.0.jar in the ssh_lib folder.\n Specify the shell termination command\n/data/command.txt\nls exit Execute Run.bat\nIt returns the result of the command on the server\nSH_MSG_CHANNEL_EXTENDED_DATA on channel 0 [ NioProcessor-2] ClientSessionImpl DEBUG Received packet SSH_MSG_CHANNEL_EOF [ NioProcessor-2] ChannelShell INFO Received SSH_MSG_CHANNEL_EOF on channel 0 [ NioProcessor-2] ClientSessionImpl DEBUG Received packet SSH_MSG_CHANNEL_REQUEST [ NioProcessor-2] ChannelShell INFO Received SSH_MSG_CHANNEL_REQUEST on channel 0 [ NioProcessor-2] ClientSessionImpl DEBUG Received packet SSH_MSG_CHANNEL_CLOSE [ NioProcessor-2] ChannelShell INFO Received SSH_MSG_CHANNEL_CLOSE on channel 0 [ NioProcessor-2] ChannelShell INFO Send SSH_MSG_CHANNEL_CLOSE on channel 0 Received: 2 anaconda-ks.cfg key.pem Kuldeep logfile Music nohup.out Videos Another Issue on Server We then faced an issue where the channel was not being closed and the following log kept printing\nResolution 3: Identify the termination command on Shell We needed to identify the termination command for the shell on the server; we noticed that the termination command on the server\u0026rsquo;s default shell was \u0026ldquo;exit;\u0026rdquo; instead of \u0026ldquo;exit\u0026rdquo;. We updated /data/command.txt accordingly, /data/command.txt\nls exit; and things worked for us.\nSo here you have a customized Apache Camel component.\n","id":65,"tag":"Apache Camel ; java ; ssh ; tutorial ; customization ; technology","title":"Apache Camel SSH Component","type":"post","url":"https://thinkuldeep.com/post/apache-camel-ssh-component/"},{"content":"This article helps you get started with Groovy, with a step-by-step guide and a series of examples. It starts with an overview and then covers detailed examples.\nGroovy Overview Groovy is an object-oriented scripting language which provides dynamic, easy-to-use and integration capabilities to the Java Virtual Machine. 
It absorbs most of its syntax from Java, and it is much more powerful in terms of functionality, manifested in the form of closures, dynamic typing, builders etc. Groovy also provides simplified APIs for accessing databases and XML.\nGroovy is almost compatible with Java, e.g. almost every Java construct is valid Groovy code, which makes using Groovy easy for an experienced Java programmer.\nFeatures While simplicity and ease of use are the leading principles of Groovy, here are a few nice features:\n Groovy allows changing classes and methods at runtime. For example, if a class does not have a certain method and this method is called by another class, the called class can decide what to do with this call. Groovy does not require semicolons to separate statements if they are on different lines (separated by new-lines). Groovy has list processing and regular expressions directly built into the language. Groovy also implements the Builder pattern, which allows easily creating GUIs, XML documents or Ant tasks. Asserts in Groovy will always be executed. Working with Groovy We will create basic initial examples and understand the syntax and semantics of Groovy, the scripting language.\nHello Groovy World This example shows how to write a groovy program. It is a basic program to demo the Groovy code style.\n A Groovy source file ends with the extension .groovy and all Groovy classes are public by default. In Groovy all fields of a class have the access modifier \u0026ldquo;private\u0026rdquo; by default. package com.tk.groovy.poc class GroovyHelloWorld { // declaring the string variable \t/** The first name. */ String firstName String lastName /** * Main. 
* * @param args the args * This is how the main function is defined in groovy */ static void main(def args) { GroovyHelloWorld p = new GroovyHelloWorld() p.firstName = \u0026#34;Kuldeep\u0026#34; p.lastName = \u0026#34;Singh\u0026#34; println(\u0026#34;Hello \u0026#34; + p.firstName + \u0026#34; \u0026#34; + p.lastName);\t} } Encapsulation in Groovy Groovy doesn’t force you to match the file name to the name of the public class you define. In Groovy all fields of a class have the access modifier \u0026ldquo;private\u0026rdquo; by default, and Groovy automatically provides getter and setter methods for the fields. Therefore all Groovy classes are JavaBeans by default.\npackage com.tk.groovy.poc; class Person { String firstName; String lastName; static main(def args) { // creating an instance of the person class \tPerson person = new Person(); // groovy auto-generates the setter and getter methods for the fields \tperson.setFirstName(\u0026#34;Kuldeep\u0026#34;); person.setLastName(\u0026#34;Singh\u0026#34;) // calling the auto-generated getter methods \tprintln(person.getFirstName() + \u0026#34; \u0026#34; + person.getLastName()) } } Constructors Groovy will also directly create constructors in which you can specify the elements you would like to set during construction. Groovy achieves this by using the default constructor and then calling the setter methods for the attributes. 
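To appreciate what Groovy generates automatically, here is a rough plain-Java counterpart (a comparison sketch added here, not part of the original article); every constructor variant and accessor must be written out by hand:

```java
// Hypothetical Java equivalent of the Groovy person example below:
// the constructors and getters that Groovy derives automatically
// have to be coded explicitly.
public class JavaPerson {
    private String firstName;
    private String lastName;

    public JavaPerson() { }                    // default constructor

    public JavaPerson(String firstName) {      // firstName-only variant
        this.firstName = firstName;
    }

    public JavaPerson(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }

    public String getFirstName() { return firstName; }
    public String getLastName() { return lastName; }

    public static void main(String[] args) {
        JavaPerson person = new JavaPerson("Kuldeep", "Singh");
        System.out.println(person.getFirstName() + " " + person.getLastName());
    }
}
```

The Groovy class gets all of these variants for free from its named-argument map constructor.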
package com.tk.groovy.poc class GroovyPerson { String firstName String lastName static void main(def args) { //Groovy default constructor \tGroovyPerson defaultPerson = new GroovyPerson(); // 1 generated Constructor \t// similar constructor is generated for lastName also \tGroovyPerson personWithFirstNameOnly = new GroovyPerson(firstName :\u0026#34;Kuldeep\u0026#34;) println(personWithFirstNameOnly.getFirstName()); // 2 generated Constructor \tGroovyPerson person = new GroovyPerson(firstName :\u0026#34;Kuldeep\u0026#34;, lastName :\u0026#34;Singh\u0026#34;) println(person.firstName + \u0026#34; \u0026#34; + person.lastName) } } Groovy Closures A closure in Groovy is an anonymous chunk of code that may take arguments, return a value, and reference and use variables declared in its surrounding scope. In many ways it resembles anonymous inner classes in Java, and closures are often used in Groovy in the same way that Java developers use anonymous inner classes. However, Groovy closures are much more powerful than anonymous inner classes, and far more convenient to specify and use.\nImplicit variable: when a closure takes a single argument, the variable ‘it’ is implicitly defined and has a special meaning.\npackage com.tk.groovy.poc class GroovyClosures { /** * Main Function. * * @param args the args */ def static main(args) { /** * Defining closure printSum * * -\u0026gt; separates the closure parameters from the body */ def printSum = { a, b -\u0026gt; println a+b } printSum(5,7) /** * Closure with implicit argument */ def implicitArgClosure = { println it } implicitArgClosure( \u0026#34;I am using implicit variable \u0026#39;it\u0026#39;\u0026#34; ) } } Closures are very heavily used in Groovy. The example above is a very basic one. 
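For readers coming from Java, the closures above map loosely onto lambdas bound to functional interfaces. The following sketch (using java.util.function; an added comparison, not from the original article) mirrors printSum and the implicit-argument closure:

```java
import java.util.function.BinaryOperator;
import java.util.function.Consumer;

public class ClosureAnalogy {
    // Rough analogue of Groovy's: def printSum = { a, b -> println a + b }
    static final BinaryOperator<Integer> sum = (a, b) -> a + b;

    // Analogue of the implicit-'it' closure: a single declared parameter
    static final Consumer<String> printIt = it -> System.out.println(it);

    public static void main(String[] args) {
        System.out.println(sum.apply(5, 7)); // prints 12
        printIt.accept("I am using implicit variable 'it'");
    }
}
```

Unlike Groovy closures, Java lambdas can only capture effectively final locals and cannot mutate their enclosing scope, which is part of why the article calls closures more powerful.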
For deep understanding of closure kindly read : http://groovy.codehaus.org/Closures+-+Formal+Definition\nMethods and Collections Below program shows how to declare methods and how to use collections in groovy and some basic operations on collections.\n Groovy has native language support for collections, lists, maps sets, arrays etc. - Each collection can be iterated either by assigning the values variable, Or by using implicit variable ‘it’ Groovy also have Range collection which can be directly used to include all values in a range A Groovy Closure is like a \u0026ldquo;code block\u0026rdquo; or a method pointer. It is a piece of code that is defined and then executed at a later point. It has some special properties like implicit variables, support for currying and support for free variables package com.tk.groovy.poc class GroovyCollections { def listOperations(){ println(\u0026#34;Operation over LIST: \u0026#34;) // defining the Integer type list using generic \tList list = [1, 2, 3, 4] // Note this is same as below declaration \t// Here we are explicitly declaring the type of list \t//List\u0026lt;Integer\u0026gt; list = [1, 2, 3, 4] \tprintln(\u0026#34;Iterating the list using variable assignment: \u0026#34;) // using .each method to iterate over list \t// \u0026#39;itemVar\u0026#39; is implicitly declared as a variable of same type as list \t// with each block, values will be assigned to \u0026#39;itemVar\u0026#39; \t// this is syntax of groovy Closure \tlist.each{itemVar-\u0026gt; print itemVar + \u0026#34;, \u0026#34; } println(\u0026#34;\u0026#34;) println(\u0026#34;Iterating the list using implicit variable \u0026#39;it\u0026#39;: \u0026#34;) // inspite of declaring the variable explicitly directly use \t// the implicit variable \u0026#39;it\u0026#39; \tlist.each{print it + \u0026#34;, \u0026#34;} } def mapOpertion(){ println(\u0026#34;Operation over MAP: \u0026#34;) def map = [\u0026#34;Java\u0026#34;:\u0026#34;server\u0026#34;, 
\u0026#34;Groovy\u0026#34;:\u0026#34;server\u0026#34;, \u0026#34;JavaScript\u0026#34;:\u0026#34;web\u0026#34;] println(\u0026#34;Iterating the map using variable assignment: \u0026#34;) map.each{k,v-\u0026gt; print k + \u0026#34;, \u0026#34; println v } println(\u0026#34;\u0026#34;) println(\u0026#34;Iterating the map using implicit variable \u0026#39;it\u0026#39;: \u0026#34;) map.each{ print it.key + \u0026#34;, \u0026#34; println it.value } } def setOperations(){ Set s1 = [\u0026#34;a\u0026#34;, 2, \u0026#34;a\u0026#34;, 2, 3] println(\u0026#34;Iterating the map using variable assignment: \u0026#34;) s1.each{setVar-\u0026gt; print setVar + \u0026#34;, \u0026#34;} println(\u0026#34;\u0026#34;) println(\u0026#34;Iterating the set using implicit variable \u0026#39;it\u0026#39;: \u0026#34;) s1.each{ print it + \u0026#34;, \u0026#34;} } def rangeOperations() { def range = 1..50 //get the end points of the range \tprintln() println (\u0026#34;Range starts from: \u0026#34; + range.from + \u0026#34; -- Range ends to: \u0026#34; + range.to) println(\u0026#34;Iterating the range using variable assignment: \u0026#34;) range.each{rangeVar-\u0026gt; print rangeVar + \u0026#34;, \u0026#34;} println(\u0026#34;\u0026#34;) println(\u0026#34;Iterating the range using implicit variable \u0026#39;it\u0026#39;: \u0026#34;) range.each{ print it + \u0026#34;, \u0026#34;} } /** * Main Method * * @param args the args * * This is how we declare a static method in groovy * by using static keyword */ static main(def args){ GroovyCollections groovyCol = new GroovyCollections() groovyCol.listOperations(); println(\u0026#34;\u0026#34;) groovyCol.mapOpertion(); println(\u0026#34;\u0026#34;) groovyCol.setOperations(); println(\u0026#34;\u0026#34;) groovyCol.rangeOperations(); } }  File operations \u0026amp; Exception handling Exceptions in groovy: groovy doesn\u0026rsquo;t actually have checked exceptions. 
Groovy will pass an exception to the calling code until it is handled, but we don\u0026rsquo;t have to define it in our method signature. package com.tk.groovy.poc class GroovyFileOperations { private def writeFile(def String filePath, def String content) { try { def file = new File(filePath); // writing the content via the writer that withWriter \t// passes to the closure as the \u0026#39;out\u0026#39; parameter \tfile.withWriter { out -\u0026gt; out.writeLine(content) } // an untyped catch parameter catches any Exception \t} catch (All) { // print the exception \tprintln(All) } } private def readFile(def String filePath) { def file = new File(filePath); file.eachLine { line -\u0026gt; println(line) } } def static main(def args) { def filePath = \u0026#34;LICENCE.txt\u0026#34;; def content = \u0026#34;testContent\u0026#34; GroovyFileOperations fileOp = new GroovyFileOperations(); println(\u0026#34;Writing content in the file\u0026#34;) fileOp.writeFile(filePath, content); println(\u0026#34;Content from the file\u0026#34;) fileOp.readFile(filePath); } } Dynamic method invocation Using the dynamic features of Groovy, we can call a method by its name alone, and define static methods\npackage com.tk.groovy.poc class Dog { /** * Bark. */ def bark() { println \u0026#34;woof!\u0026#34; } /** * Sit. */ def sit() { println \u0026#34;sitting\u0026#34; } /** * Jump. */ def jump() { println \u0026#34;boing!\u0026#34; } /** * Do action. * * @param animal the animal * @param action the action * * This is how we define the static method */ def static doAction( animal, action ) { //action name is passed at invocation \tanimal.\u0026#34;$action\u0026#34;() } /** * Main. * * @param args the args */ static main(def args) { def dog = new Dog() // calling the method by its name \tdoAction( dog, \u0026#34;bark\u0026#34; ) doAction( dog, \u0026#34;jump\u0026#34; ) } } Groovy and Java Inter-Operability This example shows how we can use a groovy script program with java programs. 
There are two different ways shown below to integrate a groovy script with a java program-\n Using GroovyClassLoader: GroovyClassLoader can load scripts that are stored outside the local file system, and it also lets you make your integration environment safe and sandboxed, permitting the scripts to perform only the operations you wish to allow. Directly Injecting the Groovy Class Object – We can directly create an object of the Groovy script class and then perform the operations as needed. Groovy Class:\npackage com.tk.groovy.poc.groovy import com.tk.groovy.poc.java.JavaLearner /** * The Class GroovyLearner. */ class GroovyLearner { def printGroovy() { println(\u0026#34;I am a groovy Learner\u0026#34;) } static void main(def args) { JavaLearner javaLearner = new JavaLearner(); javaLearner.printJava(); } } Java Class:\npackage com.tk.groovy.poc.java; import groovy.lang.GroovyClassLoader; import java.io.File; import java.io.IOException; import java.lang.reflect.InvocationTargetException; import org.codehaus.groovy.control.CompilationFailedException; import com.tk.groovy.poc.groovy.GroovyLearner; /** * The Class JavaLearner. */ public class JavaLearner { /** * Prints the java. */ public void printJava() { System.out.println(\u0026#34;I am a java learner\u0026#34;); } /** * The main method. 
* * @param args * the arguments */ public static void main(String[] args) { // using GroovyCloassLoader class \tString relativeGroovyClassPath = \u0026#34;src\\\\com\\\\tk\\\\groovy\\\\poc\\\\groovy\\\\GroovyLearner.groovy\u0026#34;; try { Class groovyClass = new GroovyClassLoader().parseClass(new File(relativeGroovyClassPath)); Object groovyClassInstance = groovyClass.newInstance(); groovyClass.getDeclaredMethod(\u0026#34;printGroovy\u0026#34;, new Class[] {}).invoke(groovyClassInstance, new Object[] {}); } catch (CompilationFailedException | IOException | InstantiationException | IllegalAccessException | IllegalArgumentException | InvocationTargetException | NoSuchMethodException | SecurityException e) { e.printStackTrace(); } // directly calling groovy class in java \t// this work on the fact that both groovy and java are compiled to \t// .class file \tGroovyLearner groovyLearner = new GroovyLearner(); groovyLearner.printGroovy(); } }  Building Groovy Applications Below section demonstrates the use of Groovy script with Maven, Ant, and logging library –log4j.\nIntegration with Ant: Below example shows you how easy it is to enhance the build process by using Groovy rather than XML as your build configuration format.\nCentral to the niftiness of Ant within Groovy is the notion of builders. Essentially, builders allow you to easily represent nested tree-like data structures, such as XML documents, in Groovy. And that is the hook: With a builder, specifically an Ant-Builder, you can effortlessly construct Ant XML build files and execute the resulting behavior without having to deal with the XML itself. And that\u0026rsquo;s not the only advantage of working with Ant in Groovy. Unlike XML, Groovy is a highly expressive development environment wherein you can easily code looping constructs, conditionals, and even explore the power of reuse, rather than the cut-and-paste drill you may have previously associated with creating new build.xml files. 
And you still get to do it all within the Java platform!\nThe beauty of builders, and especially Groovy’s Ant Builder, is that their syntax follows a logical progression closely mirroring that of the XML document they represent. Methods attached to an instance of Ant Builder match the corresponding Ant task; likewise, those methods can take parameters (in the form of a map) that correspond to the task\u0026rsquo;s attributes. What more, nested tags are such as includes and file-sets are then defined as closures.\npackage com.tk.groovy.project /** * The Class Build. */ class Build { def base_dir = \u0026#34;.\u0026#34; def src_dir = base_dir + \u0026#34;/src\u0026#34; def lib_dir = base_dir + \u0026#34;/lib\u0026#34; def build_dir = base_dir + \u0026#34;/classes\u0026#34; def dist_dir = base_dir + \u0026#34;/dist\u0026#34; def file_name = \u0026#34;Fibonacci\u0026#34; def ant=null; void clean() { ant.delete(dir: \u0026#34;${build_dir}\u0026#34;) ant.delete(dir: \u0026#34;${dist_dir}\u0026#34;) } /** * Builds the. 
*/ void build() { ant= new AntBuilder().sequential { taskdef name: \u0026#34;groovyc\u0026#34;, classname: \u0026#34;org.codehaus.groovy.ant.Groovyc\u0026#34; groovyc srcdir: \u0026#34;src\u0026#34;, destdir: \u0026#34;${build_dir}\u0026#34;, { classpath { fileset dir: \u0026#34;${lib_dir}\u0026#34;, { include name: \u0026#34;*.jar\u0026#34; } pathelement path: \u0026#34;${build_dir}\u0026#34; } javac source: \u0026#34;1.7\u0026#34;, target: \u0026#34;1.7\u0026#34;, debug: \u0026#34;on\u0026#34; } jar destfile: \u0026#34;${dist_dir}/${file_name}.jar\u0026#34;, basedir: \u0026#34;${build_dir}\u0026#34;,filesetmanifest: \u0026#34;mergewithoutmain\u0026#34;,{ manifest{ attribute( name: \u0026#39;Main-Class\u0026#39;, value: \u0026#34;com.tk.groovy.Fibonacci\u0026#34; ) } } java classname:\u0026#34;${file_name}\u0026#34;, fork:\u0026#39;true\u0026#39;,{ classpath { fileset dir: \u0026#34;${lib_dir}\u0026#34;, { include name: \u0026#34;*.jar\u0026#34; } pathelement path: \u0026#34;${build_dir}\u0026#34; } } } } /** * Run. * * @param args the args */ void run(args) { if ( args.size() \u0026gt; 0 ) { invokeMethod(args[0], null ) } else { build() } } /** * Main. * * @param args the args */ static main(args) { def b = new Build() b.run(args) } } Integration with Maven: An approach to Groovy and Maven integration is to use the gmaven plugin for Maven. 
This example is made by using\nThese examples could be useful in any project where we want to do some kind of sanitary test over our ear file and can perform handy things at the build time.\nFor maven to compile and understand the Groovy class files, we have to follow the same folder layout as we used to do for java project with only few differences.\n project/ | +- | +- pom.xml | | +- src/main/resources/ | | +- src/main/groovy/ | | | +- com/nagarro/groovy | | | +- XXX.groovy | +- src/main/test/ | +- com/nagarro/groovy | +- XXXTest.Groovy pom.xml\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;project xmlns=\u0026#34;http://maven.apache.org/POM/4.0.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\u0026#34;\u0026gt; \u0026lt;modelVersion\u0026gt;4.0.0\u0026lt;/modelVersion\u0026gt; \u0026lt;groupId\u0026gt;GroovyMavenPOC\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;GroovyMavenPOC\u0026lt;/artifactId\u0026gt; \u0026lt;name\u0026gt;Example Project\u0026lt;/name\u0026gt; \u0026lt;version\u0026gt;1.0.0\u0026lt;/version\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.codehaus.groovy.maven.runtime\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;gmaven-runtime-1.6\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;junit\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;junit\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;3.8.1\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;build\u0026gt; \u0026lt;plugins\u0026gt; \u0026lt;plugin\u0026gt; 
\u0026lt;groupId\u0026gt;org.codehaus.groovy.maven\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;gmaven-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0\u0026lt;/version\u0026gt; \u0026lt;executions\u0026gt; \u0026lt;execution\u0026gt; \u0026lt;goals\u0026gt; \u0026lt;goal\u0026gt;generateStubs\u0026lt;/goal\u0026gt; \u0026lt;goal\u0026gt;compile\u0026lt;/goal\u0026gt; \u0026lt;goal\u0026gt;generateTestStubs\u0026lt;/goal\u0026gt; \u0026lt;goal\u0026gt;testCompile\u0026lt;/goal\u0026gt; \u0026lt;/goals\u0026gt; \u0026lt;/execution\u0026gt; \u0026lt;/executions\u0026gt; \u0026lt;/plugin\u0026gt; \u0026lt;/plugins\u0026gt; \u0026lt;/build\u0026gt; \u0026lt;/project\u0026gt; Groovy File\npackage com.tk.groovy.poc import groovy.util.GroovyTestCase class HelloWorldTest extends GroovyTestCase { void testShow() { new HelloWorld().show() } }\tpackage com.tk.groovy.poc import groovy.util.GroovyTestCase /** * The Class HelperTest. */ class HelperTest extends GroovyTestCase { void testHelp() { new Helper().help(new HelloWorld()) } } Performance Analysis This POC shows how significant the performance improvements in Groovy 2.0 have turned out and how Groovy 2.0 would now compare to Java in terms of performance.\nIn case the performance gap meanwhile had become minor or at least acceptable it would certainly be time to take a serious look at Groovy.\nThe performance measurement consists of calculating Fibonacci numbers with same code in Groovy script and Java Class file.\nAll measurements were done on an Intel Core i7-3720QM CPU 2.60 GHz using JDK7u6 running on Windows 7 with Service Pack 1. I used Eclipse Juno with the Groovy plugin using the Groovy compiler version 1.8.6.xx-20121219-0800-e42-RELEASE.\nFibonacci Algorithm Java Program\npackage com.nagarro.groovy.poc; public class JavaFibonacci { public static int fibStaticTernary(int n) { return (n \u0026gt;= 2) ? 
fibStaticTernary(n - 1) + fibStaticTernary(n - 2) : 1; } public static int fibStaticIf(int n) { if (n \u0026gt;= 2) return fibStaticIf(n - 1) + fibStaticIf(n - 2); else return 1; } int fibTernary(int n) { return (n \u0026gt;= 2) ? fibStaticTernary(n - 1) + fibStaticTernary(n - 2) : 1; } int fibIf(int n) { if (n \u0026gt;= 2) return fibIf(n - 1) + fibIf(n - 2); else return 1; } public static void main(String[] args) { long start = System.currentTimeMillis(); JavaFibonacci.fibStaticTernary(40); System.out.println(\u0026#34;Java(static ternary): \u0026#34; + (System.currentTimeMillis() - start) + \u0026#34;ms\u0026#34;); start = System.currentTimeMillis(); JavaFibonacci.fibStaticIf(40); System.out.println(\u0026#34;Java(static if): \u0026#34; + (System.currentTimeMillis() - start) + \u0026#34;ms\u0026#34;); start = System.currentTimeMillis(); new JavaFibonacci().fibTernary(40); System.out.println(\u0026#34;Java(instance ternary): \u0026#34; + (System.currentTimeMillis() - start) + \u0026#34;ms\u0026#34;); start = System.currentTimeMillis(); new JavaFibonacci().fibIf(40); System.out.println(\u0026#34;Java(instance if): \u0026#34; + (System.currentTimeMillis() - start) + \u0026#34;ms\u0026#34;); } } Groovy Program\npackage com.tk.groovy.poc class GroovyFibonacci { static int fibStaticTernary (int n) { n \u0026gt;= 2 ? fibStaticTernary(n-1) + fibStaticTernary(n-2) : 1 } static int fibStaticIf (int n) { if(n \u0026gt;= 2) fibStaticIf(n-1) + fibStaticIf(n-2) else 1 } int fibTernary (int n) { n \u0026gt;= 2 ? 
fibTernary(n-1) + fibTernary(n-2) : 1 } int fibIf (int n) { if(n \u0026gt;= 2) fibIf(n-1) + fibIf(n-2) else 1 } public static void main(String[] args) { def warmup = true def warmUpIters = 50 def start = System.currentTimeMillis() JavaFibonacci.fibStaticTernary(40) println(\u0026#34;Groovy(static ternary): ${System.currentTimeMillis() - start}ms\u0026#34;) start = System.currentTimeMillis() JavaFibonacci.fibStaticIf(40) println(\u0026#34;Groovy(static if): ${System.currentTimeMillis() - start}ms\u0026#34;) def fib = new JavaFibonacci() start = System.currentTimeMillis() fib.fibTernary(40) println(\u0026#34;Groovy(instance ternary): ${System.currentTimeMillis() - start}ms\u0026#34;) start = System.currentTimeMillis() fib.fibIf(40) println(\u0026#34;Groovy(instance if): ${System.currentTimeMillis() - start}ms\u0026#34;) } } See below table for performance metric s of above two programs:\n Groovy 2.0 Java static ternary 1015ms 746ms static if 905ms 739ms instance ternary 708ms 739ms instance if 924ms 739ms IO Operations Now another performance test is done on I/O operations by Groovy and Java. 
See below the respective programs and the performance metric table.\nAll measurements were done using the previous configuration, with an additional file specification; we used a test.txt file with the following specification: No of Lines- 5492493 Size of File- 243 Mega Bytes\nJava File Read\npackage com.tk.groovy.poc; import java.io.BufferedReader; import java.io.File; import java.io.FileNotFoundException; import java.io.FileReader; import java.io.IOException; public class ReadFileJava { private static String FILE_NAME = \u0026#34;LICENCE.txt\u0026#34;; public static void readFile() { BufferedReader br = null; try { File file = new File(FILE_NAME); double bytes = file.length(); double kilobytes = (bytes / 1024); double megabytes = (kilobytes / 1024); br = new BufferedReader(new FileReader(file)); int count = 0; while (br.readLine() != null) { count++; } } catch (FileNotFoundException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } } public static void main(String[] args) { long start = System.currentTimeMillis(); readFile(); long end = System.currentTimeMillis(); System.out.println(\u0026#34;Time Consumed------\u0026gt;\u0026gt;\u0026gt;\u0026gt;\u0026#34; + (end - start)+\u0026#34;ms\u0026#34;); } } Groovy File Read\npackage com.tk.groovy.poc class GroovyReadFile { static main(def a){ def filePath = \u0026#34;D:\\\\test.txt\u0026#34; long start = System.currentTimeMillis(); int count = 0; new File(filePath).eachLine{ count++; } println count; long end = System.currentTimeMillis(); println (\u0026#34;Time Consumed------\u0026gt;\u0026gt;\u0026gt;\u0026gt;\u0026#34;+(end - start) +\u0026#34;ms\u0026#34;); } } Performance metric table:\n Groovy 2.0 Java Iteration 1 2848ms 1667ms Iteration 2 2649ms 1578ms Iteration 3 2692ms 1811ms Iteration 4 2686ms 1799ms Conclusion I would like to see some benefits in using Groovy where appropriate, and perhaps areas where it is better than Java. But a lot of the \u0026ldquo;features\u0026rdquo; of dynamic languages appear to me as disadvantages. For example, all the useful compile-time checks and IDE support (auto completion etc.) are missing or not really good. 
Code in Groovy (and other similar languages) can often be much shorter and more concise than the same code in a language like Java.\nThe times when Groovy was some 10-20 times slower than Java are definitely over, whether @CompileStatic is used or not. This means to me that Groovy is ready for applications where performance has to be somewhat comparable to Java.\nSo I see Groovy\u0026rsquo;s place where you have to script a fast and small solution to a specific problem, as shown with its use in the Maven integration. References\n http://groovy.codehaus.org/Beginners+Tutorial http://www.vogella.com/articles/Groovy/article.html http://groovy.codehaus.org/Getting+Started+Guide http://groovy.codehaus.org/Dynamic+Groovy http://groovy.codehaus.org/Closures http://groovy.codehaus.org/Closures+-+Formal+Definition This article is also supported by my friend Dhruv Bansal\n","id":66,"tag":"groovy ; java ; introduction ; language ; comparison ; technology","title":"Groovy - Getting Started","type":"post","url":"https://thinkuldeep.com/post/groovy-getting-started/"},{"content":"In the previous article we learned the basics of Scala; in this article we will learn how to set up build scripts for Scala and build applications using Scala.\nWe will also look at a few web development frameworks for Scala and compare them with similar frameworks in Java.\nBuilding Scala Applications The section below demonstrates the use of Scala scripts with Maven, Ant, and the logging library LogBack.\nIntegration with Ant The example below shows how a Scala project can be built by an Ant build.\n Scala build task Use of ScalaTestAntTask to run Scala test cases Ant build.xml \u0026lt;?xml version=\u0026#34;1.0\u0026#34;?\u0026gt; \u0026lt;project name=\u0026#34;scalaantproject\u0026#34; default=\u0026#34;run\u0026#34;\u0026gt; \u0026lt;!-- root directory of this project --\u0026gt; \u0026lt;property name=\u0026#34;project.dir\u0026#34; value=\u0026#34;.\u0026#34; /\u0026gt; \u0026lt;!-- main class to run --\u0026gt; 
\u0026lt;property name=\u0026#34;main.class\u0026#34; value=\u0026#34;\u0026lt;fully qualified main class name\u0026gt;\u0026#34; /\u0026gt; \u0026lt;!-- location of scalatest.jar for unit testing --\u0026gt; \u0026lt;property name=\u0026#34;scalatest.jar\u0026#34; value=\u0026#34;${basedir}/lib/scalatest_2.9.0-1.9.1.jar\u0026#34;/\u0026gt; \u0026lt;target name=\u0026#34;init\u0026#34;\u0026gt; \u0026lt;property environment=\u0026#34;env\u0026#34; /\u0026gt; \u0026lt;!-- derived path names --\u0026gt; \u0026lt;property name=\u0026#34;build.dir\u0026#34; value=\u0026#34;${project.dir}/build\u0026#34; /\u0026gt; \u0026lt;property name=\u0026#34;source.dir\u0026#34; value=\u0026#34;${project.dir}/src\u0026#34; /\u0026gt; \u0026lt;property name=\u0026#34;test.dir\u0026#34; value=\u0026#34;${project.dir}/test\u0026#34; /\u0026gt; \u0026lt;!-- scala libraries for classpath definitions --\u0026gt; \u0026lt;property name=\u0026#34;scala-library.jar\u0026#34; value=\u0026#34;${scala.home}/lib/scala-library.jar\u0026#34; /\u0026gt; \u0026lt;property name=\u0026#34;scala-compiler.jar\u0026#34; value=\u0026#34;${scala.home}/lib/scala-compiler.jar\u0026#34; /\u0026gt; \u0026lt;!-- classpath for the compiler task definition --\u0026gt; \u0026lt;path id=\u0026#34;scala.classpath\u0026#34;\u0026gt; \u0026lt;pathelement location=\u0026#34;${scala-compiler.jar}\u0026#34; /\u0026gt; \u0026lt;pathelement location=\u0026#34;${scala-library.jar}\u0026#34; /\u0026gt; \u0026lt;/path\u0026gt; \u0026lt;!-- classpath for project build --\u0026gt; \u0026lt;path id=\u0026#34;build.classpath\u0026#34;\u0026gt; \u0026lt;pathelement location=\u0026#34;${scala-library.jar}\u0026#34; /\u0026gt; \u0026lt;fileset dir=\u0026#34;${project.dir}/lib\u0026#34;\u0026gt; \u0026lt;include name=\u0026#34;*.jar\u0026#34; /\u0026gt; \u0026lt;/fileset\u0026gt; \u0026lt;pathelement location=\u0026#34;${build.dir}/classes\u0026#34; /\u0026gt; \u0026lt;/path\u0026gt; \u0026lt;!-- classpath for unit test build --\u0026gt; 
\u0026lt;path id=\u0026#34;test.classpath\u0026#34;\u0026gt; \u0026lt;pathelement location=\u0026#34;${scala-library.jar}\u0026#34; /\u0026gt; \u0026lt;pathelement location=\u0026#34;${scalatest.jar}\u0026#34; /\u0026gt; \u0026lt;pathelement location=\u0026#34;${build.dir}/classes\u0026#34; /\u0026gt; \u0026lt;/path\u0026gt; \u0026lt;!-- definition for the \u0026#34;scalac\u0026#34; and \u0026#34;scaladoc\u0026#34; ant tasks --\u0026gt; \u0026lt;taskdef resource=\u0026#34;scala/tools/ant/antlib.xml\u0026#34;\u0026gt; \u0026lt;classpath refid=\u0026#34;scala.classpath\u0026#34; /\u0026gt; \u0026lt;/taskdef\u0026gt; \u0026lt;!-- definition for the \u0026#34;scalatest\u0026#34; ant task --\u0026gt; \u0026lt;taskdef name=\u0026#34;scalatest\u0026#34; classname=\u0026#34;org.scalatest.tools.ScalaTestAntTask\u0026#34;\u0026gt; \u0026lt;classpath refid=\u0026#34;test.classpath\u0026#34;/\u0026gt; \u0026lt;/taskdef\u0026gt; \u0026lt;/target\u0026gt; \u0026lt;!-- delete compiled files --\u0026gt; \u0026lt;target name=\u0026#34;clean\u0026#34; depends=\u0026#34;init\u0026#34; description=\u0026#34;clean\u0026#34;\u0026gt; \u0026lt;delete dir=\u0026#34;${build.dir}\u0026#34; /\u0026gt; \u0026lt;delete dir=\u0026#34;${project.dir}/doc\u0026#34; /\u0026gt; \u0026lt;delete file=\u0026#34;${project.dir}/lib/scala-library.jar\u0026#34; /\u0026gt; \u0026lt;/target\u0026gt; \u0026lt;!-- compile project --\u0026gt; \u0026lt;target name=\u0026#34;build\u0026#34; depends=\u0026#34;init\u0026#34; description=\u0026#34;build\u0026#34;\u0026gt; \u0026lt;buildnumber /\u0026gt; \u0026lt;tstamp /\u0026gt; \u0026lt;mkdir dir=\u0026#34;${build.dir}/classes\u0026#34; /\u0026gt; \u0026lt;scalac srcdir=\u0026#34;${source.dir}\u0026#34; destdir=\u0026#34;${build.dir}/classes\u0026#34; classpathref=\u0026#34;build.classpath\u0026#34; force=\u0026#34;never\u0026#34; deprecation=\u0026#34;on\u0026#34;\u0026gt; \u0026lt;include name=\u0026#34;**/*.scala\u0026#34; /\u0026gt; \u0026lt;/scalac\u0026gt; 
\u0026lt;/target\u0026gt; \u0026lt;!-- run program --\u0026gt; \u0026lt;target name=\u0026#34;run\u0026#34; depends=\u0026#34;build\u0026#34; description=\u0026#34;run\u0026#34;\u0026gt; \u0026lt;java classname=\u0026#34;${main.class}\u0026#34; classpathref=\u0026#34;build.classpath\u0026#34; /\u0026gt; \u0026lt;/target\u0026gt; \u0026lt;!-- build unit tests --\u0026gt; \u0026lt;target name=\u0026#34;buildtest\u0026#34; depends=\u0026#34;build\u0026#34;\u0026gt; \u0026lt;mkdir dir=\u0026#34;${build.dir}/test\u0026#34; /\u0026gt; \u0026lt;scalac srcdir=\u0026#34;${test.dir}\u0026#34; destdir=\u0026#34;${build.dir}/test\u0026#34; classpathref=\u0026#34;test.classpath\u0026#34; force=\u0026#34;never\u0026#34; deprecation=\u0026#34;on\u0026#34;\u0026gt; \u0026lt;include name=\u0026#34;**/*.scala\u0026#34; /\u0026gt; \u0026lt;/scalac\u0026gt; \u0026lt;/target\u0026gt; \u0026lt;!-- run unit tests --\u0026gt; \u0026lt;target name=\u0026#34;test\u0026#34; depends=\u0026#34;buildtest\u0026#34; description=\u0026#34;test\u0026#34;\u0026gt; \u0026lt;scalatest runpath=\u0026#34;${build.dir}/test\u0026#34;\u0026gt; \u0026lt;reporter type=\u0026#34;stdout\u0026#34; config=\u0026#34;FWD\u0026#34;/\u0026gt; \u0026lt;reporter type=\u0026#34;file\u0026#34; filename=\u0026#34;${build.dir}/TestOutput.out\u0026#34; /\u0026gt; \u0026lt;membersonly package=\u0026#34;suite\u0026#34; /\u0026gt; \u0026lt;/scalatest\u0026gt; \u0026lt;/target\u0026gt; \u0026lt;!-- create a startable *.jar with proper classpath dependency definition --\u0026gt; \u0026lt;target name=\u0026#34;jar\u0026#34; depends=\u0026#34;build\u0026#34; description=\u0026#34;jar\u0026#34;\u0026gt; \u0026lt;mkdir dir=\u0026#34;${build.dir}/jar\u0026#34; /\u0026gt; \u0026lt;copy file=\u0026#34;${scala-library.jar}\u0026#34; todir=\u0026#34;${project.dir}/lib\u0026#34; /\u0026gt; \u0026lt;path id=\u0026#34;jar.class.path\u0026#34;\u0026gt; \u0026lt;fileset dir=\u0026#34;${project.dir}\u0026#34;\u0026gt; \u0026lt;include 
name=\u0026#34;lib/**/*.jar\u0026#34; /\u0026gt; \u0026lt;/fileset\u0026gt; \u0026lt;/path\u0026gt; \u0026lt;pathconvert property=\u0026#34;jar.classpath\u0026#34; pathsep=\u0026#34; \u0026#34; dirsep=\u0026#34;/\u0026#34;\u0026gt; \u0026lt;path refid=\u0026#34;jar.class.path\u0026#34;\u0026gt; \u0026lt;/path\u0026gt; \u0026lt;map from=\u0026#34;${basedir}${file.separator}lib\u0026#34; to=\u0026#34;lib\u0026#34; /\u0026gt; \u0026lt;/pathconvert\u0026gt; \u0026lt;jar destfile=\u0026#34;${build.dir}/jar/${ant.project.name}.jar\u0026#34; basedir=\u0026#34;${build.dir}/classes\u0026#34; duplicate=\u0026#34;preserve\u0026#34;\u0026gt; \u0026lt;manifest\u0026gt; \u0026lt;attribute name=\u0026#34;Main-Class\u0026#34; value=\u0026#34;${main.class}\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;Class-Path\u0026#34; value=\u0026#34;${jar.classpath}\u0026#34; /\u0026gt; \u0026lt;section name=\u0026#34;Program\u0026#34;\u0026gt; \u0026lt;attribute name=\u0026#34;Title\u0026#34; value=\u0026#34;${ant.project.name}\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;Build\u0026#34; value=\u0026#34;${build.number}\u0026#34; /\u0026gt; \u0026lt;attribute name=\u0026#34;Date\u0026#34; value=\u0026#34;${TODAY}\u0026#34; /\u0026gt; \u0026lt;/section\u0026gt; \u0026lt;/manifest\u0026gt; \u0026lt;/jar\u0026gt; \u0026lt;/target\u0026gt; \u0026lt;!-- create API documentation in doc folder --\u0026gt; \u0026lt;target name=\u0026#34;scaladoc\u0026#34; depends=\u0026#34;build\u0026#34; description=\u0026#34;scaladoc\u0026#34;\u0026gt; \u0026lt;mkdir dir=\u0026#34;${project.dir}/doc\u0026#34; /\u0026gt; \u0026lt;scaladoc srcdir=\u0026#34;${source.dir}\u0026#34; destdir=\u0026#34;${project.dir}/doc\u0026#34; classpathref=\u0026#34;build.classpath\u0026#34; doctitle=\u0026#34;${ant.project.name}\u0026#34; windowtitle=\u0026#34;${ant.project.name}\u0026#34; /\u0026gt; \u0026lt;/target\u0026gt; \u0026lt;!-- create a zip file with binaries for distribution --\u0026gt; \u0026lt;target 
name=\u0026#34;package\u0026#34; depends=\u0026#34;jar, scaladoc\u0026#34; description=\u0026#34;package\u0026#34;\u0026gt; \u0026lt;zip destfile=\u0026#34;${build.dir}/${ant.project.name}.zip\u0026#34;\u0026gt; \u0026lt;zipfileset dir=\u0026#34;${build.dir}/jar\u0026#34; includes=\u0026#34;${ant.project.name}.jar\u0026#34; /\u0026gt; \u0026lt;zipfileset dir=\u0026#34;${project.dir}\u0026#34; includes=\u0026#34;lib/*\u0026#34; /\u0026gt; \u0026lt;zipfileset dir=\u0026#34;${project.dir}\u0026#34; includes=\u0026#34;doc/*\u0026#34; /\u0026gt; \u0026lt;zipfileset dir=\u0026#34;${project.dir}/txt\u0026#34; includes=\u0026#34;*\u0026#34; /\u0026gt; \u0026lt;/zip\u0026gt; \u0026lt;/target\u0026gt; \u0026lt;/project\u0026gt; Integration with Maven A Scala Maven project can simply be created using the ‘org.scala-tools.archetypes’ archetypes. The pom.xml file below builds a basic Scala Maven project. For Maven to compile and understand the Scala source files, we should follow this folder structure\nproject/ pom.xml - Defines the project src/ main/ java/ - Contains all Java code that will go in your final artifact. scala/ - Contains all Scala code that will go in your final artifact. resources/ - Contains all static files that should be available on the classpath in the final artifact. webapp/ - Contains all content for a web application (jsps, css, images, etc.) test/ java/ - Contains all Java code used for testing. scala/ - Contains all Scala code used for testing. resources/ - Contains all static content that should be available on the classpath during testing. 
pom.xml\n\u0026lt;project xmlns=\u0026#34;http://maven.apache.org/POM/4.0.0\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\u0026#34;\u0026gt; \u0026lt;modelVersion\u0026gt;4.0.0\u0026lt;/modelVersion\u0026gt; \u0026lt;groupId\u0026gt;com.tk.scala\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;scalamavenpoc\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0\u0026lt;/version\u0026gt; \u0026lt;name\u0026gt;${project.artifactId}\u0026lt;/name\u0026gt; \u0026lt;description\u0026gt;scala maven poc application\u0026lt;/description\u0026gt; \u0026lt;inceptionYear\u0026gt;2010\u0026lt;/inceptionYear\u0026gt; \u0026lt;properties\u0026gt; \u0026lt;encoding\u0026gt;UTF-8\u0026lt;/encoding\u0026gt; \u0026lt;scala.version\u0026gt;2.9.1\u0026lt;/scala.version\u0026gt; \u0026lt;/properties\u0026gt; \u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.scala-lang\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;scala-library\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;${scala.version}\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;!-- Test --\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;junit\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;junit\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;4.8.1\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.scala-tools.testing\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;specs_2.9.1\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.6.9\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.scalatest\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;scalatest\u0026lt;/artifactId\u0026gt; 
\u0026lt;version\u0026gt;1.2\u0026lt;/version\u0026gt; \u0026lt;scope\u0026gt;test\u0026lt;/scope\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;mysql\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;mysql-connector-java\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;5.1.12\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;ch.qos.logback\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;logback-core\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;0.9.30\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;ch.qos.logback\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;logback-classic\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;0.9.30\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.slf4j\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;slf4j-api\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.6.2\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; \u0026lt;build\u0026gt; \u0026lt;sourceDirectory\u0026gt;src/main/scala\u0026lt;/sourceDirectory\u0026gt; \u0026lt;testSourceDirectory\u0026gt;src/test/scala\u0026lt;/testSourceDirectory\u0026gt; \u0026lt;plugins\u0026gt; \u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;org.scala-tools\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;maven-scala-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.15.0\u0026lt;/version\u0026gt; \u0026lt;executions\u0026gt; \u0026lt;execution\u0026gt; \u0026lt;goals\u0026gt; \u0026lt;goal\u0026gt;compile\u0026lt;/goal\u0026gt; \u0026lt;goal\u0026gt;testCompile\u0026lt;/goal\u0026gt; \u0026lt;/goals\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;args\u0026gt; \u0026lt;arg\u0026gt;-make:transitive\u0026lt;/arg\u0026gt; 
\u0026lt;arg\u0026gt;-dependencyfile\u0026lt;/arg\u0026gt; \u0026lt;arg\u0026gt;${project.build.directory}/.scala_dependencies\u0026lt;/arg\u0026gt; \u0026lt;/args\u0026gt; \u0026lt;/configuration\u0026gt; \u0026lt;/execution\u0026gt; \u0026lt;/executions\u0026gt; \u0026lt;/plugin\u0026gt; \u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;org.apache.maven.plugins\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;maven-surefire-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.6\u0026lt;/version\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;useFile\u0026gt;false\u0026lt;/useFile\u0026gt; \u0026lt;disableXmlReport\u0026gt;true\u0026lt;/disableXmlReport\u0026gt; \u0026lt;includes\u0026gt; \u0026lt;include\u0026gt;**/*Test.*\u0026lt;/include\u0026gt; \u0026lt;include\u0026gt;**/*Suite.*\u0026lt;/include\u0026gt; \u0026lt;/includes\u0026gt; \u0026lt;/configuration\u0026gt; \u0026lt;/plugin\u0026gt; \u0026lt;/plugins\u0026gt; \u0026lt;/build\u0026gt; \u0026lt;/project\u0026gt; Building Scala Web Application This POC shows a simple web servlet using scala.\nScala web servlet example Creating a web project that compiles your Scala code requires a little tweaking.\n First, you should create a simple Dynamic Web Project. Then, open the .project file.\n Replace the org.eclipse.jdt.core.javabuilder with org.scala-ide.sdt.core.scalabuilder.\n Now add the Scala nature to your project:\n \u0026lt;nature\u0026gt;org.scala-ide.sdt.core.scalanature\u0026lt;/nature\u0026gt;. 
\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;projectDescription\u0026gt; \u0026lt;name\u0026gt;ScalaWebApplication\u0026lt;/name\u0026gt; \u0026lt;comment\u0026gt;\u0026lt;/comment\u0026gt; \u0026lt;projects\u0026gt; \u0026lt;/projects\u0026gt;\t\u0026lt;buildSpec\u0026gt; \u0026lt;buildCommand\u0026gt; \u0026lt;name\u0026gt;org.eclipse.wst.jsdt.core.javascriptValidator\u0026lt;/name\u0026gt; \u0026lt;arguments\u0026gt; \u0026lt;/arguments\u0026gt; \u0026lt;/buildCommand\u0026gt; \u0026lt;buildCommand\u0026gt; \u0026lt;name\u0026gt;org.scala-ide.sdt.core.scalabuilder\u0026lt;/name\u0026gt; \u0026lt;arguments\u0026gt; \u0026lt;/arguments\u0026gt; \u0026lt;/buildCommand\u0026gt; \u0026lt;buildCommand\u0026gt; \u0026lt;name\u0026gt;org.eclipse.wst.common.project.facet.core.builder\u0026lt;/name\u0026gt; \u0026lt;arguments\u0026gt; \u0026lt;/arguments\u0026gt; \u0026lt;/buildCommand\u0026gt; \u0026lt;buildCommand\u0026gt; \u0026lt;name\u0026gt;org.eclipse.wst.validation.validationbuilder\u0026lt;/name\u0026gt; \u0026lt;arguments\u0026gt; \u0026lt;/arguments\u0026gt; \u0026lt;/buildCommand\u0026gt; \u0026lt;/buildSpec\u0026gt; \u0026lt;natures\u0026gt; \u0026lt;nature\u0026gt;org.scala-ide.sdt.core.scalanature\u0026lt;/nature\u0026gt; \u0026lt;nature\u0026gt;org.eclipse.jem.workbench.JavaEMFNature\u0026lt;/nature\u0026gt; \u0026lt;nature\u0026gt;org.eclipse.wst.common.modulecore.ModuleCoreNature\u0026lt;/nature\u0026gt; \u0026lt;nature\u0026gt;org.eclipse.wst.common.project.facet.core.nature\u0026lt;/nature\u0026gt; \u0026lt;nature\u0026gt;org.eclipse.jdt.core.javanature\u0026lt;/nature\u0026gt; \u0026lt;nature\u0026gt;org.eclipse.wst.jsdt.core.jsNature\u0026lt;/nature\u0026gt; \u0026lt;/natures\u0026gt; \u0026lt;/projectDescription\u0026gt; If you made the right change, you should see a S instead of a J on your project’s icon. The last thing to do should be to add the Scala library to your path. 
Depending on your application server configuration, either add the Scala library to your build path (Build Path -\u0026gt; Configure Build Path -\u0026gt; Add Library -\u0026gt; Scala Library) or manually add the needed library to your WEB-INF/lib folder. Web Servlet Program\npackage com.tk.servlet.scala import javax.servlet.http.{ HttpServlet, HttpServletRequest, HttpServletResponse } class ScalaServlet extends HttpServlet { /* (non-Javadoc) * @see javax.servlet.http.HttpServlet#doGet(javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse) */ override def doGet(req: HttpServletRequest, resp: HttpServletResponse) = { resp.getWriter().print(\u0026#34;\u0026lt;HTML\u0026gt;\u0026#34; + \u0026#34;\u0026lt;HEAD\u0026gt;\u0026lt;TITLE\u0026gt;Hello, Scala!\u0026lt;/TITLE\u0026gt;\u0026lt;/HEAD\u0026gt;\u0026#34; + \u0026#34;\u0026lt;BODY\u0026gt;Hello, Scala! This is a servlet.\u0026lt;/BODY\u0026gt;\u0026#34; + \u0026#34;\u0026lt;/HTML\u0026gt;\u0026#34;) } /* (non-Javadoc) * @see javax.servlet.http.HttpServlet#doPost(javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse) */ override def doPost(req: HttpServletRequest, resp: HttpServletResponse) = { } } Web.xml\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;web-app xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xmlns=\u0026#34;http://java.sun.com/xml/ns/javaee\u0026#34; xmlns:web=\u0026#34;http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd\u0026#34; xsi:schemaLocation=\u0026#34;http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd\u0026#34; id=\u0026#34;WebApp_ID\u0026#34; version=\u0026#34;3.0\u0026#34;\u0026gt; \u0026lt;display-name\u0026gt;ScalaWebApplication\u0026lt;/display-name\u0026gt; \u0026lt;welcome-file-list\u0026gt; \u0026lt;welcome-file\u0026gt;index.html\u0026lt;/welcome-file\u0026gt; 
\u0026lt;welcome-file\u0026gt;index.htm\u0026lt;/welcome-file\u0026gt; \u0026lt;welcome-file\u0026gt;index.jsp\u0026lt;/welcome-file\u0026gt; \u0026lt;welcome-file\u0026gt;default.html\u0026lt;/welcome-file\u0026gt; \u0026lt;welcome-file\u0026gt;default.htm\u0026lt;/welcome-file\u0026gt; \u0026lt;welcome-file\u0026gt;default.jsp\u0026lt;/welcome-file\u0026gt; \u0026lt;/welcome-file-list\u0026gt; \u0026lt;servlet\u0026gt; \u0026lt;servlet-name\u0026gt;helloWorld\u0026lt;/servlet-name\u0026gt; \u0026lt;servlet-class\u0026gt;com.tk.servlet.scala.ScalaServlet\u0026lt;/servlet-class\u0026gt; \u0026lt;/servlet\u0026gt; \u0026lt;servlet-mapping\u0026gt; \u0026lt;servlet-name\u0026gt;helloWorld\u0026lt;/servlet-name\u0026gt; \u0026lt;url-pattern\u0026gt;/sayHello\u0026lt;/url-pattern\u0026gt; \u0026lt;/servlet-mapping\u0026gt; \u0026lt;/web-app\u0026gt; Scala web framework - Lift Lift is a free web application framework that is designed for the Scala programming language. As Scala program code executes within the Java virtual machine (JVM), any existing Java library and web container can be used in running Lift applications. Lift web applications are thus packaged as WAR files and deployed on any servlet 2.4 engine (for example, Tomcat 5.5.xx, Jetty 6.0, etc.). Lift programmers may use the standard Scala/Java development toolchain including IDEs such as Eclipse, NetBeans and IDEA. Dynamic web content is authored via templates using standard HTML5 or XHTML editors. Lift applications also benefit from native support for advanced web development techniques such as Comet and Ajax. Lift does not prescribe the model–view–controller (MVC) architectural pattern. 
Rather, Lift is chiefly modeled upon the so-called \u0026ldquo;View First\u0026rdquo; (designer friendly) approach to web page development inspired by the Wicket framework.\nLift Hello world application Follow the steps below to create a simple Lift web project\n Create a Maven project with the archetype ‘lift-archetype-blank_{version}’\n Enter GroupId, ArtifactId, version and package, just like a normal Maven project\n This will create a project structure with Scala, Lift and Logback configured in it.\n Project Structure Details\n src/main - your application source src/main/webapp - contains files to be copied directly into your web application, such as html templates and the web.xml src/test - your test code target - where your compiled application will be created and packaged Boot.scala - defines the global setup for the application. The main things it does are: Defines the packages for Lift code Creates the sitemap – a menu and security access for the site Comet - Comet describes a model where the client sends an asynchronous request to the server. The request hangs until the server has something interesting to respond with. As soon as the server responds, another request is made. The idea is to give the impression that the server is notifying the client of changes on the server. For more details, refer to the Wikipedia article. Snippet - in general, a programming term for a small region of re-usable source code, machine code, or text. Lift uses snippets to transform sections of the HTML page from static to dynamic. The key word is transform. Lift’s snippets are Scala functions: NodeSeq =\u0026gt; NodeSeq. A NodeSeq is a collection of XML nodes. A snippet can only transform an input NodeSeq to an output NodeSeq. 
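To make the "snippet is just a function" idea concrete outside of Lift itself, here is a minimal sketch of such a transform function. The String type alias standing in for scala.xml.NodeSeq and the SnippetSketch/howdy names are illustrative assumptions, not Lift API:

```scala
object SnippetSketch {
  // In Lift, a snippet is a function from NodeSeq to NodeSeq.
  // To keep this sketch dependency-free, a String stands in for
  // scala.xml.NodeSeq; the transformation idea is the same.
  type Nodes = String

  // Mirrors the generated HelloWorld.howdy snippet: replace a
  // placeholder in the page section with the current date string.
  val howdy: Nodes => Nodes =
    in => in.replace("<b:time/>", new java.util.Date().toString)
}
```

The generated HelloWorld snippet shown next performs the same kind of transform over real XML nodes via Helpers.bind.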
Default generated code Explanation - Here is the servlet filter that runs a lift app\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;ISO-8859-1\u0026#34;?\u0026gt; \u0026lt;!DOCTYPE web-app PUBLIC \u0026#34;-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN\u0026#34; \u0026#34;http://java.sun.com/dtd/web-app_2_3.dtd\u0026#34;\u0026gt; \u0026lt;web-app\u0026gt; \u0026lt;filter\u0026gt; \u0026lt;filter-name\u0026gt;LiftFilter\u0026lt;/filter-name\u0026gt; \u0026lt;display-name\u0026gt;Lift Filter\u0026lt;/display-name\u0026gt; \u0026lt;description\u0026gt;The Filter that intercepts lift calls\u0026lt;/description\u0026gt; \u0026lt;filter-class\u0026gt;net.liftweb.http.LiftFilter\u0026lt;/filter-class\u0026gt; \u0026lt;/filter\u0026gt; \u0026lt;filter-mapping\u0026gt; \u0026lt;filter-name\u0026gt;LiftFilter\u0026lt;/filter-name\u0026gt; \u0026lt;url-pattern\u0026gt;/*\u0026lt;/url-pattern\u0026gt; \u0026lt;/filter-mapping\u0026gt;   Default Hello world snippet - HelloWorld.scala\npackage com.tk.lift.poc { /** * declaring package for snippet classes */ package snippet { import _root_.scala.xml.NodeSeq import _root_.net.liftweb.util.Helpers import Helpers._ /** * Defining snippet class HelloWorld */ class HelloWorld { /** * defining snippet with name \u0026#39;howdy\u0026#39; */ def howdy(in: NodeSeq): NodeSeq = /** * This snippet will replace \u0026#39;\u0026lt;b:time/\u0026gt; in html file * with current date string\u0026#39; */ Helpers.bind(\u0026#34;b\u0026#34;, in, \u0026#34;time\u0026#34; -\u0026gt; (new _root_.java.util.Date).toString) } } } Default html page - index.html\n\u0026lt;lift:surround with=\u0026#34;default\u0026#34; at=\u0026#34;content\u0026#34;\u0026gt; \u0026lt;h2\u0026gt;Welcome to your project!\u0026lt;/h2\u0026gt; \u0026lt;p\u0026gt; \u0026lt;lift:helloWorld.howdy\u0026gt; \u0026lt;span\u0026gt;Welcome to liftblank at \u0026lt;b:time/\u0026gt;\u0026lt;/span\u0026gt; \u0026lt;/lift:helloWorld.howdy\u0026gt; 
\u0026lt;/p\u0026gt; \u0026lt;/lift:surround\u0026gt; Default Boot.scala\n package bootstrap.liftweb import _root_.net.liftweb.common._ import _root_.net.liftweb.util._ import _root_.net.liftweb.http._ import _root_.net.liftweb.sitemap._ import _root_.net.liftweb.sitemap.Loc._ import Helpers._ /** * A class that\u0026#39;s instantiated early and run. It allows the application * to modify lift\u0026#39;s environment */ class Boot { def boot { // where to search snippets LiftRules.addToPackages(\u0026#34;com.tk.lift.poc\u0026#34;) // This piece of code builds the SiteMap; on the index.html page // we will see a listing with the \u0026#39;home\u0026#39; attribute there val entries = Menu(Loc(\u0026#34;Home\u0026#34;, List(\u0026#34;index\u0026#34;), \u0026#34;Home\u0026#34;)) :: Nil LiftRules.setSiteMap(SiteMap(entries: _*)) } } Running default web application To run this webapp, build the project with the Maven clean install goal. This will generate a war file in the target folder. Deploy the war file on any server such as Tomcat. Scala/Lift vs Java/Spring Let\u0026rsquo;s assume we\u0026rsquo;re equally comfortable in Scala and Java, and ignore the (huge) language differences except as they pertain to Spring or Lift. (Probably a big assumption for Java refugees)\nSpring and Lift are almost diametrically opposed in terms of maturity and goals.\n Spring is about five years older than Lift Lift is monolithic and targets only the web; Spring is modular and targets both web and \u0026ldquo;regular\u0026rdquo; apps Spring supports a plethora of Java EE features; Lift ignores that stuff Scala MVC framework - Play Play is an open source web application framework, written in Scala and Java, which follows the model–view–controller (MVC) architectural pattern. It aims to optimize developer productivity by using convention over configuration, hot code reloading and display of errors in the browser. Support for the Scala programming language has been available since version 1.1 of the framework. 
In version 2.0, the framework core was rewritten in Scala. Build and deployment were migrated to Simple Build Tool, and templates use Scala instead of Groovy.\nPlay - Hello World Install the Play framework. To create a new hello world application, open a command prompt and type the following command: play new helloworld --with scala It will prompt you for the application\u0026rsquo;s full name. Type Hello world. The play new command creates a new directory helloworld/ and populates it with a series of files and directories, the most important being: Play Project Structure app/ contains the application’s core. It contains all the application *.scala files. You see that the default is a single controllers.scala file. You can of course use more files and organize them in several folders for more complex applications. Also the app/views directory is where template files live. A template file is generally a simple text, xml or html file with dynamic placeholders. conf/ contains all the application’s configuration files, especially the main application.conf file, the routes definition files and the messages files used for internationalization. lib/ contains all optional Scala libraries packaged as standard .jar files. public/ contains all the publicly available resources, which includes JavaScript, stylesheets and images directories. test/ contains all the application tests. Tests are written either as ScalaTest tests or as Selenium tests. As a Scala developer, you may wonder where all the .class files go. The answer is nowhere: Play doesn’t use any class files but reads the Scala source files directly.\n The main entry point of your application is the conf/routes file. This file defines all of the application’s accessible URLs. If you open the generated routes file you will see this first ‘route’:\nGET\t/\tApplication.index That simply tells Play that when the web server receives a GET request for the / path, it must call the Application.index Scala method. 
In this case, Application.index is a shortcut for controllers.Application.index, because the controller’s package is implicit.\n A Play application has several entry points, one for each URL. We call these methods action methods. Action methods are defined in several objects that we call controllers. Open the helloworld/app/controllers.scala source file:\npackage controllers import play._ import play.mvc._ object Application extends Controller { import views.Application._ def index = { html.index(\u0026#34;Your new Scala application is ready!\u0026#34;) } } The default index action is simple: it invokes the views.Application.html.index template and returns the generated HTML. Using a template is the most common way (but not the only one) to generate the HTTP response. Templates are special Scala source files that live in the /app/views directory.\n Open the helloworld/app/views/Application/index.scala.html file:\n@(title:String) @main(title) { @views.defaults.html.welcome(title) } The first line defines the template parameters: a template is a function that generates a play.template.Html return value. 
In this case, the template has a single title parameter of type String, so the type of this template is (String) =\u0026gt; Html.\nThis template then calls another template called main, which takes two parameters: a String and an Html block.\n Open the helloworld/app/views/main.scala.html file that defines the main template:\n @(title:String = \u0026#34;\u0026#34;)(body: =\u0026gt; Html) \u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;title\u0026gt;@title\u0026lt;/title\u0026gt; \u0026lt;link rel=\u0026#34;stylesheet\u0026#34; href=\u0026#34;@asset(\u0026#34;public/stylesheets/main.css\u0026#34;)\u0026#34;\u0026gt; \u0026lt;link rel=\u0026#34;shortcut icon\u0026#34; href=\u0026#34;@asset(\u0026#34;public/images/favicon.png\u0026#34;)\u0026#34;\u0026gt; \u0026lt;script src=\u0026#34;@asset(\u0026#34;public/javascripts/jquery-1.5.2.min.js\u0026#34;)\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; @body \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; Lift vs. Play Architecture: Play is based on the MVC architecture, while Lift basically follows a view-first design approach. View First: View first simply means the view is what drives the creation or discovery of the view model. In view first scenarios, the view typically binds to the view model as a resource, using a locator pattern. Productivity: Play is better than Lift because of its gentler learning curve. Security: Lift is better than any other framework. Templates: Lift is better because you can really split up front-end dev work and back-end work. Refactoring / maintainability: Lift is better: an MVC isn\u0026rsquo;t as good as a view-first design for that. Scalability: Play is slightly better and can be hosted on most cloud PaaS. Lift scales very well but you will need sticky sessions. Documentation: Play\u0026rsquo;s documentation is better. 
Activity: Play is more active because it\u0026rsquo;s more attractive for beginners. Stateless/Stateful: Lift is very clear about where you need state and where you don\u0026rsquo;t. Lift can support stateless, partially stateful, and completely stateful apps. On a page-by-page and request-by-request basis, a Lift app can be stateful or stateless. Build environment: Lift uses Maven, sbt, Buildr, and even Ant. Lift is agnostic about the build environment and about the deploy environment (JEE container, Netty, whatever). This is important because it makes Lift easier to integrate with the rest of your environment. Programming language: Play supports Java as well as Scala, but Lift only supports Scala. Template language: Play uses Groovy, Japid, or the Scala template engine as its template language, while Lift uses HTML5 as its template language. Conclusion Scala is designed to express common programming patterns in a concise, elegant, and type-safe way.\nScala is often compared with Groovy and Clojure, two other programming languages also built on top of the JVM. Among the main differences are:\nCompared with Clojure Scala is statically typed, while both Groovy and Clojure are dynamically typed. This adds complexity in the type system but allows many errors to be caught at compile time that would otherwise only manifest at runtime, and tends to result in significantly faster execution.\nCompared with Groovy Compared with Groovy, Scala has more changes in its fundamental design. 
The primary purpose of Groovy was to make Java programming easier and less verbose, while Scala (in addition to having the same goal) was designed from the ground up to allow for a functional programming style in addition to object-oriented programming, and introduces a number of more \u0026ldquo;cutting-edge\u0026rdquo; features from functional programming languages like Haskell that are not often seen in mainstream languages.\nReferences : Official Website http://www.scala-lang.org/ http://www.scala-lang.org/node/25 http://www.codecommit.com/blog/scala/roundup-scala-for-java-refugees http://www.scala-blogs.org/2007/12/dynamic-web-applications-with-lift-and.html http://stackoverflow.com/questions/7468050/how-to-create-a-simple-web-form-in-scala-lift Snippet - http://simply.liftweb.net/index-3.4.html Comet - https://www.assembla.com/spaces/liftweb/wiki/Comet_Support Play - http://www.playframework.com/ This article is also supported by my friend Dhruv Bansal\n","id":67,"tag":"scala ; java ; introduction ; language ; play ; lift ; spring ; comparison ; technology","title":"Scala - Getting Started - Part 2","type":"post","url":"https://thinkuldeep.com/post/scala-getting-started-part2/"},{"content":"This article helps you get started with Scala, with a step-by-step guide and a series of examples. It starts with an overview and then covers detailed examples. In later articles, I will write about feature comparisons with other languages. This article is helpful for people coming from a Java background, though that is not a prerequisite.\nScala Overview Scala is a general-purpose programming language designed to express common programming patterns in a concise, elegant, and type-safe way. 
It smoothly integrates features of object-oriented and functional languages, enabling Java and other programmers to be more productive.\nFeatures While simplicity and ease of use are the leading principles of Scala, here are a few nice features of Scala:\n Code sizes are typically reduced by a factor of two to three when compared to an equivalent Java application.\n Compared to Java, Scala boosts development productivity, application scalability and overall reliability.\n Seamless integration with Java: Existing Java code and programmer skills are fully re-usable. Scala programs run on the Java VM and are bytecode compatible with Java, so you can make full use of existing Java libraries or existing application code. You can call Scala from Java and you can call Java from Scala; the integration is seamless.\n Scala Compiler Performance: The Scala compiler is mature and proven highly reliable by years of use in production environments. The compiler was written by Martin Odersky, who also wrote the Java reference compiler and co-authored Java generics. The Scala compiler produces bytecode that performs every bit as well as comparable Java code.\n Scala is object-oriented: Scala is a pure object-oriented language in the sense that every value is an object. Types and behavior of objects are described by classes and traits. Classes are extended by subclassing and a flexible mixin-based composition mechanism as a clean replacement for multiple inheritance.\nTraits are Scala\u0026rsquo;s replacement for Java\u0026rsquo;s interfaces. Interfaces in Java are highly restricted, able only to contain abstract function declarations. This has led to criticism that providing convenience methods in interfaces is awkward (the same methods must be re-implemented in every implementation), and extending a published interface in a backwards-compatible way is impossible. 
Traits are similar to mixin classes in that they have nearly all the power of a regular abstract class, lacking only class parameters (Scala\u0026rsquo;s equivalent to Java\u0026rsquo;s constructor parameters), since traits are always mixed in with a class.\n Scala is functional: Scala is also a functional language in the sense that every function is a value.\n Scala provides a lightweight syntax for defining anonymous functions. It supports higher-order functions: these are functions that take other functions as parameters, or whose result is a function. It allows functions to be nested. It supports currying: methods may define multiple parameter lists. When a method is called with fewer parameter lists than it defines, this yields a function taking the missing parameter lists as its arguments. Scala interoperates with Java and .NET: Scala is designed to interoperate well with the popular Java 2 Runtime Environment (JRE). In particular, the interaction with the mainstream object-oriented Java programming language is as smooth as possible. Scala has the same compilation model (separate compilation, dynamic class loading) as Java and allows access to thousands of existing high-quality libraries. Support for the .NET Framework (CLR) is also available. Working with Scala Let’s start with a few Scala examples. Here is what you need:\n Eclipse 3.7 or Eclipse 4.2 Java 1.6 or above. Scala Binary - 2.10.1/(2.9.2 – for ant integration example only) Hello World (Main method) Let’s understand a basic Scala program. Things to keep in mind:\n A Scala source file ends with the extension .scala. Scala methods are public by default, which means there’s no public method modifier (private and protected are both defined). Scala doesn’t really have statics. It has a special syntax which allows you to easily define and use singleton classes. The object keyword creates a singleton object of a class.
For every object in the code, an anonymous class is created, which inherits from whatever classes you declared. The rest is all Java :)\npackage com.tk.scala /** * There is nothing called \u0026#39;static\u0026#39; in Scala. This way we are creating a singleton object */ object MainClass { /** * args is a method parameter of type Array[String]. * That is to say, args is a string array. * * \u0026#39;:Unit\u0026#39; defines the return type of the main function * Unit is like void but it’s more than that. */ def main(args:Array[String]):Unit = { //declaration of String variable firstName var firstName: String = \u0026#34;Kuldeep\u0026#34;; // similar declaration as above String. However, we don’t explicitly // specify any type. // This is because we’re taking advantage of Scala’s type inference mechanism var lastName = \u0026#34;Singh\u0026#34;; // val represents the declaration of constant // Think of it like a shorter form of Java’s final modifier val greeting:String = \u0026#34;Hello\u0026#34; println(greeting + \u0026#34; \u0026#34; + firstName + \u0026#34; \u0026#34; + lastName) } } Hello World (with Application class) The program below shows the use of the Application class (trait) in Scala. Instead of declaring a main function as in the above example, we can take advantage of the ‘Application’ class.\n The Application trait can be used to quickly turn objects into executable programs. The drawback of using the Application class to make a program executable is that there is no way to obtain the command-line arguments, because all code in the body of an object extending Application is run as part of the static initialization which occurs before Application\u0026rsquo;s main method even begins execution.
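The static-initialization point can be pictured with a plain Java analogy (a hypothetical sketch, not from the article): code placed in a static initializer runs before main() is entered, so command-line arguments are not visible to it, just like the body of a Scala object extending Application.

```java
// Hypothetical Java analogy: a static initializer runs before main(),
// just as the body of a Scala object extending Application does,
// so command-line arguments are not visible to it.
public class StaticInitDemo {
    static String argsStatus = "not available yet";

    static {
        // Executed during class initialization, before main() begins.
        System.out.println("static init: args are " + argsStatus);
    }

    public static void main(String[] args) {
        System.out.println("main: received " + args.length + " args");
    }
}
```

This is exactly why the explicit main variant above is preferred when you need the arguments.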
package com.tk.scala object MainClass extends Application { //declaration of String variable firstName var firstName: String = \u0026#34;Kuldeep\u0026#34;; var lastName = \u0026#34;Singh\u0026#34;; val greeting:String = \u0026#34;Hello\u0026#34; println(greeting + \u0026#34; \u0026#34; + firstName + \u0026#34; \u0026#34; + lastName) }  Byte code generation A .scala file is converted into a .class file, as the JVM only understands bytecode. This is how the ‘.class’ file looks when decompiled with a Java decompiler. We can see here the singleton class MainClass$ as mentioned above; the ‘object’ keyword in Scala is equivalent to a singleton class in Java.\npackage com.tk.scala; import scala.Predef.; import scala.collection.mutable.StringBuilder; // Singleton class public final class MainClass$ { public static final MODULE$; static { new (); } public void main(String[] args) { String firstName = \u0026#34;Kuldeep\u0026#34;; String lastName = \u0026#34;Singh\u0026#34;; String greeting = \u0026#34;Hello\u0026#34;; Predef..MODULE$.println(new StringBuilder().append(greeting).append(\u0026#34; \u0026#34;).append(firstName).append(\u0026#34; \u0026#34;).append(lastName).toString()); } // private constructor of singleton class private MainClass$() { MODULE$ = this; } }  Encapsulation in Scala This example demonstrates the encapsulation concept using a Scala program\n Declaration of a private member in Scala and definition of its getter and setter package com.tk.scala class Product { // declaration of private Int variable price private var _price: Int = 0 // definition of getter method for price // method name price returning Int variable price def price = _price // above declaration is same as // def price:Int = {_price}; // definition of setter method // here underscore is acting like a reserved keyword def price_=(value: Int): Unit = _price = value } /** * Main object */ object AppMainClass { def main(args: Array[String]) = { var product = new Product(); // calling setter to set value product.price = 
12; print(\u0026#34;Price of product is: \u0026#34;) println(product.price) } } Byte code This is how .class looks when decompiled with java decompiler. public class Product { private int _price = 0; private int _price() { return this._price; } private void _price_$eq(int x$1) { this._price = x$1; } public int price() { return _price(); } public void price_$eq(int value) { _price_$eq(value); } } public final class AppMainFx$ { public static final MODULE$; static { new (); } public void main(String[] args) { Product product = new Product(); product.price_$eq(12); Predef..MODULE$.print(\u0026#34;Price of product is: \u0026#34;); Predef..MODULE$.println(BoxesRunTime.boxToInteger(product.price())); } private AppMainFx$() { MODULE$ = this; } }  General OOPs in Scala This example demonstrate simple OOPS concept using Scala program\n Scala doesn’t force you to define all public classes individually in files of the same name. Scala actually lets you define as many classes as you want per file (think C++ or Ruby). Demonstration of inheritance using scala. Constructor’s declarations. Overriding the methods. package com.tk.scala /** * Abstract Class declaration */ abstract class Person { // constant declaration of type string using keyword ‘val’ val SPACE: String = \u0026#34; \u0026#34; /** * These are a public variable (Scala elements are public by default) * * Scala supports encapsulation as well, but its syntax is considerably * less verbose than Java’s. Effectively, all public variables become instance * properties. */ var firstName: String = null var lastName: String = null // abstract method def getDetails(): Unit } /** * Concrete Employee class extending person class * * Constructor: Employee class with parameterized constructor with two * parameters firstName and lastName inherited from Person class. * The constructor isn’t a special method syntactically like in java, * it is the body of the class itself. 
*/ class Employee(firstName: String, lastName: String) extends Person { val profession:String = \u0026#34;Employee\u0026#34;; // implementation of abstract methods def getDetails() = { println(\u0026#34;Personal Details: \u0026#34; + firstName + SPACE + lastName) } /** * Override is actually a keyword. * It is a mandatory method modifier for any method with a signature which * conflicts with another method in a superclass. * Thus overriding in Scala isn’t implicit (as in Java, Ruby, C++, etc), but explicitly declared. */ override def toString = \u0026#34;[\u0026#34; + profession +\u0026#34;: firstName=\u0026#34; + firstName + \u0026#34; lastName=\u0026#34; + lastName + \u0026#34;]\u0026#34; } /** * Concrete Teacher class extending person class */ class Teacher(firstName: String, lastName: String) extends Person { val profession:String = \u0026#34;Teacher\u0026#34;; def getDetails() = { println(\u0026#34;Personal Details: \u0026#34; + firstName + SPACE + lastName) } override def toString = \u0026#34;[\u0026#34; + profession +\u0026#34;: firstName=\u0026#34; + firstName + \u0026#34; lastName=\u0026#34; + lastName + \u0026#34;]\u0026#34; } object App { def main(args: Array[String]) = { var emp = new Employee(\u0026#34;Kuldeep\u0026#34;, \u0026#34;Singh\u0026#34;); println(emp); var teacher = new Teacher(\u0026#34;Kuldeep\u0026#34;, \u0026#34;Thinker\u0026#34;) println(teacher) } } Collections in Scala This example demonstrates some Collections in scala.\n Scala Lists are quite similar to arrays which means, all the elements of a list have the same type but there are two important differences. Lists are immutable which means, elements of a list cannot be changed by assignment. Lists represent a linked list whereas arrays are flat. Scala Sets: There are two kinds of Sets, the immutable and the mutable. The difference between mutable and immutable objects is that when an object is immutable, the object itself can\u0026rsquo;t be changed. 
By default, Scala uses the immutable Set. If you want to use the mutable Set, you\u0026rsquo;ll have to import scala.collection.mutable.Set class explicitly. Scala Maps: Similar to sets there are two kinds of maps immutable and mutable. Scala Tuples: Scala tuple combines a fixed number of items together so that they can be passed around as a whole. Unlike an array or list, a tuple can hold objects with different types but they are also immutable. package com.tk.scala /** * Class demonstrating list */ class ScalaList { /** * Creation list of types string */ val fruit1: List[String] = List(\u0026#34;apples\u0026#34;, \u0026#34;oranges\u0026#34;, \u0026#34;pears\u0026#34;) /** * All lists can be defined using two fundamental building blocks, * a tail Nil and :: which is pronounced cons. Nil also represents the empty list. */ val fruit2 = \u0026#34;apples\u0026#34; :: (\u0026#34;oranges\u0026#34; :: (\u0026#34;pears\u0026#34; :: Nil)) val empty = Nil /** * Operations supported on list * As scala list is LinkList, operations as related to general link list operations/ * * Some operations */ def Operations(): Unit = { println(\u0026#34;Head of fruit : \u0026#34; + fruit1.head) println(\u0026#34;Tail of fruit : \u0026#34; + fruit1.tail) println(\u0026#34;Check if fruit is empty : \u0026#34; + fruit1.isEmpty) println(\u0026#34;Check if nums is empty : \u0026#34; + fruit1.isEmpty) println(\u0026#34;Check if nums is empty : \u0026#34; + empty.isEmpty) traverseList } def traverseList() = { val iterator = fruit1.iterator; while (iterator.hasNext) { print(iterator.next()) print(\u0026#34;-\u0026gt;\u0026#34;) } println } } /** * Class demonstrating sets */ class ScalaSet { // Empty set of integer type var emptySet: Set[Int] = Set() // Set of integer type var oddIntegerSet: Set[Int] = Set(1, 3, 5, 7) var evenIntegerSet: Set[Int] = Set(2, 4, 6, 8) def operations(): Unit = { println(\u0026#34;Head of fruit : \u0026#34; + oddIntegerSet.head) println(\u0026#34;Tail of fruit : 
\u0026#34; + oddIntegerSet.tail) println(\u0026#34;Check if fruit is empty : \u0026#34; + oddIntegerSet.isEmpty) println(\u0026#34;Check if nums is empty : \u0026#34; + oddIntegerSet.isEmpty) println(\u0026#34;Check if nums is empty : \u0026#34; + emptySet.isEmpty) //Concatenating Sets var concatinatedList = oddIntegerSet ++ evenIntegerSet println(\u0026#34;Concatinated List : \u0026#34; + concatinatedList) } } /** * Class demonstrating maps */ class ScalaMap { // Empty hash table whose keys are strings and values are integers: /** * Note: While defining empty map, the type annotation is necessary * as the system needs to assign a concrete type to variable. */ var emptyMap: Map[Char, Int] = Map() val colors = Map(\u0026#34;red\u0026#34; -\u0026gt; \u0026#34;#FF0000\u0026#34;, \u0026#34;azure\u0026#34; -\u0026gt; \u0026#34;#F0FFFF\u0026#34;, \u0026#34;peru\u0026#34; -\u0026gt; \u0026#34;#CD853F\u0026#34;) def operations() = { println(\u0026#34;Keys in colors : \u0026#34; + colors.keys) println(\u0026#34;Values in colors : \u0026#34; + colors.values) println(\u0026#34;Check if colors is empty : \u0026#34; + colors.isEmpty) println(\u0026#34;Check if nums is empty : \u0026#34; + emptyMap.isEmpty) tranverseMap } def tranverseMap() { println(\u0026#34;Printing key value pair for map:::\u0026#34;) // using for each loop to traverse map colors.keys.foreach { i =\u0026gt; print(\u0026#34;Key = \u0026#34; + i) println(\u0026#34; Value = \u0026#34; + colors(i)) } } } /** * Class demonstrating tuples */ class scalaTuple { val t = new Tuple3(1, \u0026#34;hello\u0026#34;, Console) //same tuple can be declared as val t1 = (1, \u0026#34;hello\u0026#34;, Console) val intergerTuple = (4, 3, 2, 1) def operations() = { println(\u0026#34;Print sum of elements in integer tuples\u0026#34;) val sum = intergerTuple._1 + intergerTuple._2 + intergerTuple._3 + intergerTuple._4 transverseTuple } def transverseTuple { println(\u0026#34;Printing elements in tuples:::\u0026#34;) 
intergerTuple.productIterator.foreach { i =\u0026gt; println(\u0026#34;Value = \u0026#34; + i) } } } object CollectionMain { def main(args: Array[String]) = { var scalaList = new ScalaList(); println(\u0026#34;Printing operations on list::::\u0026#34;) scalaList.Operations; var scalaSet = new ScalaSet(); println(\u0026#34;\u0026#34;); println(\u0026#34;Printing operations on Set::::\u0026#34;) scalaSet.operations; var scalaMap = new ScalaMap(); println(\u0026#34;\u0026#34;); println(\u0026#34;Printing operations on Map::::\u0026#34;) scalaMap.operations; var scalaTuple = new scalaTuple(); println(\u0026#34;\u0026#34;); println(\u0026#34;Printing operations on Tuple::::\u0026#34;) scalaTuple.operations; } }\tFile operations \u0026amp; Exception handling Exceptions in Scala: Scala doesn\u0026rsquo;t actually have checked exceptions. Scala allows you to try/catch any exception in a single block and then perform pattern matching against it using case blocks as shown below package com.tk.scala import java.io.IOException import java.io.FileNotFoundException import scala.io.Source import scala.reflect.io.Path import java.io.File import java.io.PrintWriter /** * Scala example to show read and write operations on file */ class FileOperations { /** * create and add content to file * * This example depicts how we use java API in scala * We can use any java class/API in scala */ def writeFile(filePath:String, content:String) = { // using java file object to create a file var file:File = new File(filePath); if (file.createNewFile()) { println(\u0026#34;New file created\u0026#34;) } else { println(\u0026#34;File alredy exists\u0026#34;) } // using java print writer object to add content in file var writer:PrintWriter = new PrintWriter(file); if (content != null) writer.write(content) writer.close(); } /** * reads the given file are return the content * of that file */ def readFile(filePath: String): String = { // here is scala io class to readfrom file var lines: String = null; 
// there is no checked exception in scala try { lines = scala.io.Source.fromFile(filePath).mkString // catch block for exception handling } catch { case ex: FileNotFoundException =\u0026gt; { println(\u0026#34;Missing file exception\u0026#34;) } case ex: IOException =\u0026gt; { println(\u0026#34;IO Exception\u0026#34;) } } finally { println(\u0026#34;Exiting finally...\u0026#34;) } // returning content of the file lines } } /** * Main object */ object FileOperationMain extends Application { val filePath = \u0026#34;e:\\\\test.txt\u0026#34; var fileOpts: FileOperations = new FileOperations; // calling api to create/write file fileOpts.writeFile(filePath, \u0026#34;File operation example for scala\u0026#34;); // calling api to read file println(fileOpts.readFile(filePath)); } Trait A trait encapsulates method and field definitions, which can then be reused by mixing them into classes. Unlike class inheritance, in which each class must inherit from just one superclass, a class can mix in any number of traits. A trait definition looks just like a class definition except that it uses the keyword trait, as follows. This also shows how Scala supports mixin-based composition.
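For readers coming from Java, a rough analogue (an assumption for illustration; Java 8 default methods postdate this article) is an interface with default methods, which lets a class mix in concrete behaviour much like a trait, though unlike traits it cannot hold fields or state. A minimal sketch reusing the Swimming/Fly/Pigeon names from the example below:

```java
// Rough Java 8 analogue of mixing in traits: interfaces with default methods.
// (Unlike Scala traits, these interfaces cannot carry state/fields.)
interface Swimming {
    default String swim() { return "I can swim"; }
}

interface Fly {
    default String fly() { return "I can fly"; }
}

// A class can "mix in" any number of such interfaces.
class Pigeon implements Swimming, Fly { }

public class TraitAnalogy {
    public static void main(String[] args) {
        Pigeon pigeon = new Pigeon();
        System.out.println(pigeon.swim()); // prints "I can swim"
        System.out.println(pigeon.fly());  // prints "I can fly"
    }
}
```

The Scala version below needs no such workaround, since traits may also contain abstract members and fields.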
package com.tk.scala /** * defining a trait * Trait can have abstract method as well * as well as concrete method */ trait Swimming { def swim() = println(\u0026#34;I can swim\u0026#34;) } /** * fly trait */ trait Fly { def fly() = println(\u0026#34;I can fly\u0026#34;) } /** * abstract class bird */ abstract class Bird { //abstract method def defineMe: String } /** * class having bird as well as trait properties */ class Penguin extends Bird with Swimming { /** * Short hand syntax of function defination * Following line means function flyMessage is * returing string value \u0026lt;I am good..\u0026gt; */ def defineMe = \u0026#34;I am a Penguin\u0026#34; /** * This piece of code is equivalent to above definition of function */ /*def defineMe() = { var str:String = \u0026#34;I am a Penguin\u0026#34; // returning string value (No return keyword required) str }*/ } /** * class having all the traits */ class Pigeon extends Bird with Swimming with Fly { /** * defining defineMe function */ def defineMe = \u0026#34;I am a Pigeon\u0026#34; } /** * AppMain class with main function */ object AppMain extends Application { // penguin instance have on swim trait var penguin = new Penguin println(penguin.defineMe) penguin.swim // pigeon instance have all traits var pigeon = new Pigeon println(pigeon.defineMe) pigeon.fly pigeon.swim } Scala and Java Interoperability This example shows how we can use scala program with java programs and vice versa.\n As we know the Scala classes also get compiled to .class file and generates the same bytecode as java class. Thus the interoperability is smooth between two languages. Java Class\npackage com.tk.java; import com.tk.scala.ScalaLearner; /** * The Class JavaLearner. 
public class JavaLearner { private String firstName; private String lastName; public JavaLearner(String firstName, String lastName) { this.firstName = firstName; this.lastName = lastName; } public void consoleAppender() { System.out.println(firstName + \u0026#34; \u0026#34; + lastName + \u0026#34; is a java learner\u0026#34;); } public static void main(String[] args) { JavaLearner printJava = new JavaLearner(\u0026#34;Kuldeep\u0026#34;, \u0026#34;Singh\u0026#34;); printJava.consoleAppender(); // create instance of scala class ScalaLearner printScala = new ScalaLearner(\u0026#34;Dhruv\u0026#34;, \u0026#34;Bansal\u0026#34;); printScala.consoleAppender(); } } Scala Class\npackage com.tk.scala class ScalaLearner(firstName: String, lastName: String) { /** * Scala API prints message on console */ def consoleAppender() = println(firstName + \u0026#34; \u0026#34; + lastName + \u0026#34; is a scala learner\u0026#34;) } /** * This object is the Java representation of a singleton object. * This piece of code will generate a class file with name * AppMain, having a singleton class AppMain with a public * static void main method. */ object AppMain { def main(args: Array[String]): Unit = { var scalaLearner = new ScalaLearner(\u0026#34;Kuldeep\u0026#34;, \u0026#34;Singh\u0026#34;); scalaLearner.consoleAppender; // import a class in the scope of another class only import com.tk.java.JavaLearner // creating instance of java class file var javaLearner = new JavaLearner(\u0026#34;Dhruv\u0026#34;, \u0026#34;Bansal\u0026#34;); javaLearner.consoleAppender; } } Performance comparison of Scala with Java For the performance measurement, we ran bubble sort on a reverse-ordered array with equivalent code in a Scala script and a Java class file. All measurements were done on an Intel Core i7-3720QM CPU 2.60 GHz using JDK7u7 running on Windows 7 with Service Pack 1.
I used Eclipse Juno with the Scala plugin and mentioned above.\nJava Bubble Sort Implementation Java Program\npackage com.tk.scala; /** * The Class SortAlgo. */ public class SortAlgo { /** * Implementation of bubble sort algorithm. * * @param inputArray * the input array */ public void bubbleSort(int inputArray[]) { int i, j, t = 0; int arryLength = inputArray.length; for (i = 0; i \u0026lt; arryLength; i++) { for (j = 1; j \u0026lt; (arryLength - i); j++) { if (inputArray[j - 1] \u0026gt; inputArray[j]) { t = inputArray[j - 1]; inputArray[j - 1] = inputArray[j]; inputArray[j] = t; } } } } } package com.tk.scala; /** * The Class AppMain. */ public class AppMain { public static void main(String[] args) { long start = System.currentTimeMillis(); consoleAppender(\u0026#34;Sort algo start time: \u0026#34; + start); Integer arraySize = 10000; consoleAppender(\u0026#34;Population array...\u0026#34;); int[] array = new int[arraySize]; for (int i = 0; i \u0026lt; arraySize; i++) { array[i] = arraySize - i; } consoleAppender(\u0026#34;Sorting array...\u0026#34;); SortAlgo sortAlgo = new SortAlgo(); sortAlgo.bubbleSort(array); long end = System.currentTimeMillis(); consoleAppender(\u0026#34;Sort algo end time: \u0026#34; + end); consoleAppender(\u0026#34;Total time taken: \u0026#34; + (end - start)); } private static void consoleAppender(String strValue) { System.out.println(strValue); } } Scala program\npackage com.tk.scala class SortAlgo { // implementation of bubble sort def bubbleSort(a: Array[Int]): Unit = { var i:Int = 1; for (i \u0026lt;- 0 to (a.length - 1)) { for (j \u0026lt;- (i - 1) to 0 by -1) { if (a(j) \u0026gt; a(j + 1)) { val temp = a(j + 1) a(j + 1) = a(j) a(j) = temp } } } } } package com.tk.scala /** * Bubble sort singleton class with main method */ object BubbleSort { def main(args : Array[String]) : Unit = { // declaring and initiating local variable for performance measure var startTime:Long = java.lang.System.currentTimeMillis(); 
println(\u0026#34;Sorting start time: \u0026#34; + startTime) val arraySize = 10000; // declaring Integer array var arry : Array[Int] = new Array[Int](arraySize) println(\u0026#34;Populating array...\u0026#34;) for(i\u0026lt;- 0 to (arraySize - 1)){ arry(i) = arraySize - i; } println(\u0026#34;Sorting array...\u0026#34;) var algo:SortAlgo = new SortAlgo; // calling sort API algo.bubbleSort(arry) var endTime:Long = java.lang.System.currentTimeMillis(); println(\u0026#34;Sorting end time: \u0026#34; + endTime) println(\u0026#34;Total time taken:\u0026#34; + (endTime - startTime)) } } Comparison See the table below for performance metrics of the above two programs:\n Operation Size Scala Java Time to populate Array 100000 70 ms 3 ms Time to Sort Array by Bubble sort 100000 26187 ms 12104 ms Time to populate Array 10000 69 ms 1 ms Time to Sort Array by Bubble sort 10000 339 ms 140 ms Conclusion We have learned the basics of Scala and tried benchmarking it against Java. We found that Scala produces less verbose code but is slower in certain cases in comparison to Java.\nThis article is also supported by my friend Dhruv Bansal\nReferences Official Website - http://www.scala-lang.org/ http://www.scala-lang.org/node/25 http://www.codecommit.com/blog/scala/roundup-scala-for-java-refugees http://www.tutorialspoint.com/scala/scala_lists.htm http://joelabrahamsson.com/learning-scala-part-seven-traits/ In the next article we will see how to build web applications using Scala.\n","id":68,"tag":"scala ; java ; introduction ; language ; comparison ; technology","title":"Scala - Getting Started - Part 1","type":"post","url":"https://thinkuldeep.com/post/scala-getting-started-part1/"},{"content":"This article describes how to configure the charset for a JVM.\nWe faced an issue in a Linux environment: a file containing some Swedish characters was not being read correctly by the underlying JVM, since the default charset of the JVM was picked as UTF-8 (which did not match the file\u0026rsquo;s actual encoding).
But on the other hand, in a Windows environment, it was working fine, since the default charset was picked as Windows-1252 (which does support these characters).\nHere are the steps to solve the issue. ISO-8859-1 is a standard character encoding, generally intended for Western European languages. This charset can be set for the JVM in both Linux and Windows environments for Western European languages.\nSo the charset the JVM should pick, irrespective of the environment, can be defined on the JVM command line.\nFor example, for a Tomcat server, make the following changes:\n Open catalina.sh in your TOMCAT/bin directory and set JAVA_OPTS to change the default charset to ISO-8859-1 using the statement below JAVA_OPTS=\u0026quot;$JAVA_OPTS -Dfile.encoding=ISO-8859-1\u0026quot; and then restart the server.\n","id":69,"tag":"charset ; java ; jvm ; customizations ; technology","title":"Charset configuration in JVM","type":"post","url":"https://thinkuldeep.com/post/java-charset-configuration/"},{"content":"This document contains a proof of concept for research done on reading and writing OLE objects in Java. This research has been done using Apache POI. Please go through the references for information about Apache POI.\nThis article covers a POC to fetch OLE objects from RIF-formatted XML. Not just that, we need to store them in some viewable format.\nFor a given XML in RIF format, parse it, read the embedded OLE objects, and represent them in some viewable format using Java.\nThe XML was given in RIF (Requirements Interchange Format), which is used to exchange requirements along with the associated metadata. The given XML was generated from the tool DOORS by IBM.
It embeds the attachments BASE64-encoded under the tag \u0026lt;rif-xhtml:object\u0026gt; as follows:\n\u0026lt;rif-xhtml:object name=\u0026#34;2cdb48aa776249cfa74c46ef450dffb9\u0026#34; classid=\u0026#34;00020906-0000-0000-c000000000000046\u0026#34;\u0026gt; 0M8R4KGxGuEAAAAAAAAAAAAAAAAAAAAAPgADAP7/CQAGAAAAAAAAAAAAAAAP AAAAAQAAAAAAAAAAEAAAAgAAAAEAAAD+////AAAAAAAAAAAEAAAA+AAAAPkA AAD6AAAA+wAAAPwAAAD9AAAA/gAAAP8AAAAAAQAAAQEAAAIBAAADAQAA/way .. .. .. \u0026lt;/rif-xhtml:object\u0026gt; This matches well with the following information from the RIF 1.0 standard (p. 28)\n Object embedding in the XHTML name space With RIF, it is also possible to embed arbitrary objects (e.g. pictures) within text. This functionality is based on Microsoft’s clipboard and OLE (Object Linking and Embedding) technology and is realized within the XHTML name space by including the “Object Module”. The following table gives an overview of the XML element and attributes from the Object Module that are used for embedding objects:\n Element: object. Attributes (with types): classid (URI), data (URI), height (Length), name (CDATA), type (ContentType), width (Length). Minimal Content Model: (PCDATA). Each rif-xhtml:object tag has “name” and “classid” attributes, where classid = “00020906-0000-0000-c000000000000046” is the classid for a Microsoft Word document according to Windows Registry Hacks. So we will only be taking care of classid “00020906-0000-0000-c000000000000046” while parsing the XML.\nOLE File System Normally, these OLE documents are stored in subdirectories of the OLE filesystem. The exact location of the embedded documents will vary depending on the type of the master document, and the exact directory names will differ each time.
To figure out exactly which directory to look in, you will either need to process the appropriate OLE 2 linking entry in the master document, or simply iterate over all the directories in the filesystem.\nAs a general rule, you will find the same OLE 2 entries in the subdirectories as you would\u0026rsquo;ve found at the root of the filesystem were a document not to be embedded.\nApache POIFS File System POIFS file systems are essentially normal files stored on a Java-compatible platform\u0026rsquo;s native file system. They are typically identified by names ending in a four character extension noting what type of data they contain. For example, a file ending in \u0026ldquo;.xls\u0026rdquo; would likely contain spreadsheet data, and a file ending in \u0026ldquo;.doc\u0026rdquo; would probably contain a word processing document. POIFS file systems are called \u0026ldquo;file system\u0026rdquo;, because they contain multiple embedded files in a manner similar to traditional file systems. Along functional lines, it would be more accurate to call these POIFS archives. For the remainder of this document it is referred to as a file system in order to avoid confusion with the \u0026ldquo;files\u0026rdquo; it contains.\nPOIFS file systems are compatible with those document formats used by a well-known software company\u0026rsquo;s popular office productivity suite and programs outputting compatible data. Because the POIFS file system does not provide compression, encryption, or any other worthwhile feature, it\u0026rsquo;s not a good choice unless you require interoperability with these programs.\nThe POIFS file system does not encode the documents themselves.
For example, if you had a word processor file with the extension \u0026ldquo;.doc\u0026rdquo;, you would actually have a POIFS file system with a document file archived inside of that file system.\nProof of Concept Prerequisites/Dependencies Eclipse IDE commons-codec-1.7.jar – for base64 encoding/decoding poi-3.8-20120326.jar – Apache POI Jar poi-scratchpad-3.8-20120326.jar – Apache POI scratchpad jar which contains the MS Word API. Sample XML in RIF format. The constants file\npackage com.tk.poc; public class Constants { public static final String CLASS_ID_OFFICE = \u0026#34;00020906-0000-0000-c000000000000046\u0026#34;; public static final String ATTR_NAME = \u0026#34;name\u0026#34;; public static final String ATTR_CLASS_ID = \u0026#34;classid\u0026#34;; public static final String TAG_OLE = \u0026#34;rif-xhtml:object\u0026#34;; } Parser Use a SAX parser to parse the given XML and get the content of the “rif-xhtml:object” tags. This will parse the given XML and generate a separate file for each rif-xhtml:object tag having classid “00020906-0000-0000-c000000000000046”. It also decodes the content before writing it to files.\npackage com.tk.poc; ... public class Parser extends DefaultHandler { private StringBuilder stringBuilder; private String name; private List\u0026lt;String\u0026gt; generatedFiles = new ArrayList\u0026lt;String\u0026gt;(10); /** * Parse the xml file and return the names of the generated files * * @param filePath * for the xml. 
*/ public List\u0026lt;String\u0026gt; parseXML(String filePath) { stringBuilder = new StringBuilder(); SAXParserFactory factory = SAXParserFactory.newInstance(); try { SAXParser parser = factory.newSAXParser(); parser.parse(filePath, this); } catch (ParserConfigurationException e) { System.out.println(\u0026#34;ParserConfig error\u0026#34;); } catch (SAXException e) { System.out.println(\u0026#34;SAXException : xml not well formed\u0026#34;); } catch (IOException e) { System.out.println(\u0026#34;IO error\u0026#34;); } return generatedFiles; } @Override public void startElement(String uri, String localName, String qName, Attributes attributes) throws SAXException { name = null; if (Constants.TAG_OLE.equalsIgnoreCase(qName) \u0026amp;\u0026amp; Constants.CLASS_ID_OFFICE.equals(attributes.getValue(Constants.ATTR_CLASS_ID))) { stringBuilder = new StringBuilder(); name = attributes.getValue(Constants.ATTR_NAME); } } @Override public void characters(char[] ch, int start, int length) throws SAXException { stringBuilder.append(new String(ch, start, length)); } @Override public void endElement(String uri, String localName, String qName) throws SAXException { if (name != null \u0026amp;\u0026amp; Constants.TAG_OLE.equalsIgnoreCase(qName)) { FileOutputStream outputStream = null; try { String filePath = name; outputStream = new FileOutputStream(filePath); byte[] base64 = Base64.decodeBase64(stringBuilder.toString()); outputStream.write(base64); generatedFiles.add(filePath); } catch (FileNotFoundException e) { System.out.println(\u0026#34;File not found : \u0026#34; + name); } catch (IOException e) { e.printStackTrace(); } finally { try { if (outputStream != null) { outputStream.close(); } } catch (IOException e) { e.printStackTrace(); } } } } } Reading OLE Object Using Apache POI You can reach the OLE attachment content by following command java -classpath poi-3.8-20120326.jar org.apache.poi.poifs.dev.POIFSDump \u0026lt;filename\u0026gt; It generates following structure for 
the ole object generated from above xml. You can read/write OLE object properties. Here is example to read and print the properties\n... public class OLEReader { public static void main(String[] args) throws FileNotFoundException, IOException { final String filename = \u0026#34;1797c9be9d034d8f907239afb20d6547\u0026#34;; POIFSReader r = new POIFSReader(); r.registerListener(new MyPOIFSReaderListener()); r.read(new FileInputStream(filename)); } static class MyPOIFSReaderListener implements POIFSReaderListener { public void processPOIFSReaderEvent(POIFSReaderEvent event) { PropertySet ps = null; try { ps = PropertySetFactory.create(event.getStream()); } catch (NoPropertySetStreamException ex) { System.out.println(\u0026#34;No property set stream: \\\u0026#34;\u0026#34; + event.getPath() + event.getName() + \u0026#34;\\\u0026#34;\u0026#34;); return; } catch (Exception ex) { throw new RuntimeException(\u0026#34;Property set stream \\\u0026#34;\u0026#34; + event.getPath() + event.getName() + \u0026#34;\\\u0026#34;: \u0026#34; + ex); } /* Print the name of the property set stream: */ System.out.println(\u0026#34;Property set stream \\\u0026#34;\u0026#34; + event.getPath() + event.getName() + \u0026#34;\\\u0026#34;:\u0026#34;); final long sectionCount = ps.getSectionCount(); System.out.println(\u0026#34; No. of sections: \u0026#34; + sectionCount); /* Print the list of sections: */ List\u0026lt;Section\u0026gt; sections = ps.getSections(); int nr = 0; for (Section section : sections) { System.out.println(\u0026#34; Section \u0026#34; + nr++ + \u0026#34;:\u0026#34;); String s = section.getFormatID().toString(); System.out.println(\u0026#34; Format ID: \u0026#34; + s); /* Print the number of properties in this section. */ int propertyCount = section.getPropertyCount(); System.out.println(\u0026#34; No. 
of properties: \u0026#34; + propertyCount); /* Print the properties: */ Property[] properties = section.getProperties(); for (int i2 = 0; i2 \u0026lt; properties.length; i2++) { /* Print a single property: */ Property p = properties[i2]; long id = p.getID(); long type = p.getType(); Object value = p.getValue(); System.out.println(\u0026#34; Property ID: \u0026#34; + id + \u0026#34;, type: \u0026#34; + type + \u0026#34;, value: \u0026#34; + value); } } } } } It will print something like below\n No property set stream: \u0026quot;\\0,4,2540,19841Table\u0026quot; No property set stream: \u0026quot;\\0,4,2540,1984_OlePres000\u0026quot; Property set stream \u0026quot;\\0,4,2540,1984_SummaryInformation\u0026quot;: No. of sections: 1 Section 0: Format ID: {F29F85E0-4FF9-1068-AB91-08002B27B3D9} No. of properties: 16 Property ID: 1, type: 2, value: 1252 Property ID: 2, type: 30, value: Property ID: 3, type: 30, value: Property ID: 4, type: 30, value: Caspari, Michael (415-Extern) Property ID: 5, type: 30, value: Property ID: 6, type: 30, value: Property ID: 7, type: 30, value: Normal.dotm Property ID: 8, type: 30, value: Caspari, Michael (415-Extern) Property ID: 9, type: 30, value: 2 Property ID: 18, type: 30, value: Microsoft Office Word Property ID: 12, type: 64, value: Wed Sep 19 15:32:00 IST 2012 Property ID: 13, type: 64, value: Wed Sep 19 15:33:00 IST 2012 Property ID: 14, type: 3, value: 1 Property ID: 15, type: 3, value: 6 Property ID: 16, type: 3, value: 38 Property ID: 19, type: 3, value: 0 Property set stream \u0026quot;\\0,4,2540,1984_DocumentSummaryInformation\u0026quot;: No. of sections: 1 Section 0: Format ID: {D5CDD502-2E9C-101B-9397-08002B2CF9AE} No. 
of properties: 12 Property ID: 1, type: 2, value: 1252 Property ID: 15, type: 30, value: ITI/OD Property ID: 5, type: 3, value: 1 Property ID: 6, type: 3, value: 1 Property ID: 17, type: 3, value: 43 Property ID: 23, type: 3, value: 917504 Property ID: 11, type: 11, value: false Property ID: 16, type: 11, value: false Property ID: 19, type: 11, value: false Property ID: 22, type: 11, value: false Property ID: 13, type: 4126, value: [B@19c26f5 Property ID: 12, type: 4108, value: [B@c1b531 No property set stream: \u0026quot;\\0,4,2540,1984WordDocument\u0026quot; No property set stream: \u0026quot;\\0,4,2540,1984_CompObj\u0026quot; No property set stream: \u0026quot;\\0,4,2540,1984_Ole\u0026quot; Writing OLE Object to a viewable format An OLE object can be identified by its class id and its properties. We noticed that the OLE object present in the stream parsed from the XML is contained inside one folder under the ROOT folder, and also that the OLE object is actually an MS Word document. Here is a program which converts the OLE object stream to an MS Word document.\npackage com.tk.poc; ... /** * This class reads the OLE object stream and converts it to a Word document. 
* * @author kuldeep * */ public class OLE2DocConvertor { /** * * @param filePath * @return path of generated document */ public static String convert(String filePath) { String output = filePath + \u0026#34;.doc\u0026#34;; FileInputStream is = null; FileOutputStream out = null; try { is = new FileInputStream(filePath); POIFSFileSystem fs = new POIFSFileSystem(is); // Assuming there is one folder under root which is an OLE object // such as \u0026#34;/0,1,13204,13811\u0026#34; , \u0026#34;/0,1,16267,7627\u0026#34; DirectoryNode firstFolder = (DirectoryNode) fs.getRoot().getEntries().next(); HWPFDocument embeddedWordDocument = new HWPFDocument(firstFolder); out = new FileOutputStream(output); embeddedWordDocument.write(out); } catch (FileNotFoundException e) { System.out.println(\u0026#34;File paths are not correct: \u0026#34; + e.getMessage()); } catch (IOException e) { System.out.println(\u0026#34;Error in converting ole to doc : \u0026#34; + e.getMessage()); } finally { try { if (is != null) { is.close(); } if (out != null) { out.close(); } } catch (IOException e) { e.printStackTrace(); } } return output; } } Along similar lines, you can extract the actual content of other embedded types as well, such as an image, an Excel sheet, or any other format.\n The Main - Here is the executor program which takes the XML (RIF) as input and generates a separate document for each embedded element. 
package com.tk.poc; import java.util.List; /** * Main class * @author kuldeep * */ public class Main { public static void main(String[] args) { String xmlFileName = \u0026#34;Export_Muster.xml\u0026#34;; if(args.length\u0026gt;0){ xmlFileName = args[0]; } Parser parser = new Parser(); System.out.println(\u0026#34;Input file : \u0026#34; + xmlFileName); List\u0026lt;String\u0026gt; generatedFiles = parser.parseXML(xmlFileName); String outputFile = null; for (String fileName : generatedFiles) { outputFile = OLE2DocConvertor.convert(fileName); System.out.println(\u0026#34;Embedded OLE object converted to document : \u0026#34; + outputFile); } }\t} Conclusion We were able to parse the RIF XML and to separate out the embedded attachments into individual viewable/browsable files.\nReferences http://en.wikipedia.org/wiki/Requirements_Interchange_Format http://poi.apache.org/poifs/how-to.html http://poi.apache.org/poifs/fileformat.html http://msdn.microsoft.com/en-us/library/windows/desktop/ms693383(v=vs.85).aspx http://en.wikibooks.org/wiki/Windows_Registry_Hacks/Alphanumerical_List_of_Microsoft_Windows_Registry_Class_Identifiers http://poi.apache.org/hwpf/index.html ","id":70,"tag":"ole ; java ; apache poi ; embedded ; RIF ; XML ; IBM DOORS ; technology","title":"Working with Embedded OLE Objects in Java","type":"post","url":"https://thinkuldeep.com/post/java-working-with-ole-objects/"},{"content":"The JavaScript Plugin Framework is implemented to add extra functionality to an existing web-based application without actually changing the application code. This framework is also helpful when some functionality needs to be controlled for different user profiles.\nFor example, to add one extra button to a menu, we can write the plugin directly, defining what that button is going to do on click and which events it should handle, in one JavaScript file rather than making changes in the application code for the same.\nThe Plugin Framework The Plugin Class // Definition of a plugin. 
Plugin = function () { this.id = \u0026#34;\u0026#34;; this.customData = {}; } Plugin.prototype = { /** * Initialize method. */ init : function () { }, /** * Handle event method. * @param eventId * @param payload */ handleEvent: function (eventId, payload) { } } Plugin Registry Plugin registry manage all the plugins and propagating event to the respective plugin.\n/** * Plugins.js * * It stores the plugins. This file defines a plugin and expose an API to register the plugin. */ Plugins = { /** * Plugins array ordered for every event. */ pluginsData: [], /** * Map of Event id and Index in plugin array. */ mapEventIndexes: {}, /** * Register plugin * @param {Plugin Object} - object of plugin. * @param {String} eventIds - comma separated events ids. */ registerPlugin: function(plugin, eventIds){ var eventIdArr = eventIds.split(\u0026#34;,\u0026#34;); for ( var i = 0; i \u0026lt; eventIdArr.length; i++) { var eventId = eventIdArr[i]; var eventIndex = this.mapEventIndexes[eventId]; //check if event is already registered. \tif(eventIndex || eventIndex == 0){ var pluginArr = this.pluginsData[eventIndex]; pluginArr[pluginArr.length] = plugin; } else { //Add new entry. \teventIndex = this.pluginsData.length; var value = \u0026#34;{\u0026#34; + eventId +\u0026#34; : \u0026#34; + eventIndex + \u0026#34;}\u0026#34;; value = eval(\u0026#39;(\u0026#39; +value+ \u0026#39;)\u0026#39;); jQuery.extend(this.mapEventIndexes, value); this.pluginsData[eventIndex] = [plugin]; } } }, /** * Call handle event of all plugins who has registered for the event. * @param {String} eventId - event identifier. 
* @param {Object} payload - event payload */ handleEvent : function (eventId, payload){ var eventIndex = this.mapEventIndexes[eventId]; var pluginArr = this.pluginsData[eventIndex]; if(pluginArr){ jQuery.each( pluginArr, function (index, plugin){ plugin.handleEvent(eventId, payload); }); } } }; Extending the existing website Lets extend the existing website using the plugin framework\n Define a plugin.\nimage_rotation.js\n/** * Plugin for image rotation functionality. */ var imageRotationPlugin = new Plugin(); imageRotationPlugin.id = \u0026#34;Image_Rotation_Plugin\u0026#34;; Define which all events the plugin would handle\n/** * Override Plugin.handleEvent * @See Plugins.js */ imageRotationPlugin.handleEvent = function(eventId, payload){ if(eventId == \u0026#34;rotate_image\u0026#34;) { rotateImage(payload.id) // call method to rotate a image on given id } }; Initialise the Plugin, where do you want to add this button/functionality.\n/** * Override Plugin.init * @See Plugins.js */ imageRotationPlugin.init = function(){ //Draw the button. var toolset = jQuery(\u0026#39;#menu\u0026#39;); toolset.prepend(\u0026#34;\u0026lt;li class=\u0026#39;imagerotate\u0026#39;\u0026gt;\u0026lt;a id=\u0026#39;src_img\u0026#39; class=\u0026#39;tooltip\u0026#39; title=\u0026#39;Rotate the selected image\u0026#39;\u0026gt;\u0026lt;/a\u0026gt;\u0026lt;/li\u0026gt;\u0026#34;); var imageRoateLink = jQuery(\u0026#39;#menu li.imagerotate a\u0026#39;); //Add event handler to download the image map. 
imageRoateLink.click(function(){ Plugins.handleEvent(\u0026#39;rotate_image\u0026#39;, {id:\u0026#34;image_source\u0026#34;}); }); } And finally, register the plugin\njQuery(document).ready(function(){ Plugins.registerPlugin(imageRotationPlugin, \u0026#34;rotate_image\u0026#34;); imageRotationPlugin.init(); }); This plugin will add a menu item to the existing website. Without making changes in the main website code, we can manage the Plugins folder and keep extending the functionality.\nWe can call Plugins.handleEvent('rotate_image', {id:\u0026quot;image_source\u0026quot;}) from any place in the code, and the plugin will receive the event. We can register our plugin for multiple events, even for the events raised globally or broadcast by other plugins.\n","id":71,"tag":"JavaScript ; plugin ; framework ; extendability ; architecture ; technology","title":"JavaScript Plugin Framework","type":"post","url":"https://thinkuldeep.com/post/javascript-plugin-framework/"},{"content":"We were getting the following error on the production server (Linux, JBoss). We were in the middle of user acceptance testing with the client, and many users were accessing the application to simulate various scenarios in a live session.\nThe application was crashing just 2-3 minutes after start with the error below, and no users were able to access the application. We were expected to fix the issue ASAP, as the users were assembled for the web session, all waiting for the server to come back up.\nERROR [org.apache.jasper.compiler.Compiler] Compilation error java.io.FileNotFoundException: /home/user/jboss/server/user/work/jboss.web/localhost/user/org/apache/jsp/jsp/common/header_jsp.java (Too many open files) at java.io.FileInputStream.open(Native Method) at java.io.FileInputStream.(FileInputStream.java:106) at java.io.FileInputStream.(FileInputStream.java:66) at org.apache.jasper.compiler.JDTCompiler$1CompilationUnit.getContents(JDTCompiler.java:103) Solving the Chaos! 
We followed the below approach and steps:\n Recorded the logs.\n Server Reboot – this was the quickest option.\n We rebooted the server.\n But this option did not give us much time to analyze the recorded logs; the application crashed again after 5 minutes, as soon as the concurrent users increased.\n And once again we noticed the above exception in the logs: “Too many open files”. Actually the exception was misleading: “FileNotFoundException”.\n Then we borrowed some time from the users and continued the analysis.\n We had the following points in mind:\n Files/Resources were not being closed properly – I was sure that there was no file handling or resource handling related code introduced in that release, and IO handling was proper in previous releases. We had not faced such an issue before; if there were some coding issue then we would have encountered it earlier.\n The next thing that came to our mind was: what is the allowed limit for the number of open files? – In the release, a number of JSP files were introduced, and it could be possible that the compiler or OS allows only a certain number of open files at a time.\n We started analyzing; the problem looked more related to the platform (OS, Hardware \u0026hellip;) as this was the only difference between the QA and Production servers. The Production server was owned by the client.\n And we found the following:\n Run the following command to know the currently used files:\nls -l /proc/[jbosspid]/fd\n Replace [jbosspid] by the JBoss process id; get the pid by: ps -ef | grep java\n[user@system ~]$ ps -ef | grep java user 23096 23094 0 Apr27 ? 00:04:55 java -Xms3072m -Xmx3072m -jar /opt/user/app/deploy/app-server.jar user 31090 29459 0 12:16 pts/2 00:00:00 grep java And locate the process id for the JBoss process. For the above it is 23096. 
Run the next command to get the list of file descriptors being used currently:\n[user@system ~]$ ls -l /proc/23096/fd total 0 lr-x------ 1 remate remate 64 May 1 12:13 0 -\u0026gt; /dev/null l-wx------ 1 remate remate 64 May 1 12:13 1 -\u0026gt; /opt/user/jboss/server/user/log/console.log lr-x------ 1 remate remate 64 May 1 12:13 10 -\u0026gt; /opt/user/jboss/lib/endorsed/jbossws-native-saaj.jar lr-x------ 1 remate remate 64 May 1 12:13 101 -\u0026gt; /usr/share/fonts/liberation/LiberationSerif-Italic.ttf lr-x------ 1 remate remate 64 May 1 12:13 102 -\u0026gt; /usr/share/fonts/dejavu-lgc/DejaVuLGCSans-BoldOblique.ttf lrwx------ 1 remate remate 64 May 1 12:13 105 -\u0026gt; socket:[33496164 .. .. We analyzed the result of the above commands and did not notice any abnormality.\n System Level File Descriptor Limit Run the following command: cat /proc/sys/fs/file-max\n[user@system ~]$ cat /proc/sys/fs/file-max 743813 It was already quite large; we did not find any need to change it. Anyway, it can be changed by adding \u0026lsquo;fs.file-max = 1000000\u0026rsquo; to /etc/sysctl.conf. “1000000” is the limit you want to set.\n User Level File Descriptor Limit Run the following command: ulimit –n [user@system ~]$ ulimit -n 1024 ‘1024’ file descriptors per process for a user could be a low value for a concurrent JSP application, where a lot of JSPs need to be compiled initially. From our analysis, we found that the open FDs were close to the 1024 limit.\nWe changed this limit by appending the following lines in /etc/security/limits.conf\nuser soft nofile 16384 user hard nofile 16384 and then checked again with ulimit –n\n[user@system ~]$ ulimit -n 16384 Looks like we have solved it.\n We requested our client to apply the changes mentioned in the above steps on the production server. After this change we never faced the issue again. 
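The shell commands above inspect open descriptors from outside the process. As a quick cross-check from inside the JVM, you can list /proc/self/fd programmatically. This is a hypothetical helper (Linux-only, not part of the original fix) named `FdMonitor` for illustration:

```java
import java.io.File;

// Hypothetical helper (not part of the original fix): counts the file
// descriptors currently open by this JVM by listing /proc/self/fd.
// Linux-specific; returns -1 where /proc is unavailable.
public class FdMonitor {
    public static int openFdCount() {
        String[] fds = new File("/proc/self/fd").list();
        return fds == null ? -1 : fds.length;
    }

    public static void main(String[] args) {
        // Compare this count against the per-process limit (ulimit -n)
        // to alert before "Too many open files" is hit.
        System.out.println("Open file descriptors: " + FdMonitor.openFdCount());
    }
}
```

Logging this count periodically would have shown the descriptor usage creeping toward the 1024 limit well before the crash.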
We were able to solve the issue in 20 minutes, and in another 10 minutes (taken by the client’s IT process) the server was up again; the session resumed and the UAT went well.\n References: http://eloquence.marxmeier.com/sdb/html/linux_limits.html http://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/ ","id":72,"tag":"jsp ; java ; compilation ; issues ; os ; tuning ; configuration ; technology","title":"Strange JSP Compilation Issue ","type":"post","url":"https://thinkuldeep.com/post/jsp-compilation-issues/"},{"content":"On one of our production environments, we had an issue in our application related to time, especially the modified time and creation time of the entities. It was quite random in nature for some entities.\nWe suspected that there might be an issue in the timezone configuration.\nAnalysing the problem On the production environment, we had limited permissions. We were not allowed to change the system timezone. We noticed that JBoss was automatically picking the system timezone “Europe/Berlin” (which was configured from the root user), while the timezone for the Linux user we had access to showed GMT time. The MySQL database was picking the GMT timezone.\nThis mismatch of timezones was creating issues when creating or updating records in the database. The default current date generated from Java came out different from the “CURRENT_TIMESTAMP” of MySQL.\nHow JVM gets default timezone The JVM gets the default timezone as follows:\n It looks at the environment variable TZ; if it is not set then the JVM looks for the file /etc/sysconfig/clock and tries to find the ZONE entry. If the ZONE entry is not found, the JVM will compare the contents of /etc/localtime with the contents of every file in /usr/share/zoneinfo recursively. If found it will assume that timezone. 
[admin@localhost ~]$ echo $TZ Europe/Berlin [admin@localhost ~]$ or\n[admin@localhost ~] $ cat /etc/sysconfig/clock ZONE=”Europe/Berlin” UTC=false ARC=false [admin@localhost ~] $ Set timezone in MySQL database Let’s set the timezone in MySQL database.\nIf you find any value configured in above command , then you need to set the same value for MySQL timezone.\nFor example, if you find “Europe/Berlin” as shown in the above example, then the value of MYSQL timezone can be changed as follows:\n Verify MySQL timezone database:\nIf you find a record as below\nmysql\u0026gt; SELECT * from mysql.time_zone_name t where name = ‘Europe/Berlin’; +----------------------+--------------------+ | Name | Time_zone_id | +----------------------+---------------------+ | Europe/Berlin | 403 | +---------------------+---------------------+ 1 row in set ( 0.00 sec) then go for step 3 to setup mysql timezone otherwise go to step 2 to load mysql timezone from Operating System’s timezone database.\n Load Mysql timezone database from Operating System’s timezone database using following command :\nmysql_tzinfo_to_sql [time_zone_db_path] | mysql –u[username] -p[password] mysql where [username] and [password] are replaced by the actual values, and username here should be root which is having the mysql database administration access. The [time_zone_db_path] should be replaced with actual path for Operating System’s timezone database (the default path on linux is /usr/share/zoneinfo)\nAs shown above, this command will ignore some timezones which are not supported by mysql. After running the above command, please verify mysql timezone in the database again (as shown step 1). The timezone should be present now. However, if you still do not find any record in mysql database of your interest, then please contact the system administrator of Linux, to install the correct timezones in operating system’s database. 
If everything works correctly, please move to the next step of setting the MySQL timezone.\n Setting MySQL timezone\na. Login into the system with user root.\nb. Open file /etc/init.d/mysqld\nc. Locate following code\n# Pass all the options determined above, to ensure consistent behavior. # In many cases mysqld_safe would arrive at the same conclusions anyway # but we need to be sure. ` /usr/bin/mysqld_safe --datadir=\u0026#34;$datadir\u0026#34; --socket=\u0026#34;$socketfile\u0026#34; \\ --log-error=\u0026#34;$errlogfile\u0026#34; --pid-file=\u0026#34;$mypidfile\u0026#34; \\ /dev/null 2\u0026gt;\u0026amp;1 \u0026amp;` d. Add --timezone=\u0026quot;Europe/Berlin\u0026quot; as follows:\n/usr/bin/mysqld_safe --datadir=\u0026#34;$datadir\u0026#34; --timezone=\u0026#34;Europe/Berlin\u0026#34; --socket=\u0026#34;$socketfile\u0026#34; \\ --log-error=\u0026#34;$errlogfile\u0026#34; --pid-file=\u0026#34;$mypidfile\u0026#34; \\ /dev/null 2\u0026gt;\u0026amp;1 \u0026amp; More details on how to set up the timezone on Linux can be found at Linux Tips and Default Timezone in Java\n ","id":73,"tag":"database ; mysql ; java ; jvm ; timezone ; configuration ; technology","title":"Default Timezone in Java and MySQL","type":"post","url":"https://thinkuldeep.com/post/java-mysql-timezone-configuration/"},{"content":"We faced an \u0026lsquo;out of space\u0026rsquo; issue on disk even after clearing old data on the database server (MySQL) and temporary data in the application. We noticed that 90% of the space was occupied by the database tables which were used to store very large de-normalized statistical data, and when we tracked those tables we noticed there was not much data in them, but they were still occupying space on disk.\nKnowing the problem After analysis we came to know that MyISAM tables leave memory holes when rows are deleted from the tables, and this leads to fragmentation and requires more space on disk. 
Run the following command at the MySQL prompt to know the fragmented space in a table:\nSHOW TABLE STATUS and refer to the column Data_free.\nSolving the chaos! Increased the disk space. (This was not a solution, but just delayed the problem until the next de-fragmentation.)\n Executed the following command to defrag the tables wherever we faced the issue.\nOPTIMIZE [NO_WRITE_TO_BINLOG | LOCAL] TABLE tbl_name [, tbl_name] ... But this has the following issues:\n The above operation is a heavy one. While it is in progress, those database tables remain inaccessible, so it needs to be executed in off hours when nobody is using the system. Also, this activity needs to be repeated at some frequent interval. Solution to the above issues:\n Implemented a Java program which runs the optimization command on the database tables provided in a CSV file. Divided the heavy tables into two sets to equalize the memory size of the tables. Scheduled the Java program to run one set of tables on one weekend and the other set on the next weekend. This way the Java program runs early every weekend morning and optimizes one set of tables. In other words, every stats table gets optimized bi-weekly. This approach fixed most of the out of space issues.\nBut we faced more.\n Optimization/Defragmentation/Repair of a table requires free space on disk. And if any optimization fails in between, it may corrupt the table, which then needs to be repaired by running the REPAIR command.\nWe faced an issue on our production server where a big table (40+ GB data and 10GB index) got corrupted as optimization failed on it, because optimizing and repairing a table uses the system’s temporary space to rebuild table indexes, and the temp space (the /tmp folder, which was a mounted drive) on the system was just 9GB. 
This led to an out of space issue.\nThe space requirement for a table in the temp folder can be calculated as:\n (largest_key + row_pointer_length) * number_of_rows * 2 See http://dev.mysql.com/doc/refman/5.1/en/myisamchk-memory.html for more details\nAlso, to optimize a table, an amount of free space equal to the table size is required on the disk where the table actually exists.\n To fix the issue we increased the temp space by extending the tmp directory as follows, on another drive which had more space:\n Create a folder, say /var/lib/mysql/tmp Give permission to this folder so that mysql can write temporary files. Edit the mysql configuration file /etc/my.cnf and add the following line tmpdir=/tmp/:/var/lib/mysql/tmp/ Start the mysql server.\n Go to the mysql prompt and check the variable tmpdir.\n mysql\u0026gt; show variables like \u0026#34;tmpdir\u0026#34;; +---------------+---------------------------+ | Variable_name | Value | +---------------+---------------------------+ | tmpdir | /tmp/:/var/lib/mysql/tmp/ | +---------------+---------------------------+ 1 row in set (0.00 sec) We also updated the Java program to calculate the available disk size before optimizing a table; if the current state is not suitable for optimization, it skips that table and notifies the administrator about the error. The Java program was also updated to send the status of the optimization by email, including the space recovered after optimization, the tables optimized, etc.\n After the above, we had fixed 95% of the issues, but still more: Since optimization is a heavy operation, it takes 4 hours on average. We encountered swap memory being used on the system even when nobody was using the application. It’s true that this is a very memory intensive application as it needs to process a lot of simulation data in memory. We noticed that after 2-3 months the system started using a lot of swap memory, which led to slow performance. 
After analysis we came to know that the above Java program did not release memory for a long time, and the garbage collector did not behave correctly for it over such long runs. It also put a lot of load on the CPU.\n The above problem was fixed by implementing a shell script in place of the Java program; now the shell script does all the operations which were earlier done by Java. It works great and we never faced that out of space issue again for the same set of data.\n References: http://articles.techrepublic.com.com/5100-10878_11-5193721.html, http://dev.mysql.com/doc/refman/5.0/en/myisamchk-repair-options.html http://forums.mysql.com/read.php?34,174262,174262 http://dev.mysql.com/doc/refman/5.0/en/temporary-files.html http://dev.mysql.com/doc/refman/5.1/en/myisamchk-table-info.html ","id":74,"tag":"database ; mysql ; tuning ; optimization ; technology","title":"MySQL Tuning","type":"post","url":"https://thinkuldeep.com/post/mysql-tuning/"},{"content":"This article lets you start with the rule engine Drools, with a step by step guide and a series of examples. It starts with an overview and then covers detailed examples.\nRule Engine Overview The underlying idea of a rule engine is to externalize the business or application logic. A rule engine can be viewed as a sophisticated interpreter of if-then statements. The if-then statements are the rules. A rule is composed of two parts, a condition and an action: when the condition is met, the action is executed. The if portion contains conditions (such as amount \u0026gt;= $100), and the then portion contains actions (such as offer discount 5%). The inputs to a rule engine are a collection of rules called a rule execution set and data objects. 
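The condition/action split described above can be sketched without any engine at all. This is a minimal illustration in plain Java (not Drools; `MiniRuleEngine` and its method names are invented for this sketch), pairing a "when" Predicate with a "then" action:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

// Minimal sketch of the condition/action idea (plain Java, not Drools):
// each rule pairs a "when" Predicate with a "then" action.
public class MiniRuleEngine {
    static class Rule {
        final Predicate<Double> condition;
        final Function<Double, Double> action;
        Rule(Predicate<Double> condition, Function<Double, Double> action) {
            this.condition = condition;
            this.action = action;
        }
    }

    private final List<Rule> rules = new ArrayList<>();

    public void addRule(Predicate<Double> when, Function<Double, Double> then) {
        rules.add(new Rule(when, then));
    }

    // Applies the action of the first rule whose condition matches the fact.
    public double apply(double amount) {
        for (Rule r : rules) {
            if (r.condition.test(amount)) {
                return r.action.apply(amount);
            }
        }
        return amount; // no rule fired
    }

    public static void main(String[] args) {
        MiniRuleEngine engine = new MiniRuleEngine();
        // "When amount >= $100, then offer a 5% discount"
        engine.addRule(a -> a >= 100, a -> a * 0.95);
        System.out.println(engine.apply(200.0)); // prints 190.0
    }
}
```

A real rule engine adds what this sketch lacks: rules authored outside the source code, efficient matching over thousands of facts, and chaining of rule firings.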
The outputs are determined by the inputs and may include the original input data objects with modifications, new data objects, and possible side effects (such as sending email to the customer).\nRule engines should be used for applications with highly dynamic business logic and for applications that allow end users to author business rules. A rule engine is a great tool for efficient decision making because it can make decisions based on thousands of facts quickly, reliably, and repeatedly.\nRule engines are used in applications to replace and manage some of the business logic. They are best used in applications where the business logic is too dynamic to be managed at the source code level \u0026ndash; that is, where a change in a business policy needs to be immediately reflected in the application. Rules can fit into applications such as the following:\n At the application tier to manage dynamic business logic and the task flow At the presentation layer to customize the page flow and work flow, as well as to construct custom pages based on session state Adopting a rule-based approach for your applications has the following advantages:\n Rules that represent policies are easily communicated and understood. Rules retain a higher level of independence than conventional programming languages. Rules separate knowledge from its implementation logic. Rules can be changed without changing source code; thus, there is no need to recompile the application\u0026rsquo;s code. A Standard API for Rule Engines in Java In November, the Java Community approved the final draft of the Java Rule Engine API specification (JSR-94). This new API gives developers a standard way to access and execute rules at runtime. 
As implementations of this new spec ripen and are brought to the market, programming teams will be able to pull executive logic out of their applications.\nA new generation of management tools will be needed to help executives define and refine the behaviour of their software systems. Instead of rushing changes in application behaviour through the development cycle and hoping it comes out correct on the other end, executives will be able to change the rules, run tests in the staging environment and roll out to production as often as becomes necessary.\nWorking with Drools Rule Engine Drools is a business rule management system (BRMS) with a forward chaining inference based rules engine, using an enhanced implementation of the Rete algorithm.\nDrools supports the JSR-94 standard for its business rule engine and enterprise framework for the construction, maintenance, and enforcement of business policies in an organization, application, or service.\nA Simple Example Create a simple Java Project.\n Add a package com.tk.drool\n Create a class in this package and name it RuleRunner.java\npackage com.tk.drool; import java.io.InputStreamReader; import org.drools.RuleBase; import org.drools.RuleBaseFactory; import org.drools.WorkingMemory; import org.drools.compiler.PackageBuilder; import org.drools.rule.Package; public class RuleRunner { public RuleRunner(){} public void runRules(String[] rules, Object... 
facts) throws Exception { RuleBase ruleBase = RuleBaseFactory.newRuleBase(); PackageBuilder builder = new PackageBuilder(); for(String ruleFile : rules) { System.out.println(\u0026#34;Loading file: \u0026#34;+ruleFile); builder.addPackageFromDrl( new InputStreamReader( this.getClass().getResourceAsStream(ruleFile))); } Package pkg = builder.getPackage(); ruleBase.addPackage(pkg); WorkingMemory workingMemory = ruleBase.newStatefulSession(); for(Object fact : facts) { System.out.println(\u0026#34;Inserting fact: \u0026#34;+fact); workingMemory.insert(fact); } workingMemory.fireAllRules(); } public static void simple1() throws Exception { new RuleRunner().runRules( new String[]{\u0026#34;/simple/rule01.drl\u0026#34;}); } public static void main(String[] args) throws Exception { simple1(); } } Create a folder name “simple” and add a rule file containing rules\npackage simple rule \u0026#34;Rule 01\u0026#34; when eval (1==1) then System.out.println(\u0026#34;Rule 01 Works\u0026#34;); end Create a folder lib and add the following jars downloaded using this link http://www.jboss.org/drools/downloads.html\n drools-api-5.0.1.jar drools-compiler-5.0.1.jar drools-core-5.0.1.jar Download org.eclipse.jdt.core_3.4.4.v_894_R34x.jar from http://www.java2s.com/Code/Jar/o/Downloadorgeclipsejdtcore344v894R34xjar.htm\n Download antlr-3.1.1-runtime.jar from http://www.java2s.com/Code/Jar/ABC/Downloadantlr311runtimejar.htm\n Download mvel2-2.0.7.jar from http://mirrors.ibiblio.org/pub/mirrors/maven2/org/mvel/mvel2/2.0.7/mvel2-2.0.7.jar\n Add all these jars to buildpath.\n Run RuleRunner.java as a Java application and you will get the output as show below.\n This is how you run the rule.\nA More Complex Example Create a simple Java Project.\n Add a package com.tk.drool\n Create a class in this package and name it RuleRunner.java\npackage com.tk.drool; public class RuleRunner { public RuleRunner(){} public void runRules(String[] rules, Object... 
facts) throws Exception { RuleBase ruleBase = RuleBaseFactory.newRuleBase(); PackageBuilder builder = new PackageBuilder(); for(String ruleFile : rules) { System.out.println(\u0026#34;Loading file: \u0026#34;+ruleFile); builder.addPackageFromDrl( new InputStreamReader( this.getClass().getResourceAsStream(ruleFile))); } Package pkg = builder.getPackage(); ruleBase.addPackage(pkg); WorkingMemory workingMemory = ruleBase.newStatefulSession(); for(Object fact : facts) { System.out.println(\u0026#34;Inserting fact: \u0026#34;+fact); workingMemory.insert(fact); } workingMemory.fireAllRules(); } public static void simple1() throws Exception { Object[] flows = { new Myflow(new SimpleDate(\u0026#34;01/01/2012\u0026#34;), 300.00), new Myflow(new SimpleDate(\u0026#34;05/01/2012\u0026#34;), 100.00), new Myflow(new SimpleDate(\u0026#34;11/01/2012\u0026#34;), 500.00), new Myflow(new SimpleDate(\u0026#34;07/01/2012\u0026#34;), 800.00), new Myflow(new SimpleDate(\u0026#34;02/01/2012\u0026#34;), 400.00), }; new RuleRunner().runRules(new String[]{\u0026#34;/simple/rule01.drl\u0026#34;}, flows); } public static void main(String[] args) throws Exception { simple1(); } static class SimpleDate extends Date { public SimpleDate(String datestr) throws Exception { SimpleDateFormat format = new SimpleDateFormat(\u0026#34;dd/MM/yyyy\u0026#34;); this.setTime(format.parse(datestr).getTime()); } } } Create a Myflow class in the same package\npackage com.tk.drool; import java.util.Date; public class Myflow { private Date date; private double amount; public Myflow(){} public Myflow(Date date, double amount) { this.date = date; this.amount = amount; } public Date getDate() { return date; } public void setDate(Date date) { this.date = date; } public double getAmount() { return amount; } public void setAmount(double amount) { this.amount = amount; } public String toString() { return \u0026#34;Myflow[date=\u0026#34;+date+\u0026#34;,amount=\u0026#34;+amount+\u0026#34;]\u0026#34;; } } Create rule01.drl 
file containing the rules in a folder named “simple”.\npackage simple import com.tk.drool.*; rule \u0026#34;Rule 01\u0026#34; when $cashflow : Myflow( $date : date, $amount : amount ) not Myflow( date \u0026lt; $date) then System.out.println(\u0026#34;Cashflow: \u0026#34;+$date+\u0026#34; :: \u0026#34;+$amount); retract($cashflow); end Run RuleRunner.java as a Java application. It will print all the executed rules.\n Rules UI Editor | Guvnor Drools Guvnor is a centralized repository for Drools Knowledge Bases, with rich web based GUIs, editors, and tools to aid in the management of large numbers of rules. Guvnor makes it easy to create, manage and delete rules; consuming these rules in a Java client application, and dynamically reading POJO models and firing rules over them, is possible directly from the UI.\nEclipse also has a great UI plugin - the drools eclipse GUI editor.\nIt is a business rules manager that allows people to manage rules in a multi-user environment; it is a single point of truth for your business rules, allowing change in a controlled fashion, with user friendly interfaces.\nGuvnor is the name of the web and network related components for managing rules with drools. This, combined with the core drools engine and other tools, forms the business rules manager.\n Download the Guvnor web application using the following link - http://download.jboss.org/drools/release/5.3.0.Final/drools-distribution-5.3.0.Final.zip Deploy the downloaded war file stored at the location \guvnor-distribution-5.3.0.Final\binaries\guvnor-5.3.0.Final-tomcat-6.0.war in tomcat 6.0 Run the application using the following address - http://localhost:8080/guvnor-5.3.0.Final-tomcat-6.0/org.drools.guvnor.Guvnor/Guvnor.jsp You can now create rules directly in the UI, and can upload a jar file with fact models that can be used to define rules. 
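Once a package is built in Guvnor, a Java client can also load it at runtime through a knowledge agent instead of compiling DRL files locally. This is only a sketch against the Drools 5.x API — the agent name and the changeset.xml location are placeholders, and drools-core, drools-compiler and knowledge-api must be on the classpath:

```java
import org.drools.KnowledgeBase;
import org.drools.agent.KnowledgeAgent;
import org.drools.agent.KnowledgeAgentFactory;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;

public class GuvnorRuleClient {
    public static void main(String[] args) {
        // The agent reads the resources listed in changeset.xml (for example a
        // Guvnor package URL) and rebuilds the knowledge base when they change.
        KnowledgeAgent agent = KnowledgeAgentFactory.newKnowledgeAgent("guvnorAgent");
        agent.applyChangeSet(ResourceFactory.newClassPathResource("changeset.xml"));
        KnowledgeBase kbase = agent.getKnowledgeBase();

        StatefulKnowledgeSession session = kbase.newStatefulKnowledgeSession();
        try {
            // Insert facts and fire rules exactly as with a locally built RuleBase.
            session.fireAllRules();
        } finally {
            session.dispose();
        }
    }
}
```

The benefit over the classpath approach used above is that rules edited in Guvnor are picked up without redeploying the client.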
Guvnor hosts the repository of rules; you can access the rule repository in the Eclipse Guvnor plugin or directly in your applications.\nReferences http://www.theserverside.com/news/1365158/An-Introduction-to-the-Drools-Project http://www.ibm.com/developerworks/java/library/j-drools/#when http://java.sun.com/developer/technicalArticles/J2SE/JavaRule.html http://en.wikipedia.org/wiki/Drools http://www.packtpub.com/article/guided-rules-with-the-jboss-brms-guvnor http://docs.jboss.org/drools/release/5.3.0.CR1/drools-guvnor-docs/html/ch11.html#d0e3131 http://docs.redhat.com/docs/en-US/JBoss_Enterprise_SOA_Platform/4.3/html/JBoss_Rules_Reference_Guide/chap-JBoss_Rules_Reference_Manual-The_Eclipse_based_Rule_IDE.html ","id":75,"tag":"rule-engine ; java ; introduction ; drools ; technology","title":"Rule Engine - Getting Started with Drools","type":"post","url":"https://thinkuldeep.com/post/rule-engine-getting-started/"},{"content":"We have faced multiple issues while compiling native C/C++ code and using it with JNI in our Java application\n How to write a JNI program. jni_md.h not found. Incompatible data types. On Windows, cygwin1.dll is required when running Java code on the native library. and more. Actually we were compiling the NLP Solver’s native library (IPOPT) on 32-bit Windows and on a 64-bit Linux machine.\nSolving the chaos! Here are the steps we followed to solve the above issues:\n We had prepared a POC by following the instructions to write a JNI program.\n Now to compile C code we need to install a C/C++ compiler for Windows/Linux. There are many compilers available for each OS. We tried the following.\n GNU g++ for Linux. GNU g++ for Windows on Cygwin. While compiling C/C++ Java native code we got the error “jni_md.h” not found. 
jni_md.h can be located in the JDK folder /include/win32, so you need to add the win32 folder to the include path of the C compilation.\n-Ic:/JDK5.0/include -Ic:/JDK5.0/include/win32 Similarly we can locate the file on Linux.\n After fixing the last error we got a lot of incompatible data type issues. These may be fixed by adding standard parameters to the compiler as follows:\n-O3 -fomit-frame-pointer -pipe -DNDEBUG -pedantic-errors -Wimplicit -Wparentheses -Wreturn-type -Wcast-qual -Wall -Wpointer-arith -Wwrite-strings -Wconversion -Wno-unknown-pragmas -Wl,--add-stdcall-alias | sed -e s\u0026#39;|-pedantic-errors||\u0026#39; Then on Windows we got a cygwin1.dll not found issue. Cygwin dependencies can be removed with the parameter -mno-cygwin.\n We fixed the compilation issues in the NLP Solver’s native library (IPOPT) by making the above changes in the makefile.\n","id":76,"tag":"JNI ; java ; jvm ; c++ ; native ; NLP Solver ; AI ; optimization ; data engineering ; technology","title":"Compiling Java Native C/C++ Code","type":"post","url":"https://thinkuldeep.com/post/java-jni-cpp/"},{"content":"In the previous article, we learned about ESB and deployed our first OSGi bundle on FuseESB.\nIn this article, we will learn how to build Apache CXF and Apache Camel components and deploy them on FuseESB.\nLet’s understand what these components are:\nApache Camel ™ Powerful open source integration framework based on known Enterprise Integration Patterns with powerful Bean Integration.\nCamel lets you create the Enterprise Integration Patterns to implement routing and mediation rules in either a Java based Domain Specific Language (or Fluent API), via Spring or Blueprint based Xml Configuration files or via the Scala DSL. This means you get smart completion of routing rules in your IDE whether in your Java, Scala or XML editor.\nApache CXF Apache CXF is an open source services framework. CXF helps you build and develop services using frontend programming APIs, like JAX-WS and JAX-RS. 
These services can speak a variety of protocols such as SOAP, XML/HTTP, RESTful HTTP, or CORBA and work over a variety of transports such as HTTP, JMS or JBI.\nOSGI Apache CXF packaging and deployment Create the CXF OSGi Bundle Create the Maven project using the servicemix-cxf-code-first-osgi-bundle archetype. Run the following command.\nmvn archetype:create -DarchetypeGroupId=org.apache.servicemix.tooling -DarchetypeArtifactId=servicemix-cxf-code-first-osgi-bundle -DarchetypeVersion=2011.02.1-fuse-00-08 -DgroupId=com.tk.example.cxf -DartifactId=cxf-example -Dversion=1.0 It will generate the maven project as folder “cxf-example” in the directory where you run the above command.\nNote : Make sure you have added the fusesource repository to maven, refer to http://fusesource.com/docs/esb/4.3.1/esb_deploy_osgi/Build-GenerateMaven.html#Build-GenerateMaven-AddRepos\n Import the “cxf-example” in Eclipse as a maven project\n Select File \u0026gt; Import \u0026gt; Existing Maven Projects Browse the folder “cxf-example” as Root Directory Finish Brief discussion about the main files:\nPerson.java - This is the main web service interface. 
The interface is annotated with @WebService to make it a JAX-WS web service.\npackage com.tk.example.cxf; import javax.jws.WebMethod; import javax.jws.WebParam; import javax.jws.WebService; import javax.xml.bind.annotation.XmlSeeAlso; import javax.xml.ws.RequestWrapper; import javax.xml.ws.ResponseWrapper; @WebService @XmlSeeAlso({com.tk.example.cxf.types.ObjectFactory.class}) public interface Person { @RequestWrapper(localName = \u0026#34;GetPerson\u0026#34;, className = \u0026#34;com.tk.example.cxf.types.GetPerson\u0026#34;) @ResponseWrapper(localName = \u0026#34;GetPersonResponse\u0026#34;, className = \u0026#34;com.tk.example.cxf.types.GetPersonResponse\u0026#34;) @WebMethod(operationName = \u0026#34;GetPerson\u0026#34;) public void getPerson( @WebParam(mode = WebParam.Mode.INOUT, name = \u0026#34;personId\u0026#34;) javax.xml.ws.Holder\u0026lt;java.lang.String\u0026gt; personId, @WebParam(mode = WebParam.Mode.OUT, name = \u0026#34;ssn\u0026#34;) javax.xml.ws.Holder\u0026lt;java.lang.String\u0026gt; ssn, @WebParam(mode = WebParam.Mode.OUT, name = \u0026#34;name\u0026#34;) javax.xml.ws.Holder\u0026lt;java.lang.String\u0026gt; name ) throws UnknownPersonFault; } PersonImpl.java – This is the implementation class of the Person interface.\nbeans.xml – Spring bean configuration that defines the service endpoint.\n\u0026lt;jaxws:endpoint id=\u0026#34;HTTPEndpoint\u0026#34; implementor=\u0026#34;com.tk.example.cxf.PersonImpl\u0026#34; address=\u0026#34;/PersonServiceCF\u0026#34;/\u0026gt; pom.xml - The maven build file to build the bundle as a jar file.\nClient.java - A client to test the web service.\n Run maven “clean install” on the attached project.\nmvn clean install OR Right click on pom.xml and select Run As \u0026gt; Maven clean and then again Run As \u0026gt; Maven install.\nIt will generate cxf-example-1.0.jar in the target folder.\n You have created the OSGi bundle successfully. 
Now deploy it in the next steps.\n Deploy the CXF OSGi Bundle in FuseESB Run the following command on the FuseESB console to install the bundle\n osgi:install -s file:\u0026lt;PROJECT_DIR\u0026gt;/target/cxf-example-1.0.jar osgi:install -s file:D:\cxf-camel-osgi\cxf-example\target\cxf-example-1.0.jar or just drop the cxf-example-1.0.jar in the deploy folder.\n Now access the WSDL at the following URL\n http://\u0026lt;hostname\u0026gt;:\u0026lt;port\u0026gt;/cxf/\u0026lt;WSName\u0026gt;?wsdl e.g. http://localhost:8181/cxf/PersonServiceCF?wsdl\n Run Client.java after correcting the URL and port.\n Invoking getPerson... getPerson._getPerson_personId=Guillaume getPerson._getPerson_ssn=000-000-0000 getPerson._getPerson_name=Guillaume Let’s try to build our own helloworld service. HelloWorldWS.java package com.tk.example.hellowrold; import javax.jws.WebService; /** * HelloWorld Web Service. * @author kuldeep */ @WebService public interface HelloWorldWS { /** * Method to say Hi * @param text * @return Hello text :) */ String sayHi(String text); } HelloWorldWSImpl.java package com.tk.example.hellowrold; import javax.jws.WebService; /** * HelloWorld Web Service implementation * @author kuldeep */ @WebService(endpointInterface = \u0026#34;com.tk.example.hellowrold.HelloWorldWS\u0026#34;) public class HelloWorldWSImpl implements HelloWorldWS { public String sayHi(String text){ System.out.println(\u0026#34;Inside say Hi\u0026#34;); return \u0026#34;Hello \u0026#34; + text + \u0026#34;:)\u0026#34;; } } Define the service endpoint in beans.xml \u0026lt;jaxws:endpoint id=\u0026#34;helloworld\u0026#34; implementor=\u0026#34;com.tk.example.hellowrold.HelloWorldWSImpl\u0026#34; address=\u0026#34;/HelloWorld\u0026#34;/\u0026gt; Build the project again and deploy it to FuseESB Open URL http://localhost:8181/cxf/HelloWorld?wsdl to see the WSDL Let’s create a test class to consume the HelloWorld web service package com.tk.example.hellowrold.test; import org.apache.cxf.jaxws.JaxWsProxyFactoryBean; import 
com.tk.example.hellowrold.HelloWorldWS; public class WSTest { public static void main(String[] args) { JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean(); factory.setServiceClass(HelloWorldWS.class); factory.setAddress(\u0026#34;http://localhost:8181/cxf/HelloWorld\u0026#34;); HelloWorldWS client = (HelloWorldWS)factory.create(); String response = client.sayHi(\u0026#34;User\u0026#34;); System.out.println(response); } } Run this class. It should print the following on the console\n Hello User:) OSGI Apache Camel packaging and deployment Create the Camel OSGi Bundle Create the Maven project using the servicemix-camel-osgi-bundle archetype. Run the following command.\nmvn archetype:create -DarchetypeGroupId=org.apache.servicemix.tooling -DarchetypeArtifactId=servicemix-camel-osgi-bundle -DarchetypeVersion=2011.02.1-fuse-00-08 -DgroupId=com.tk.example.camel -DartifactId=camel-example -Dversion=1.0 It will generate the maven project as folder “camel-example” in the directory where you run the above command.\nNote : Make sure you have added the fusesource repository to maven, refer to http://fusesource.com/docs/esb/4.3.1/esb_deploy_osgi/Build-GenerateMaven.html#Build-GenerateMaven-AddRepos\n Import the “camel-example” in Eclipse as a maven project as described in the last section. It should create the following structure.\n Brief discussion about the main files: MyTransform.java - This class acts as a log generator; it will log the message “\u0026gt;\u0026gt;\u0026gt;\u0026gt; set body : ...”\npackage com.tk.example.camel; import java.util.Date; import java.util.logging.Logger; public class MyTransform { private static final transient Logger LOGGER = Logger.getLogger(MyTransform.class.getName()); private boolean verbose = true; private String prefix = \u0026#34;MyTransform\u0026#34;; public Object transform(Object body) { String answer = prefix + \u0026#34; set body: \u0026#34; + new Date(); if (verbose) { System.out.println(\u0026#34;\u0026gt;\u0026gt;\u0026gt;\u0026gt; \u0026#34; + answer); } 
LOGGER.info(\u0026#34;\u0026gt;\u0026gt;\u0026gt;\u0026gt; \u0026#34; + answer); return answer; } public boolean isVerbose() { return verbose; } public void setVerbose(boolean verbose) { this.verbose = verbose; } public String getPrefix() { return prefix; } public void setPrefix(String prefix) { this.prefix = prefix; } } camel-context.xml – This file is a spring configuration file; it defines the Camel routing.\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;!-- Generated by Apache ServiceMix Archetype --\u0026gt; \u0026lt;beans xmlns=\u0026#34;http://www.springframework.org/schema/beans\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xmlns:osgi=\u0026#34;http://camel.apache.org/schema/osgi\u0026#34; xmlns:osgix=\u0026#34;http://www.springframework.org/schema/osgi-compendium\u0026#34; xmlns:ctx=\u0026#34;http://www.springframework.org/schema/context\u0026#34; xsi:schemaLocation=\u0026#34; http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd http://camel.apache.org/schema/osgi http://camel.apache.org/schema/osgi/camel-osgi.xsd http://www.springframework.org/schema/osgi-compendium http://www.springframework.org/schema/osgi-compendium/spring-osgi-compendium.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd \u0026#34;\u0026gt; \u0026lt;osgi:camelContext xmlns=\u0026#34;http://camel.apache.org/schema/spring\u0026#34;\u0026gt; \u0026lt;route\u0026gt; \u0026lt;from uri=\u0026#34;timer://myTimer?fixedRate=true\u0026amp;amp;period=2000\u0026#34;/\u0026gt; \u0026lt;bean ref=\u0026#34;myTransform\u0026#34; method=\u0026#34;transform\u0026#34;/\u0026gt; \u0026lt;to uri=\u0026#34;log:ExampleRouter\u0026#34;/\u0026gt; \u0026lt;/route\u0026gt; \u0026lt;/osgi:camelContext\u0026gt; 
\u0026lt;bean id=\u0026#34;myTransform\u0026#34; class=\u0026#34;com.tk.example.camel.MyTransform\u0026#34;\u0026gt; \u0026lt;property name=\u0026#34;prefix\u0026#34; value=\u0026#34;${prefix}\u0026#34;/\u0026gt; \u0026lt;/bean\u0026gt; \u0026lt;osgix:cm-properties id=\u0026#34;preProps\u0026#34; persistent-id=\u0026#34;com.tk.example.camel\u0026#34;\u0026gt; \u0026lt;prop key=\u0026#34;prefix\u0026#34;\u0026gt;MyTransform\u0026lt;/prop\u0026gt; \u0026lt;/osgix:cm-properties\u0026gt; \u0026lt;ctx:property-placeholder properties-ref=\u0026#34;preProps\u0026#34; /\u0026gt; \u0026lt;/beans\u0026gt; This creates a route between the “timer” and “log” components. Endpoint URIs are defined as follows: scheme:contextPath[?queryOptions] where “scheme” identifies the protocol/component, “contextPath” is the detail that will be interpreted by the given protocol/component, and “queryOptions” are the parameters. The list of components can be found at http://fusesource.com/docs/esb/4.4.1/camel_component_ref/Intro-ComponentsList.html\nSo in the above example the timer producer generates a message via myTransform.transform() and propagates it to the logger.\npom.xml - The maven build file to build the bundle as a jar file.\n Run maven “clean install” on the attached project.\nmvn clean install OR Right click on pom.xml and select Run As \u0026gt; Maven clean and then again Run As \u0026gt; Maven install.\nIt will generate “camel-example-1.0.jar” in the “target” folder.\n You have created the Camel OSGi bundle successfully. Now deploy it in the next steps.\n Deploy the Camel OSGi Bundle in FuseESB This example uses the camel-core feature (Ref to http://fusesource.com/docs/esb/4.4.1/camel_component_ref/Intro-ComponentsList.html ) for the timer and log components. 
Verify that camel-core is installed on FuseESB.\n Run the following command on the Fuse ESB console.\n$ features:list | grep camel-core [installed] [2.8.0-fuse-01-13] camel-core camel-2.8.0-fuse-01-13 It should list a row on the console with status installed, something like the above.\nIf it does not, then install camel-core by running the following command\n$ features:install camel-core Run the following command on the FuseESB console to install the bundle\n osgi:install -s file:\u0026lt;PROJECT_DIR\u0026gt;/target/camel-example-1.0.jar For example\nosgi:install -s file:D:/cxf-camel-osgi/camel-example/target/camel-example-1.0.jar or just drop the camel-example-1.0.jar in the deploy folder.\n It will start printing logs on the console as well as in the log file\n\u0026gt;\u0026gt;\u0026gt;\u0026gt; MyTransform set body: Thu Jan 19 14:02:28 IST 2012 \u0026gt;\u0026gt;\u0026gt;\u0026gt; MyTransform set body: Thu Jan 19 14:02:30 IST 2012 \u0026gt;\u0026gt;\u0026gt;\u0026gt; MyTransform set body: Thu Jan 19 14:02:32 IST 2012 \u0026gt;\u0026gt;\u0026gt;\u0026gt; MyTransform set body: Thu Jan 19 14:02:34 IST 2012 \u0026gt;\u0026gt;\u0026gt;\u0026gt; MyTransform set body: Thu Jan 19 14:02:36 IST 2012 You can locate the log for the “ExampleRouter” category in the log file.\n Exercise 1 – Write messages to files. Let’s try to add another route to the above example which will direct the log to a file. \u0026lt;route\u0026gt; \u0026lt;from uri=\u0026#34;timer://myTimer?fixedRate=true\u0026amp;amp;period=2000\u0026#34;/\u0026gt; \u0026lt;bean ref=\u0026#34;myTransform\u0026#34; method=\u0026#34;transform\u0026#34;/\u0026gt; \u0026lt;to uri=\u0026#34;file:C:/timerLog\u0026#34;/\u0026gt; \u0026lt;/route\u0026gt; Build and install the bundle. It will write each message to a file in the given directory path. 
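The Spring XML route from Exercise 1 can equally be written in Camel's Java DSL. This is a minimal route-configuration sketch, assuming camel-core is on the classpath and reusing the MyTransform bean from earlier; the class name TimerToFileRoute is illustrative:

```java
import org.apache.camel.builder.RouteBuilder;

public class TimerToFileRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Same route as the XML version: fire every 2 seconds,
        // transform the message, then write it to a file.
        from("timer://myTimer?fixedRate=true&period=2000")
            .bean(MyTransform.class, "transform")
            .to("file:C:/timerLog");
    }
}
```

Whether you use the XML or the Java DSL is largely a team convention; both produce the same route definition at runtime.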
Exercise 2 – Inbox/Outbox Let’s write a route which picks files from one folder and puts them in another folder.\n Add the following route\n\u0026lt;route\u0026gt; \u0026lt;from uri=\u0026#34;file:C:/inbox\u0026#34;/\u0026gt; \u0026lt;to uri=\u0026#34;file:C:/outbox\u0026#34;/\u0026gt; \u0026lt;/route\u0026gt; Build and install the bundle Now try to put any file in the inbox folder, c:/inbox, it will be immediately moved to the c:/outbox folder.\n Exercise 3 – Filter Filter the files based on extension and put them in respective folders.\n Add the following route\n\u0026lt;route\u0026gt; \u0026lt;from uri=\u0026#34;file:C:/inbox\u0026#34;/\u0026gt; \u0026lt;choice\u0026gt; \u0026lt;when\u0026gt; \u0026lt;simple\u0026gt;${file:name.ext} == \u0026#39;png\u0026#39; or ${file:name.ext} == \u0026#39;bmp\u0026#39; or ${file:name.ext} == \u0026#39;gif\u0026#39;\u0026lt;/simple\u0026gt; \u0026lt;to uri=\u0026#34;file:C:/outbox/images\u0026#34;/\u0026gt; \u0026lt;/when\u0026gt; \u0026lt;when\u0026gt; \u0026lt;simple\u0026gt;${file:name.ext} == \u0026#39;txt\u0026#39;\u0026lt;/simple\u0026gt; \u0026lt;to uri=\u0026#34;file:C:/outbox/text\u0026#34;/\u0026gt; \u0026lt;/when\u0026gt; \u0026lt;when\u0026gt; \u0026lt;simple\u0026gt;${file:name.ext} == \u0026#39;doc\u0026#39; or ${file:name.ext} == \u0026#39;docx\u0026#39;\u0026lt;/simple\u0026gt; \u0026lt;to uri=\u0026#34;file:C:/outbox/office\u0026#34;/\u0026gt; \u0026lt;/when\u0026gt; \u0026lt;otherwise\u0026gt; \u0026lt;to uri=\u0026#34;file:C:/outbox/other\u0026#34;/\u0026gt; \u0026lt;/otherwise\u0026gt; \u0026lt;/choice\u0026gt; \u0026lt;/route\u0026gt; Build and install the bundle\n Now try to put any file in the inbox folder, c:/inbox, it will be immediately moved to the respective folder.\n OSGI Apache Camel and CXF packaging and deployment Integrate Camel and CXF Use the same Eclipse project from the example POC - OSGI Apache CXF packaging and deployment\n Copy the project “cxf-example” and paste it as “camel-cxf-example”\n Add the camel dependency in 
pom.xml\n\u0026lt;properties\u0026gt; \u0026lt;camel.version\u0026gt;2.8.0-fuse-00-08\u0026lt;/camel.version\u0026gt; \u0026lt;/properties\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.camel\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;camel-core\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;${camel.version}\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; Edit beans.xml - Use following schema definition\n\u0026lt;beans xmlns=\u0026#34;http://www.springframework.org/schema/beans\u0026#34; xmlns:aop=\u0026#34;http://www.springframework.org/schema/aop\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xmlns:camel=\u0026#34;http://camel.apache.org/schema/spring\u0026#34; xmlns:util=\u0026#34;http://www.springframework.org/schema/util\u0026#34; xmlns:tx=\u0026#34;http://www.springframework.org/schema/tx\u0026#34; xmlns:amq=\u0026#34;http://activemq.apache.org/schema/core\u0026#34; xmlns:cxf=\u0026#34;http://camel.apache.org/schema/cxf\u0026#34; xmlns:jaxws=\u0026#34;http://cxf.apache.org/jaxws\u0026#34; xsi:schemaLocation=\u0026#34;http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring-2.1.0.xsd http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-2.5.xsd http://camel.apache.org/schema/cxf http://activemq.apache.org/camel/schema/cxf/camel-cxf.xsd http://cxf.apache.org/jaxws http://cxf.apache.org/schemas/jaxws.xsd\u0026#34;\u0026gt; Define the service end point as camel-cxf end point instead of jax-ws as follows Change from\n\u0026lt;jaxws:endpoint id=\u0026#34;helloworld\u0026#34; implementor=\u0026#34;com.tk.example.hellowrold.HelloWorldWSImpl\u0026#34; address=\u0026#34;/HelloWorld\u0026#34;/\u0026gt; to\n\u0026lt;cxf:cxfEndpoint id=\u0026#34;helloworld\u0026#34; 
address=\u0026#34;/HelloWorld\u0026#34; serviceClass=\u0026#34;com.tk.example.hellowrold.HelloWorldWS\u0026#34; /\u0026gt; Let’s define a camel route which will direct every web service call to a file.\n\u0026lt;camel:camelContext xmlns=\u0026#34;http://camel.apache.org/schema/spring\u0026#34;\u0026gt; \u0026lt;camel:route\u0026gt; \u0026lt;camel:from uri=\u0026#34;cxf:bean:helloworld?synchronous=true\u0026#34; /\u0026gt; \u0026lt;camel:to uri=\u0026#34;file:C:/outbox/messages\u0026#34;/\u0026gt; \u0026lt;/camel:route\u0026gt; \u0026lt;/camel:camelContext\u0026gt; Run the maven build and deploy the jar in FuseESB (I assume you have learned the deployment by now, so the steps are not repeated)\n It will write whatever message you send to the hello world service to the file and return the same\n This completes the article. You have learned the basics of CXF and Camel and of making them work together. Try more complex examples\n","id":77,"tag":"ESB ; java ; Apache Service Mix ; Apache CFX ; FuseESB ; XML ; Apache Camel ; enterprise ; integration ; technology","title":"Using Apache CFX and Apache Camel in ESB","type":"post","url":"https://thinkuldeep.com/post/esb-introduction-part2/"},{"content":"Calendars are used to store information for various events. A calendar viewer displays the month-date view and for each day displays the events/reminders information.\nInvites are events sent in the form of email. Once the invite for an event is sent to a user via email, that event will be stored in the calendar view and the same can be seen in the day information of the calendar view.\nCalendars Calendar data can be stored in files; there are some standard formats for storing calendars. iCalendar is a computer file format which allows Internet users to send meeting requests and tasks to other Internet users, via email, or by sharing files with an extension of .ics. 
Recipients of the iCalendar data file (with supporting software, such as an email client or calendar application) can respond to the sender easily or counter-propose another meeting date/time.\nCore object iCalendar's design was based on the previous file format vCalendar created by the Internet Mail Consortium (IMC).\nBEGIN:VCALENDAR VERSION:2.0 PRODID:-//hacksw/handcal//NONSGML v1.0//EN BEGIN:VEVENT UID:uid1@example.com DTSTAMP:19970714T170000Z ORGANIZER;CN=John Doe:MAILTO:john.doe@example.com DTSTART:19970714T170000Z DTEND:19970715T035959Z SUMMARY:Bastille Day Party END:VEVENT END:VCALENDAR Other tags Events (VEVENT), To-do (VTODO), VJOURNAL, VFREEBUSY, etc.\nSupported Products iCalendar is used and supported by a large number of products, including Google Calendar, Apple iCal, GoDaddy Online Group Calendar, IBM Lotus Notes, Yahoo! Calendar, Evolution (software), KeepandShare, Lightning extension for Mozilla Thunderbird and SeaMonkey, and by Microsoft Outlook.\nJava Solution We can export event information to a .ics file as per the iCalendar format, using Java File IO APIs. Similarly, invites/meeting requests can be sent as emails using the Java Mail API. 
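The snippets that follow build DTSTAMP/DTSTART/DTEND strings by hand, so getting the date format right matters. Here is a small self-contained sketch of a UTC formatter for iCalendar date-times; the helper class name is ours, not part of the article's code:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class ICalDates {
    // Formats a Date in the UTC form iCalendar expects, e.g. 19970714T170000Z.
    public static String toICalUtc(Date date) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMdd'T'HHmmss'Z'");
        // The trailing 'Z' means UTC, so the formatter must use UTC too.
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        return fmt.format(date);
    }

    public static void main(String[] args) {
        System.out.println(toICalUtc(new Date(0L))); // epoch start in iCal form
    }
}
```

Note that a date-time ending in Z is defined as UTC; a format string like yyyyMMdd'T'HHmm'00' applied without setting the time zone renders in the JVM's default zone, which can shift meeting times for recipients in other zones.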
Here is flow diagram.\nImplementation steps for Sending Email Invites Register calendar mime type in java\n// register the text/calendar mime type MimetypesFileTypeMap mimetypes = (MimetypesFileTypeMap)MimetypesFileTypeMap.getDefaultFileTypeMap(); mimetypes.addMimeTypes(\u0026#34;text/calendar ics ICS\u0026#34;); // register the handling of text/calendar mime type MailcapCommandMap mailcap = (MailcapCommandMap) MailcapCommandMap.getDefaultCommandMap(); mailcap.addMailcap(\u0026#34;text/calendar;;x-java-content-handler=com.sun.mail.handlers.text_plain\u0026#34;); Create message and assign recipients and sender.\n// create a message MimeMessage msg = new MimeMessage(session); // set the from and to address InternetAddress addressFrom = new InternetAddress(from); msg.setFrom(addressFrom); InternetAddress[] addressTo = new InternetAddress[recipients.length]; for (int i = 0; i \u0026lt; recipients.length; i++) { addressTo[i] = new InternetAddress(recipients[i]); } msg.setRecipients(Message.RecipientType.TO, addressTo); Create an alternative body part.\n// Create an alternative Multipart Multipart multipart = new MimeMultipart(\u0026#34;alternative\u0026#34;); Set body of invite.\n// part 1, html text MimeBodyPart descriptionPart = new MimeBodyPart(); String content = \u0026#34;\u0026lt;font size=\u0026#39;\\\u0026#39;2\\\u0026#39;\u0026#39;\u0026gt;\u0026#34; + emailMsgTxt + \u0026#34;\u0026lt;/font\u0026gt;\u0026#34;; descriptionPart.setContent(content, \u0026#34;text/html; charset=utf-8\u0026#34;); multipart.addBodyPart(descriptionPart); Add Calendar part\n// Add part two, the calendar BodyPart calendarPart = buildCalendarPart(); multipart.addBodyPart(calendarPart); The method calendar part can be generated in following ways.\n Using Plain Text SimpleDateFormat iCalendarDateFormat = new SimpleDateFormat(\u0026#34;yyyyMMdd\u0026#39;T\u0026#39;HHmm\u0026#39;00\u0026#39;\u0026#34;); BodyPart calendarPart = new MimeBodyPart(); String calendarContent = 
\u0026#34;BEGIN:VCALENDAR\\n\u0026#34; + \u0026#34;METHOD:REQUEST\\n\u0026#34; + \u0026#34;PRODID: BCP - Meeting\\n\u0026#34; + \u0026#34;VERSION:2.0\\n\u0026#34; + \u0026#34;BEGIN:VEVENT\\n\u0026#34; + \u0026#34;DTSTAMP:\u0026#34; + iCalendarDateFormat.format(currentdate) + \u0026#34;\\n\u0026#34; + \u0026#34;DTSTART:\u0026#34; + iCalendarDateFormat.format(startdatetime) + \u0026#34;\\n\u0026#34; + \u0026#34;DTEND:\u0026#34; + iCalendarDateFormat.format(enddatetime) + \u0026#34;\\n\u0026#34; + \u0026#34;SUMMARY:Test 123\\n\u0026#34; + \u0026#34;UID:324\\n\u0026#34; + \u0026#34;ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE:MAILTO:kuldeep@nagarro.com\\n\u0026#34; + \u0026#34;ORGANIZER:MAILTO:kuldeep@nagarro.com\\n\u0026#34; + \u0026#34;LOCATION:gurgaon\\n\u0026#34; + \u0026#34;DESCRIPTION:learn some stuff\\n\u0026#34; + \u0026#34;SEQUENCE:0\\n\u0026#34; + \u0026#34;PRIORITY:5\\n\u0026#34; + \u0026#34;CLASS:PUBLIC\\n\u0026#34; + \u0026#34;STATUS:CONFIRMED\\n\u0026#34; + \u0026#34;TRANSP:OPAQUE\\n\u0026#34; + \u0026#34;BEGIN:VALARM\\n\u0026#34; + \u0026#34;ACTION:DISPLAY\\n\u0026#34; + \u0026#34;DESCRIPTION:REMINDER\\n\u0026#34; + \u0026#34;TRIGGER;RELATED=START:-PT00H15M00S\\n\u0026#34; + \u0026#34;END:VALARM\\n\u0026#34; + \u0026#34;END:VEVENT\\n\u0026#34; + \u0026#34;END:VCALENDAR\u0026#34;; calendarPart.addHeader(\u0026#34;Content-Class\u0026#34;,\u0026#34;urn:content-classes:calendarmessage\u0026#34;); calendarPart.setContent(calendarContent, \u0026#34;text/calendar;method=CANCEL\u0026#34;); Using ICal4J Library ( Ref to Next section for more detail on ICal4J) // Create a TimeZone TimeZoneRegistry registry = TimeZoneRegistryFactory.getInstance().createRegistry(); TimeZone timezone = registry.getTimeZone(\u0026#34;America/Mexico_City\u0026#34;); java.util.Calendar startDate = new GregorianCalendar(); startDate.setTimeZone(timezone); startDate.set(java.util.Calendar.MONTH, java.util.Calendar.JANUARY); startDate.set(java.util.Calendar.DAY_OF_MONTH, 
18); startDate.set(java.util.Calendar.YEAR, 2012); startDate.set(java.util.Calendar.HOUR_OF_DAY, 9); startDate.set(java.util.Calendar.MINUTE, 0); startDate.set(java.util.Calendar.SECOND, 0); // End Date is on: April 1, 2008, 13:00 java.util.Calendar endDate = new GregorianCalendar(); endDate.setTimeZone(timezone); endDate.set(java.util.Calendar.MONTH, java.util.Calendar.JANUARY); endDate.set(java.util.Calendar.DAY_OF_MONTH, 18); endDate.set(java.util.Calendar.YEAR, 2012); endDate.set(java.util.Calendar.HOUR_OF_DAY, 13); endDate.set(java.util.Calendar.MINUTE, 0);\tendDate.set(java.util.Calendar.SECOND, 0); // Create the event String eventName = \u0026#34;Progress Meeting\u0026#34;; DateTime start = new DateTime(startDate.getTime()); DateTime end = new DateTime(endDate.getTime()); VEvent meeting = new VEvent(start, end, eventName); // add attendees.. Attendee dev1 = new Attendee(URI.create(\u0026#34;mailto:dev1@mycompany.com\u0026#34;)); dev1.getParameters().add(Role.REQ_PARTICIPANT); dev1.getParameters().add(new Cn(\u0026#34;Developer 1\u0026#34;)); meeting.getProperties().add(dev1); Attendee dev2 = new Attendee(URI.create(\u0026#34;mailto:dev2@mycompany.com\u0026#34;)); dev2.getParameters().add(Role.OPT_PARTICIPANT); dev2.getParameters().add(new Cn(\u0026#34;Developer 2\u0026#34;)); meeting.getProperties().add(dev2); VAlarm reminder = new VAlarm(new Dur(0, -1, 0, 0)); // repeat reminder four (4) more times every fifteen (15) minutes.. reminder.getProperties().add(new Repeat(4)); reminder.getProperties().add(new Duration(new Dur(0, 0, 15, 0))); // display a message.. 
reminder.getProperties().add(Action.DISPLAY); reminder.getProperties().add(new Description(\u0026#34;Progress Meeting\u0026#34;)); // trigger fifteen (15) minutes before the event, as in the plain-string example above.. reminder.getProperties().add(new Trigger(new Dur(0, 0, -15, 0))); Transp transp = new Transp(\u0026#34;OPAQUE\u0026#34;); meeting.getProperties().add(transp); Location loc = new Location(\u0026#34;Gurgaon\u0026#34;); meeting.getProperties().add(loc); meeting.getAlarms().add(reminder); // Create a calendar net.fortuna.ical4j.model.Calendar icsCalendar = new net.fortuna.ical4j.model.Calendar(); icsCalendar.getProperties().add(new ProdId(\u0026#34;-//Events Calendar//iCal4j 1.0//EN\u0026#34;)); icsCalendar.getProperties().add(new Method(\u0026#34;REQUEST\u0026#34;)); icsCalendar.getProperties().add(new Sequence(0)); icsCalendar.getProperties().add(new Priority(5)); icsCalendar.getProperties().add(new Status(\u0026#34;CONFIRMED\u0026#34;)); icsCalendar.getProperties().add(new Uid(\u0026#34;324\u0026#34;)); // Add the event and print icsCalendar.getComponents().add(meeting); String calendarContent = icsCalendar.toString(); calendarPart.addHeader(\u0026#34;Content-Class\u0026#34;,\u0026#34;urn:content-classes:calendarmessage\u0026#34;); calendarPart.setContent(calendarContent, \u0026#34;text/calendar;method=REQUEST\u0026#34;); Put the above multipart content in the message\n// Put the multipart in message msg.setContent(multipart); And finally, send the above MIME message as an email\nTransport transport = session.getTransport(\u0026#34;smtp\u0026#34;); transport.connect(); transport.sendMessage(msg, msg.getAllRecipients()); transport.close(); Follow the references for more detail on creating an email session and using the Java Mail API.\nMore on the ICal4J library iCal4j is used for modifying existing iCalendar data or creating new iCalendar data from scratch.\n Parsing an iCalendar file - Support for parsing and building an iCalendar object model is provided by the CalendarParser and ContentHandler interfaces.
You can provide your own implementations of either of these, or just use the default implementations as provided by the CalendarBuilder class\nFileInputStream fin = new FileInputStream(\u0026#34;mycalendar.ics\u0026#34;); CalendarBuilder builder = new CalendarBuilder(); Calendar calendar = builder.build(fin); Creating a new Calendar - Creating a new calendar is quite straight-forward, in that all you need to remember is that a Calendar contains a list of Properties and Components. A calendar must contain certain standard properties and at least one component to be valid. You can verify that a calendar is valid via the method Calendar.validate(). All iCal4j objects also override Object.toString(), so you can verify the resulting calendar data via this mechanism.\nCalendar calendar = new Calendar(); calendar.getProperties().add(new ProdId(\u0026#34;-//Ben Fortuna//iCal4j 1.0//EN\u0026#34;)); calendar.getProperties().add(Version.VERSION_2_0); calendar.getProperties().add(CalScale.GREGORIAN); // Add events, etc.. Output BEGIN:VCALENDAR PRODID:-//Ben Fortuna//iCal4j 1.0//EN VERSION:2.0 CALSCALE:GREGORIAN END:VCALENDAR Creating an Event - One of the more commonly used components is a VEvent. To create a VEvent you can either set the date value and properties manually or you can make use of the convenience constructors to initialise standard values.\njava.util.Calendar cal = java.util.Calendar.getInstance(); cal.set(java.util.Calendar.MONTH, java.util.Calendar.DECEMBER); cal.set(java.util.Calendar.DAY_OF_MONTH, 25); VEvent christmas = new VEvent(new Date(cal.getTime()), \u0026#34;Christmas Day\u0026#34;); // initialise as an all-day event.. 
christmas.getProperties().getProperty(Property.DTSTART).getParameters().add(Value.DATE); Output: BEGIN:VEVENT DTSTAMP:20050222T044240Z DTSTART;VALUE=DATE:20051225 SUMMARY:Christmas Day END:VEVENT Saving an iCalendar File - When saving an iCalendar file, iCal4j will automatically validate your calendar object model to ensure it complies with the RFC 2445 specification. If you would prefer not to validate your calendar data, you can disable the validation by calling CalendarOutputter.setValidating(false).\nFileOutputStream fout = new FileOutputStream(\u0026#34;mycalendar.ics\u0026#34;); CalendarOutputter outputter = new CalendarOutputter(); outputter.output(calendar, fout); References http://en.wikipedia.org/wiki/ICalendar http://en.wikipedia.org/wiki/VCal http://www.imc.org/pdi/vcal-10.txt http://java.sun.com/developer/onlineTraining/JavaMail/contents.html http://wiki.modularity.net.au/ical4j/index.php?title=Main_Page ","id":78,"tag":"microsoft ; java ; exchange ; integration ; mail ; enterprise ; technology","title":"Managing Calendar and Invites in Java","type":"post","url":"https://thinkuldeep.com/post/java-calendar-and-invite/"},{"content":"This article starts from the basic terms of the ESB world and then provides details of FuseESB with various examples.\nIt contains a step-by-step guide to install FuseESB, and to develop and deploy an OSGi bundle on it.\nLet’s understand the terminology of the ESB world:\nESB An Enterprise Service Bus (ESB) is a software architecture model used for designing and implementing the interaction and communication between mutually interacting software applications in Service Oriented Architecture.
As a software architecture model for distributed computing, it is a specialty variant of the more general client-server model, and it promotes a strictly asynchronous, message-oriented design for communication and interaction between applications.\nOSGi The Open Services Gateway initiative (OSGi) framework is a module system and service platform for the Java programming language that implements a complete and dynamic component model.\nApplications or components (coming in the form of bundles for deployment) can be remotely installed, started, stopped, updated and uninstalled without requiring a reboot; management of Java packages/classes is specified in great detail. Application life cycle management (start, stop, install, etc.) is done via APIs that allow for remote downloading of management policies. The service registry allows bundles to detect the addition of new services, or the removal of services, and adapt accordingly.\nMANIFEST.MF file with OSGi Headers:\n Bundle-Name: Defines a human-readable name for this bundle; simply assigns a short name to the bundle. Bundle-SymbolicName: The only required header, this entry specifies a unique identifier for a bundle, based on the reverse domain name convention (also used by Java packages). Bundle-Description: A description of the bundle\u0026rsquo;s functionality. Bundle-ManifestVersion: This little-known header indicates the OSGi specification to use for reading this bundle. Bundle-Version: Designates a version number for the bundle. Bundle-ClassPath: The internal class path of the bundle. Bundle-Activator: Indicates the class to be invoked once the bundle is activated. Export-Package: Expresses which Java packages contained in the bundle will be made available to the outside world. Import-Package: Indicates which Java packages will be required from the outside world in order to fulfill the dependencies needed by the bundle.
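These OSGi headers are ordinary JAR manifest attributes, so they can be inspected with the standard java.util.jar API. A minimal sketch (the class name and header values below are illustrative, not from the article):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.jar.Manifest;

public class OsgiHeaderDemo {

    // Read one OSGi header (e.g. Bundle-SymbolicName) out of raw manifest text.
    public static String header(String manifestText, String name) {
        try {
            Manifest mf = new Manifest(
                    new ByteArrayInputStream(manifestText.getBytes(StandardCharsets.UTF_8)));
            return mf.getMainAttributes().getValue(name);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // Hypothetical manifest content; a real bundle would list more headers.
        String text = "Manifest-Version: 1.0\r\n"
                + "Bundle-SymbolicName: com.example.demo\r\n"
                + "Bundle-Version: 1.0.0\r\n"
                + "\r\n";
        System.out.println(header(text, "Bundle-SymbolicName")); // prints com.example.demo
        System.out.println(header(text, "Bundle-Version"));      // prints 1.0.0
    }
}
```

When a bundle jar is on disk, the same attributes can be read via java.util.jar.JarFile#getManifest().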
Apache KARAF Apache Karaf is a small OSGi-based runtime which provides a lightweight container onto which various components and applications can be deployed. Key features of Karaf: hot deployment, dynamic configuration, logging system, provisioning, remote access, security framework, and managing instances.\nApache ServiceMix Apache ServiceMix is an enterprise-class open-source distributed enterprise service bus (ESB) and service-oriented architecture (SOA) toolkit, released under the Apache License.\nServiceMix 4 fully supports OSGi and the Java Business Integration (JBI) specification (JSR 208). ServiceMix is lightweight and easily embeddable, has integrated Spring support and can be run at the edge of the network (inside a client or server), as a standalone ESB provider or as a service within another ESB. You can use ServiceMix in Java SE or a Java EE application server.\nServiceMix uses ActiveMQ to provide remoting, clustering, reliability and distributed failover. The basic frameworks used by ServiceMix are Spring and XBean.\nServiceMix is often used with Apache ActiveMQ, Apache Camel and Apache CXF in SOA infrastructure projects.\nEnterprise subscriptions for ServiceMix are available from independent vendors, including FuseSource Corp. The FuseSource team offers an enterprise version of ServiceMix called Fuse ESB that is tested, certified and supported.\nServiceMix Runtime At the core of ServiceMix 4.x is the small and lightweight OSGi-based runtime. This provides a small, lightweight container onto which various bundles can be deployed.\nThe directory layout of the ServiceMix Runtime is as follows:\n servicemix/ bin/ servicemix.sh # shell scripts to boot up SMX servicemix.bat wrapper/ # Java Service Wrapper binaries system/ # the system defined OSGi bundles (i.e.
the stuff shipped in the distro) which generally shouldn't be edited by users deploy/ # where bundles should be put to be hot-deployed - in jar or expanded format (the latter is good to let folks edit spring.xml files etc) etc/ # some config files for describing which OSGi container to use; also a text file to be able to describe the maven repos \u0026amp; mvn bundles to deploy data/ # directory used to store transient data (logs, activemq journal, transaction log, embedded DB, generated bundles) logs/ generated-bundles/ activemq/ Apache Service Mix NMR Apache ServiceMix NMR is the core Bus of Apache ServiceMix 4. It is built as a set of OSGi bundles and meant to be deployed onto any OSGi runtime, but mainly built on top of Apache ServiceMix Kernel. The NMR project is also where the Java Business Integration 1.0 (JBI) specification is implemented.\nWorking with Fuse ESB Fuse ESB is certified, productized and fully supported by the people who wrote the code. Fuse ESB has a pluggable architecture that allows organizations to use their preferred service solutions in their SOA. Any standard JBI or OSGi-compliant service engine or binding component – including BPEL, XSLT or JMX engines – may be deployed to a Fuse ESB container, and Fuse ESB components may be deployed to other ESBs.\nInstall and Run FuseESB Windows\n Unzip the downloaded zip(from http://fusesource.com/products/enterprise-servicemix/) at C:\\ApacheServiceMix, let’s call it FUSE_ESB_HOME. Open a command line console and change directory to FUSE_ESB_HOME. 
To start the server, run the following command in Windows servicemix.bat Linux\n Download Fuse ESB binaries for Linux from http://fusesource.com/product_download_now/fuse-esb-apache-servicemix/4-4-1-fuse-01-13/unix\n Extract the archive “apache-servicemix-4.4.1-fuse-01-13.tar.gz” to the installation directory, let’s say it is ‘/usr/local/FuseESB’\nGo to the installation directory ‘/usr/local/FuseESB’ and extract the content:\n$ cd /usr/local/FuseESB $ tar -zxvf /root/apache-servicemix-4.4.1-fuse-01-13.tar.gz We will refer to the installation directory “/usr/local/FuseESB/apache-servicemix-4.4.1-fuse-01-13” as FUSE_ESB_HOME in this document.\n Run Fuse ESB on Linux as follows:\n Go to FUSE_ESB_HOME/bin Run the following command to start the FuseESB console in the background\n./start Use the following command to re-connect to the FuseESB server as a client.\n./client -a 8101 -h localhost -u smx -p smx OSGi WAR packaging and deployment Create an OSGi WAR using Eclipse Create Dynamic Web Project\n Open Eclipse Galileo for a Workspace (say D:/Workspaces/ESBRnD) Go to File \u0026gt; New \u0026gt; Project \u0026gt; Web \u0026gt; Dynamic Web Project Name it, say “ESB_Test” Create a JSP file\n Right click on the Project and Select New \u0026gt; JSP Name it, say “index.jsp” Put some content in the JSP file.
\u0026lt;%@ page language=\u0026#34;java\u0026#34; contentType=\u0026#34;text/html\u0026#34; %\u0026gt; \u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html lang=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;meta charset=\u0026#34;UTF-8\u0026#34;\u0026gt; \u0026lt;title\u0026gt;This is the first ESB test page.\u0026lt;/title\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;h1\u0026gt;This is the first Apache ServiceMix project\u0026lt;/h1\u0026gt; \u0026lt;h2\u0026gt;Welcome to the world of ESB and OSGi packaging\u0026lt;/h2\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; Create a Bundle Activator\n Add the required OSGi jars to the class path: right click on your project and choose Properties \u0026gt; Java Build Path \u0026gt; Add External JARs Choose the file org.eclipse.osgi_*.jar from the Eclipse plugin directory Now create a class for the Bundle Activator: right click on the Project and select New \u0026gt; Class Name the class and package, say MyActivator under package com.tk.bundle, implement it from org.osgi.framework.BundleActivator, and click Finish Provide implementations for the unimplemented methods package com.tk.bundle; import org.osgi.framework.BundleActivator; import org.osgi.framework.BundleContext; public class MyActivator implements BundleActivator { @Override public void start(BundleContext ctx) throws Exception { System.out.println(\u0026#34;MyActivator.start()\u0026#34;); } @Override public void stop(BundleContext ctx) throws Exception { System.out.println(\u0026#34;MyActivator.stop()\u0026#34;); } } Update the Manifest file as per the OSGi bundle - provide all bundle properties as described in the OSGi section.
/WebContent/META-INF/MANIFEST.MF\nManifest-Version: 1.0 Bundle-Name: ESB_Test Bundle-SymbolicName: ESB_Test Bundle-Description: This is the first OSGi war Bundle-ManifestVersion: 2 Bundle-Version: 1.0.0 Bundle-ClassPath: .,WEB-INF/classes Bundle-Activator: com.tk.bundle.MyActivator Import-Package: org.osgi.framework;version=\u0026#34;1.3.0\u0026#34; Create WAR\n Export the project as a WAR: right click on the project and select Export \u0026gt; WAR Name the war file, say “ESB_Test.war” Create an OSGi WAR using Maven Another way is to create the WAR using Maven.\n Create Maven Web Project\n Open Eclipse Galileo for a Workspace (say D:/Workspaces/ESBRnD) Go to File \u0026gt; New \u0026gt; Project \u0026gt; Web \u0026gt; Maven Project Select “Create Simple Project”, provide the data and click Finish Create a JSP file, as described in the last section, in the ‘webapp’ folder.\n Create a web.xml in the ‘webapp’ folder, copying the defaults generated in the last section.\n\u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;web-app id=\u0026#34;WebApp_ID\u0026#34; version=\u0026#34;2.4\u0026#34; xmlns=\u0026#34;http://java.sun.com/xml/ns/j2ee\u0026#34; xmlns:xsi=\u0026#34;http://www.w3.org/2001/XMLSchema-instance\u0026#34; xsi:schemaLocation=\u0026#34;http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd\u0026#34;\u0026gt; \u0026lt;display-name\u0026gt;ESB_Test\u0026lt;/display-name\u0026gt; \u0026lt;welcome-file-list\u0026gt; \u0026lt;welcome-file\u0026gt;index.html\u0026lt;/welcome-file\u0026gt; \u0026lt;welcome-file\u0026gt;index.htm\u0026lt;/welcome-file\u0026gt; \u0026lt;welcome-file\u0026gt;index.jsp\u0026lt;/welcome-file\u0026gt; \u0026lt;welcome-file\u0026gt;default.html\u0026lt;/welcome-file\u0026gt; \u0026lt;welcome-file\u0026gt;default.htm\u0026lt;/welcome-file\u0026gt; \u0026lt;welcome-file\u0026gt;default.jsp\u0026lt;/welcome-file\u0026gt; \u0026lt;/welcome-file-list\u0026gt; \u0026lt;/web-app\u0026gt; Now
update pom.xml for following - Add dependency for osgi.core\n\u0026lt;dependencies\u0026gt; \u0026lt;dependency\u0026gt; \u0026lt;groupId\u0026gt;org.apache.felix\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;org.osgi.core\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1.0.0\u0026lt;/version\u0026gt; \u0026lt;/dependency\u0026gt; \u0026lt;/dependencies\u0026gt; Add Build configuration for war and OSGi Bundle.\n\u0026lt;build\u0026gt; \u0026lt;defaultGoal\u0026gt;install\u0026lt;/defaultGoal\u0026gt; \u0026lt;plugins\u0026gt; \u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;org.apache.maven.plugins\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;maven-compiler-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.3.2\u0026lt;/version\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;source\u0026gt;1.6\u0026lt;/source\u0026gt; \u0026lt;target\u0026gt;1.6\u0026lt;/target\u0026gt; \u0026lt;encoding\u0026gt;UTF-8\u0026lt;/encoding\u0026gt; \u0026lt;/configuration\u0026gt; \u0026lt;/plugin\u0026gt; \u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;org.apache.maven.plugins\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;maven-resources-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.4.3\u0026lt;/version\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;encoding\u0026gt;UTF-8\u0026lt;/encoding\u0026gt; \u0026lt;/configuration\u0026gt; \u0026lt;/plugin\u0026gt; \u0026lt;plugin\u0026gt; \u0026lt;artifactId\u0026gt;maven-war-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.1-alpha-1\u0026lt;/version\u0026gt; \u0026lt;executions\u0026gt; \u0026lt;execution\u0026gt; \u0026lt;id\u0026gt;default-war\u0026lt;/id\u0026gt; \u0026lt;phase\u0026gt;package\u0026lt;/phase\u0026gt; \u0026lt;goals\u0026gt; \u0026lt;goal\u0026gt;war\u0026lt;/goal\u0026gt; \u0026lt;/goals\u0026gt; \u0026lt;/execution\u0026gt; \u0026lt;/executions\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;archive\u0026gt; 
\u0026lt;manifestFile\u0026gt;${project.build.outputDirectory}/META-INF/MANIFEST.MF\u0026lt;/manifestFile\u0026gt; \u0026lt;/archive\u0026gt; \u0026lt;/configuration\u0026gt; \u0026lt;/plugin\u0026gt; \u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;org.apache.felix\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;maven-bundle-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;2.2.0\u0026lt;/version\u0026gt; \u0026lt;extensions\u0026gt;true\u0026lt;/extensions\u0026gt; \u0026lt;executions\u0026gt; \u0026lt;execution\u0026gt; \u0026lt;id\u0026gt;bundle-manifest\u0026lt;/id\u0026gt; \u0026lt;phase\u0026gt;process-classes\u0026lt;/phase\u0026gt; \u0026lt;goals\u0026gt; \u0026lt;goal\u0026gt;manifest\u0026lt;/goal\u0026gt; \u0026lt;/goals\u0026gt; \u0026lt;/execution\u0026gt; \u0026lt;/executions\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;supportedProjectTypes\u0026gt; \u0026lt;supportedProjectType\u0026gt;jar\u0026lt;/supportedProjectType\u0026gt; \u0026lt;supportedProjectType\u0026gt;bundle\u0026lt;/supportedProjectType\u0026gt; \u0026lt;supportedProjectType\u0026gt;war\u0026lt;/supportedProjectType\u0026gt; \u0026lt;/supportedProjectTypes\u0026gt; \u0026lt;instructions\u0026gt; \u0026lt;Bundle-SymbolicName\u0026gt;${project.artifactId}\u0026lt;/Bundle-SymbolicName\u0026gt; \u0026lt;Bundle-Activator\u0026gt;com.tk.bundle.MyActivator\u0026lt;/Bundle-Activator\u0026gt; \u0026lt;Bundle-ClassPath\u0026gt;.,WEB-INF/classes\u0026lt;/Bundle-ClassPath\u0026gt; \u0026lt;Bundle-RequiredExecutionEnvironment\u0026gt;J2SE-1.5\u0026lt;/Bundle-RequiredExecutionEnvironment\u0026gt; \u0026lt;Import-Package\u0026gt;org.osgi.framework;version=\u0026#34;1.3.0\u0026#34;\u0026lt;/Import-Package\u0026gt; \u0026lt;/instructions\u0026gt; \u0026lt;/configuration\u0026gt; \u0026lt;/plugin\u0026gt; \u0026lt;/plugins\u0026gt; \u0026lt;/build\u0026gt; Now execute maven to run goal install\nmvn clean install or right click on pom.xml and select Run As \u0026gt; Maven 
Install, it will generate the war file ‘esb_test1-0.0.1-SNAPSHOT.war’ in the target folder.\n Deploy WAR Console-based deployment\n Go to the FuseESB Console Install the WAR file to Apache ServiceMix by running the following command osgi:install war:\u0026lt;File URL\u0026gt; It will give a bundle id (say 221), Run the following command to start the installed bundle\n$ osgi:start \u0026lt;bundle id\u0026gt; $ osgi:install war:file://kuldeeps/test/Esb_Test.war Bundle ID : 221 $ osgi:start 221 MyActivator.start() Manual deployment (alternative to console-based deployment) Just drop the war file in the FUSE_ESB_HOME/deploy directory. It will invoke the Bundle Activator’s start method. See the console log\n After you see the above message, try to access the following URL in a browser\nhttp://localhost:8181/ESB_Test/\nIt will open the web application deployed here.\n You may also try the osgi:update, osgi:uninstall, and osgi:stop commands.\nThis completes a basic introduction to ESB. In the next article, we will cover how Apache Camel and CXF-based web services are deployed on an ESB.\n","id":79,"tag":"ESB ; java ; Apache Service Mix ; OSGi ; FuseESB ; XML ; Apache Camel ; enterprise ; integration ; technology","title":"Working with Enterprise Service Bus","type":"post","url":"https://thinkuldeep.com/post/esb-introduction-part1/"},{"content":"MailMerge is a process to create personalized letters and pre-addressed envelopes or mailing labels for mass mailings from a word processing document (template). A template contains placeholders (fields) along with the actual content. The placeholders are filled from a data source, which is typically a spreadsheet or a database having a column for each placeholder.\nThis article talks about a Java approach for generating multiple documents from a single MS Word template.\nA typical mail merge: Microsoft Word generates individual documents, emails or printouts as the result of the mail merge.
It does the following:\n Processes the template for each record available in the data source and generates content for each record. Generates individual documents, or emails/prints the generated content. Java Solution using MS Word Mail Merge We can use a 3rd-party library, “Aspose.Words for Java”, which is an advanced class library for Java that enables you to perform a great range of document processing tasks directly within your Java applications. We can generate multiple documents from a single template in the same way as MS Word does; the output of this process is documents, one per record in the data source, not email or print. We can generate output documents in various formats (doc, docx, mhtml, msg etc.), but for sending email or sending these documents to print, we need to develop a custom solution.\nSolution Diagram: In the demo example, we want to generate some invitation letters for the given template.\nWe have the following data and keys in the template\n ContactName ContactTitle Address1 CourseName Kuldeep Singh Professional Address1 Test User 2 Technical operator Test Ready User 3 Expert Regular Normal Here is the Java file which generates one document for each record (MailMerge.java):\npackage com.aspose.words.demos; import java.io.File; import java.net.URI; import java.text.MessageFormat; import com.aspose.words.Document; /** * Demo Class.
*/ public class MailMerge { private static String [] fields = {\u0026#34;ContactName\u0026#34;, \u0026#34;ContactTitle\u0026#34;, \u0026#34;Address1\u0026#34;, \u0026#34;CourseName\u0026#34;}; private static Object[][] valueRows = {{\u0026#34;Kuldeep Singh\u0026#34;, \u0026#34;Professional\u0026#34;, \u0026#34;Address1\u0026#34;, \u0026#34;Test\u0026#34;}, {\u0026#34;User 2\u0026#34;, \u0026#34;Technical operator\u0026#34;, \u0026#34;Test\u0026#34;, \u0026#34;Read\u0026#34;}, {\u0026#34;User 3\u0026#34;, \u0026#34;Expert\u0026#34;, \u0026#34;Regular\u0026#34;, \u0026#34;Normal\u0026#34;}}; public static void main(String[] args) throws Exception { //Sample infrastructure. URI exeDir = MailMerge.class.getResource(\u0026#34;\u0026#34;).toURI(); String dataDir = new File(exeDir.resolve(\u0026#34;data\u0026#34;)) + File.separator; produceMultipleDocuments(dataDir, \u0026#34;CourseInvitationDemo.doc\u0026#34;); } public static void produceMultipleDocuments(String dataDir, String srcDoc) throws Exception { // Open the template document. Document doc = new Document(dataDir + srcDoc); // Loop though all records in the data source. for (int i = 0; i \u0026lt; valueRows.length; i++) { // Clone the template instead of loading it from disk (for speed). Document dstDoc = (Document)doc.deepClone(true); // Execute mail merge. dstDoc.getMailMerge().execute(fields, valueRows[i]); dstDoc.save(MessageFormat.format(dataDir + \u0026#34;TestFile Out {0}.doc\u0026#34;, i+1)); System.out.println(\u0026#34;Done \u0026#34; + i); } } } It will generate\nTestFile Out1.doc TestFile Out2.doc TestFile Out3.doc\nand so on. “Aspose.Words” can generate these output documents in various formats and streams. We can develop logic to direct the data streams to email or print.\nAspose.Words Aspose.Words for Java is an advanced class library for Java that enables you to perform a great range of document processing tasks directly within your Java applications. 
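The core merge loop above (substitute each field, one pass per record) can also be sketched with plain JDK string substitution when no document library is available. A minimal sketch; the {{Field}} placeholder syntax and the class name are illustrative stand-ins for Word merge fields, not part of Aspose.Words:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SimpleMailMerge {

    // Replace each {{FieldName}} placeholder with the value from the record.
    public static String merge(String template, Map<String, String> record) {
        String out = template;
        for (Map.Entry<String, String> e : record.entrySet()) {
            out = out.replace("{{" + e.getKey() + "}}", e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        String template = "Dear {{ContactName}} ({{ContactTitle}}), welcome to the {{CourseName}} course.";
        // One record from the demo data above.
        Map<String, String> record = new LinkedHashMap<>();
        record.put("ContactName", "Kuldeep Singh");
        record.put("ContactTitle", "Professional");
        record.put("CourseName", "Test");
        System.out.println(merge(template, record));
        // prints: Dear Kuldeep Singh (Professional), welcome to the Test course.
    }
}
```

This only handles plain text, of course; preserving Word formatting, images, and layout is exactly what a library like Aspose.Words adds.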
Aspose.Words for Java supports DOC, OOXML, RTF, HTML and OpenDocument formats. With Aspose.Words you can generate, modify, and convert documents without using Microsoft Word.\nReferences http://www.aspose.com/categories/java-components/aspose.words-for-java/default.aspx http://www.javaworld.com/javaworld/jw-10-2000/jw-1020-print.html http://java.sun.com/developer/onlineTraining/JavaMail/contents.html ","id":80,"tag":"microsoft ; java ; mail ; integration ; Aspose ; world ; technology","title":"Mail Merge in Java","type":"post","url":"https://thinkuldeep.com/post/java-mail-merge/"},{"content":"Last year, we migrated one of our applications from Java 5 to Java 6. The Java 6 (OpenJDK) installation was done by the client on a fresh system. After installing our application (under a JBoss app server), we found that the fonts on JFreeChart charts and on the images generated from AWT were not correct.\nWe solved the issue by configuring the fonts at the JVM level. Let’s understand it better.\nHow AWT / JFreeChart renders fonts AWT relies on native fonts; it uses a default Java font (whichever best fits) if no native font mapping is found. JFreeChart also uses the AWT library to generate chart images.\nJVM Font Configuration The JVM has a font configuration where mappings to native fonts can be defined.\nThe font configuration exists in [JRE Installation]/lib/fontconfig.properties.src, or look for the similar file fontconfig.properties\nIn our application, we noticed an error in the logs saying that a font mapping was defined in fontconfig.properties.src but the font files did not exist at the mentioned paths.\nWe then installed the missing font library and the font issue was fixed.\n","id":81,"tag":"awt ; java ; jvm ; jfreechart ; fonts ; customizations ; technology","title":"Font configuration in JVM","type":"post","url":"https://thinkuldeep.com/post/java-font-mapping/"}]