
C#.NET Developer 3
Category: Jobs

https://careers-quadax.icims.com/jobs/1961/c%23.net-software- ...


Views: 0 Likes: 39
Full-time JavaScript Developer wanted!
Category: Jobs

An award-winning accounting firm local to Cleveland/Akron, OH is looking for a mid-level (3-6 years ...


Views: 38 Likes: 89
7 Profitable AI Business Ideas for Startups (2023)

This post may contain paid links to my personal recommendations that help to support the site!

Today, Artificial Intelligence (AI) is transforming business operations by increasing operational efficiency and unlocking previously untapped opportunities. Whether you are a small startup or a large enterprise, AI offers an array of potential business ideas and solutions. In this blog post, I'll discuss some of the most lucrative AI business ideas and how incorporating them into your startup or business strategy could give you a competitive advantage. Read on to learn more!

What Are Some AI Business Ideas for Startups?

1. Fraud Detection Firm

AI-driven fraud detection is one of the most lucrative AI business startup ideas today. This technology uses machine learning and predictive analytics to detect fraudulent activities in real time. By leveraging AI, businesses can identify and stop fraudulent activities before they cause significant losses. AI-driven fraud detection solutions enable companies to detect and verify suspicious activities quickly, accurately, and automatically. Not only can this help companies save time and money, but it also provides them with the peace of mind that their operations are secure.

To launch an AI-driven fraud detection firm, you need experienced professionals who understand both the technical aspects of AI and the specific needs of your customers. You must also develop a comprehensive fraud detection solution that meets the customer's needs and can detect and prevent fraudulent activities in real time.

2. AI Healthcare Startup

AI healthcare startups have become increasingly popular as they offer many benefits. AI-enabled healthcare solutions can help reduce operational costs, enhance patient care, and improve overall health outcomes through predictive analytics and data insights. These businesses are transforming how we approach medical diagnosis, enabling doctors to make more informed decisions quickly. Healthcare AI startups can also leverage machine learning to improve pharmaceutical discovery and help automate administrative processes. With its potential to revolutionize the healthcare system, entrepreneurs can use artificial intelligence to create effective solutions to:

- Meet patient needs
- Reduce operational costs
- Provide health insights

3. AI Logistics and Supply Startup

Artificial Intelligence (AI) can benefit logistics and supply chain management too! Having an AI system to manage logistics can help reduce costs, optimize inventory management, automate processes, and improve customer service. For example, AI systems can automatically identify patterns in data and make informed decisions based on those results. Additionally, AI could be used to forecast supply and demand, helping companies prepare for unexpected events.

AI-powered solutions also enable startups to track shipments and optimize delivery routes efficiently. This adds visibility into the entire process and lowers costs associated with shipping errors or late arrival of goods. Furthermore, AI can identify potential problems before they arise, resulting in smoother operations and improved customer experience.

4. AI-Personal Shopper Business

AI-enabled personal shoppers are becoming increasingly popular as they allow businesses to provide personalized shopping experiences for their customers. AI-powered virtual agents can interact with customers via chatbots, voice bots, or other interfaces to identify their needs and recommend products that best meet them.
By leveraging AI's powerful data analytics capabilities, personal shoppers can collect data to customize product recommendations, detect trends, and offer discounts based on customer preferences. Additionally, AI-powered personal shoppers can help streamline customer service by providing personalized support and advice in real time. Companies that invest in this technology can expect to gain a competitive edge as they differentiate their offering with the latest trends and products that meet customer demands. This business idea offers a unique opportunity to improve customer experiences and loyalty while improving operational efficiency using customer data.

You can even build a startup that provides AI chatbots for online shops. In this case, you'll be working with B2B partnerships and selling your AI bot as a product.

5. AI Marketing Startup

AI marketing startups have the potential to revolutionize how companies interact with customers and generate leads. AI technologies such as natural language processing (NLP), computer vision, machine learning, and chatbots can automate mundane marketing tasks and enable more personalized interactions. AI startups utilize these tools to improve customer segmentation, develop targeted campaigns, optimize website conversion rates, and generate new leads. For example, NLP algorithms can analyze customer data to predict sentiment or future behavior based on past customer interactions. AI-based chatbots offer a more personalized approach to customer service, quickly identifying and resolving highly specific queries.

6. Personalized Education

AI technology can be used to personalize education and offer tailored content, recommendations, and feedback based on a student's needs. This allows teachers to provide instruction tailored to each student's capability level, interests, and goals. AI can also automate grading and provide insights into students' academic performance. AI-based assessment tools can detect patterns in a student's work and offer personalized learning plans based on those patterns. This helps teachers identify areas where students may need additional help or recommend more challenging material when they are ready for it.

AI technology also provides the ability to quickly analyze large amounts of data, allowing teachers to identify skills gaps and weaknesses in their students. This enables them to provide more targeted instruction tailored to each student's needs. By using AI technology in education, businesses can create an individualized learning experience that encourages each student to reach their full potential.

7. AI-Content Generator

AI-based content generators are becoming increasingly popular in the business world. And that's because they enable businesses to create personalized and high-quality content quickly and efficiently. The technology leverages natural language processing (NLP) algorithms to generate content tailored to a customer's needs with minimal effort from the business side. This can save companies time and money in the long run, as they can produce content faster and at a lower cost than traditional methods. AI-based content generators have a variety of applications, from generating reports to creating personalized emails. Companies are also using the technology to optimize their website content, which helps them improve search engine rankings and increase their online visibility.

Why Should You Launch Your AI Business Idea?

AI is revolutionizing how businesses operate and creating new opportunities in the market.
Incorporating AI-driven business ideas into your strategy gives you a competitive advantage. It can lead to improved operational efficiency, greater insights into customer needs, increased revenue, and improved customer experience. These benefits of AI create an impetus for businesses of all sizes to launch an AI business idea. By leveraging the power of AI, you can effectively manage data and other business processes and automate operations with machine learning algorithms. Additionally, AI-driven solutions such as natural language processing and computer vision can provide valuable insights into customer preferences while improving customer experience. Launching an AI business idea is the perfect way to capitalize on AI opportunities and stay ahead of your competition.

Related Questions

How are AI companies making money?

AI companies are making money through various sources, such as developing and selling AI-based products. Some others provide AI-as-a-service or offer solutions that help companies leverage AI technology to improve their operations. Additionally, some AI companies monetize the data they collect from customers for targeted advertising.

How do I start an artificial intelligence business?

Starting an AI business requires a solid understanding of the technology and insight into applying it in different industries and scenarios. Additionally, mastering basic data science and programming skills is key. Many startups are leveraging the power of pre-existing AI systems, such as Google's TensorFlow, to develop their applications. Since many such tools are open source, building on them is a good way to start a future-proof business.

Is an AI startup idea good for making money?

An AI startup idea can be a great way to make money, particularly if it provides a tangible solution to existing problems or creates new opportunities. With the right strategy and execution, AI startup ideas can generate significant revenue growth by leveraging their data-driven insights into customer needs and behaviors.

How will AI affect business in the future?

In the future, AI will continue to revolutionize business processes by offering more efficient, cost-effective solutions that can help companies remain competitive. Furthermore, AI startups and businesses will have the opportunity to develop innovative products and services that could profoundly change the way we live and work. Incorporating artificial intelligence into business operations could increase productivity, improve decision-making, and boost customer engagement.

How can businesses prepare for AI?

Businesses can prepare for AI by developing a comprehensive strategy that incorporates technological advances and trends into their AI business ideas. Additionally, businesses should ensure they have the right team to drive their AI efforts forward. Investing in training and upskilling employees on topics related to artificial intelligence is essential if companies are to stay ahead of the curve. Lastly, understanding the ethical considerations behind AI is necessary for organizations to ensure their AI applications comply with regulations.

Final Thoughts

AI businesses require technical skills and are usually complex to set up. But, considering the current state of the market, there is massive potential for those who decide to take a risk and launch an innovative AI-driven business idea. From fraud detection to personalized education, entrepreneurs have many options for creating a successful AI-driven enterprise.
Whether you focus on healthcare, logistics and supply chains, personal shopping, or marketing, each area presents unique opportunities. Investing time and effort in building an artificial intelligence platform can create sustainable value for your customers and team.

The post 7 Profitable AI Business Ideas for Startups (2023) appeared first on Any Instructor.


ASP.NET Core authentication using Microsoft Entra External ID for customers (CIAM)

This article looks at implementing an ASP.NET Core application which authenticates using Microsoft Entra External ID for customers (CIAM). The ASP.NET Core authentication is implemented using the Microsoft.Identity.Web Nuget package. The client implements the OpenID Connect code flow with PKCE as a confidential client.

Code: https://github.com/damienbod/EntraExternalIdCiam

Microsoft Entra External ID for customers (CIAM) is a new Microsoft product for customer (B2C) identity solutions. It has many changes compared to the existing Azure AD B2C solution and adopts many of the features from Azure AD. At present, the product is in public preview.

App registration setup

As with any Azure AD, Azure AD B2C, or Azure AD CIAM application, an Azure App registration is created and used to define the authentication client. The ASP.NET Core application is a confidential client and must use a secret or a certificate to authenticate the application as well as the user. The client authenticates using an OpenID Connect (OIDC) confidential code flow with PKCE. The implicit flow does not need to be activated.

User flow setup

In Microsoft Entra External ID for customers (CIAM), the application must be connected to the user flow. In external identities, a new user flow can be created and the application (the Azure App registration) can be added to the user flow. The user flow can be used to define the specific customer authentication requirements.

ASP.NET Core application

The ASP.NET Core application is implemented using the Microsoft.Identity.Web Nuget package. The recommended flow for trusted applications is the OpenID Connect confidential code flow with PKCE. This is set up using the AddMicrosoftIdentityWebApp method together with the EnableTokenAcquisitionToCallDownstreamApi method. The CIAM client configuration is read from the EntraExternalID JSON section.

services.AddDistributedMemoryCache();

services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("EntraExternalID"))
    .EnableTokenAcquisitionToCallDownstreamApi()
    .AddDistributedTokenCaches();

In the appsettings.json, user secrets, or the production setup, the client-specific configurations are defined. The settings must match the Azure App registration. The SignUpSignInPolicyId is no longer used compared to Azure AD B2C.

// -- using ciamlogin.com --
"EntraExternalID": {
  "Authority": "https://damienbodciam.ciamlogin.com",
  "ClientId": "0990af2f-c338-484d-b23d-dfef6c65f522",
  "CallbackPath": "/signin-oidc",
  "SignedOutCallbackPath": "/signout-callback-oidc"
  // "ClientSecret": "--in-user-secrets--"
},

Notes

I always try to implement user flows for B2C solutions and avoid custom setups, as these setups are hard to maintain, expensive to keep updated, and hard to migrate when the product reaches end of life. Setting up a CIAM client in ASP.NET Core works without problems. CIAM offers many more features but is still missing some essential ones. This product is starting to look really good and will be a great improvement on Azure AD B2C when it is feature complete. Strong authentication is missing from Microsoft Entra External ID for customers (CIAM), and this makes it hard to test using my Azure AD users. Hopefully FIDO2 and passkeys will be supported soon.
See the following link for the supported authentication methods: https://learn.microsoft.com/en-us/azure/active-directory/external-identities/customers/concept-supported-features-customers

I also require a standard OpenID Connect identity provider (code flow confidential client with PKCE support) in most of my customer solution rollouts. This is not supported at present. With CIAM, new possibilities also open up for creating single solutions that support both B2B and B2C use cases. Support for Azure security groups and Azure roles in Microsoft Entra External ID for customers (CIAM) is one of the features which makes this possible.

Links

https://learn.microsoft.com/en-us/azure/active-directory/external-identities/
https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-external-id
https://www.cloudpartner.fi/?p=14685
https://developer.microsoft.com/en-us/identity/customers
https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-external-id-public-preview-developer-centric/ba-p/3823766
https://github.com/AzureAD/microsoft-identity-web


Describing UX Design, or the User Experience of Beer Labels

When I made the jump from my traditional graphic design role to a new position in UX/UI, I often found myself trying to describe the difference between the two. My friends and family would look at me with furrowed brows as I attempted to explain what the new role would entail. I realized in these strained conversations that the term “user experience” was apparently not as prevalent as I had come to believe. And graphic design seemed to be understood exclusively from the final product; the pretty picture, if you will. I found myself saying things like, “more problem solving and functionality; less art,” to the unfamiliar audience. But was that true? Was what I was doing in graphic design all that different?

At Simple Thread, we often describe the UX process as a series of five phases – Research, Define, Prototype, Implement, Operate. It seems to me that these steps are critical to the design of just about anything, and I’d like to explore them through the lens of one of my favorite graphic design projects: the craft beer label.

Imagine we’re tasked with designing a beer label. That’s simple enough, right? We just need something that identifies the contents. Make it pretty. We need it by tomorrow. Is that enough to finalize a design? Maybe. But certainly not a good one. Let’s explore the steps of the design process.

Step One: Research

For the design to be effective, we need to start by understanding the product, the brewery, the distribution plan, the audience, and the competition. We have a lot of questions to ask: What is this beer? Are there any unique identifiers to this particular brew, like different hops or unusual ingredients? How does it differ from other beers at this brewery? How will it be distributed? Who is the audience? Answering these questions is critical to delivering a successful design. Just like in UX, we need to fully understand the problem before we can begin to solve it.

The research phase is also multifaceted. We need to understand the product itself and the brewery that’s producing it, but we also need to consider the goals of the visual style. Looking for inspiration, sometimes on the supermarket shelf, will help us form a vision for things like color palette, typography, and illustration style.

Step Two: Define

Once the product is identified, more definition around the label comes into play. There are technical considerations to take into account, like the size of the vessel (often 12 or 16 ounces), the availability and timeline of the materials, and the intended release date. As we begin to think about designing the label, we need to know if it will be applied like a sticker to the can, shrink wrapped for full coverage, or printed on the metal itself. More involved production techniques are often only cost effective at high quantities, and understanding the process is critical to setting up our file correctly for the printer.

There are also legal considerations for packaging; rules that are more stringent for alcohol sales than for most other products. Part of defining the design is identifying what components are essential to include. In the beer label world, those elements include the legal warning, the address of the brewing and canning facility, the size of the can, the ABV, and the name and style of beer. With, might I add, some awfully specific rules like the text size, placement, and even character count per square inch for some of the more legal components.

We also need to define our goals for the visual language.
Some breweries have visual systems that help buyers quickly understand what they’re getting. The blue one is a pilsner, the white is an IPA, etc. We need to decide if this label will fit within a pre-existing system, create a new system, or be a unique one-hit wonder.

Step Three: Prototype

Now it’s time to put our elements together and start creating the bones of the label. A low fidelity mockup for a beer label is a rough placement of the brewery logo, beer name, legal requirements, and illustration footprint to gain an understanding of how the elements will interact with each other. This wireframe, or sketch, gets wrapped around old cans and analyzed for things like type hierarchy and visual appeal. At this stage, we’re determining what elements should be most pronounced and why. If we’re designing for an established brewery with a solid reputation, there’s a good chance that a potential buyer might select this can based on the logo alone. If the beer itself is seasonal or includes unique ingredients, the name or style of beer might be the most compelling component.

The prototyping phase includes early wireframes and sketches which will later transition to more completed designs. This can be a tricky step because the design needs to be complete enough to communicate the idea, but not so complete as to sink precious billable hours into detailed design work that may or may not make the cut.

Step Four: Implement

After the trial and error of quick prototyping, it’s time to design the final label. This phase is what most people think of when they envision designing something – it’s the part when final colors, type, and layout come together to create something that didn’t exist before. It’s when the magic happens. But really, it’s the last step. The implementation of the design requires knowledge from the research, define, and prototype phases to create an effective final product.

Step Five: Operate

The last phase of the UX process is to set the product free, let it operate, and measure the results. For our label, success may be measured by sales numbers in the taproom and in distribution, or by whether it was completed in time for canning day.

Conclusion

In UX, these five steps are a very iterative process. Results are measured and changes are implemented as new needs develop. The steps are not always adhered to in a perfectly linear fashion. On the flip side, once the beer label exists in the world, there it is. We can iterate on the next round of canning and make different design choices the second time around, but we don’t have the ability to update this version in real time.

The design of a craft beer label sounds simple, but the design process is closely aligned with that of the UX/UI workflow. I am still learning the myriad ways they differ – the unique programs and processes, the more in-depth research, the iterative nature, and the focus on the user’s needs. But as far as I can tell, all good design should follow these steps. Form should always follow function, and it’s never been just about pretty pictures.

The post Describing UX Design, or the User Experience of Beer Labels appeared first on Simple Thread.


Good Problem Solving Tip
Category: Software Development



Views: 317 Likes: 108
Bold text printing unbold
Category: Hardware

If you're experiencing the issue where bold text is printing unbold, you're not alone. This i ...


Views: 0 Likes: 30
LLMs in a Vacuum Are Useless

“What hath God wrought?”

That was the first message ever delivered via telegraph. The four-word phrase was sent by inventor Samuel Morse on May 24th, 1844 at 8:45 am, and traveled from Washington, D.C., to Baltimore, Maryland in the blink of an eye—a journey which would have previously taken 4-8 hours on horseback, even in the most ideal of conditions. Samuel Morse, whose name you may recognize immortalized in Morse code, was aware of the gravity of this event. He was a smart man. He knew that telecommunications were going to change the world in some way or another. Hence the melodramatic message.

And it seems that we’re on top of yet another moment in time. Some very smart people have come out and said that this is the biggest invention since the internet. I’m referring to ChatGPT, of course, which needs no introduction.

Winter Is Coming?

So, we’re coming out of the peak heat of another, sizzling-hot AI summer, and the world might never be the same. “The world might never be the same…” I know how that sounds. Maybe a little bit over the top? Let me explain.

If you were to ask a person disillusioned by the new advances in artificial intelligence, they might tell you that LLMs are a fad, a passing trend, and that it’s just a matter of time before they go the way of Bitcoins and Ethereums and Google+ and such. And if you ask an AI evangelist, or doomsayer, they might tell you that your job’s in danger, or that your company should restructure, or that the foundation of the education system is at risk, and we’d all better adapt or become obsolete.

Wherever you land on that spectrum, let’s put that aside for now and assume that the community as a whole has run up on, and perhaps even surpassed, an inflection point in the hype cycle—the point sometimes called the “Peak of Inflated Expectations”. This is where expectations which, in the heat of the moment, grew to unreachable, unrealistic heights, and where the hype subsequently begins to dissipate. Of course, we can argue about whether or not we’re actually at this point. There are people smarter than me out there actually studying such trends very closely, but for the sake of this blog post, let’s carry on with the hypothetical that we have passed the Peak of Inflated Expectations.

The thing about the hype cycle, though, is that it’s not all just hot, inflated air. It’s inflated, yes, but there is usually some smaller, consistent flame burning beneath, carrying the metaphorical hot air balloon of LLMs, with its Basket of Usefulness, up and down through the Sky of Uncertainty. All that to say—LLMs are actually useful. And as the summer of AI comes to an end, two things are clear: LLMs are here to stay (in some capacity), and GPT-4 is the heavyweight champion.

At this point, you’ve probably heard of GPT-4. It’s widely available, it’s usable and accessible through ChatGPT or the GPT-4 API endpoints, and it’s mostly affordable. Even after the inflated hype, we’re seeing ChatGPT used for many tasks, from tutoring, to learning, to paired programming, to accelerating administrative tasks, and so on.

I joke about ChatGPT not needing an introduction, but—even though ChatGPT broke the shortest-time-to-100-million-users record, and even though openai.com gets over 1 billion visits per month, it’s still true that most people don’t use ChatGPT!

I introduced my parents to ChatGPT today. They never heard about it, had trouble signing up, and were completely mindblown that such a thing exists or how it works or how to use it.
Fun reminder that I live in a bubble.

— Andrej Karpathy (@karpathy) July 23, 2023

I talk about ChatGPT with my coworkers, but have you talked about ChatGPT with your close friends, family? Your neighbors, your Amazon driver? Oh, what, you don’t discuss software with your friends? Well, it seems that whenever I mention it to people outside of work, they haven’t even tried it! I take this as a side effect of the Peak of Inflated Expectations. No matter how excited the tech community gets about a new invention, the rest of the world takes much longer to adapt. Yes, even in 2023.

So what would it take for ChatGPT to really break into the public consciousness? Maybe, let’s say, how long will it be until it reaches the level of ubiquity of something like the Google search engine? (I know. Comparing the ubiquity of an AI chatbot to a search engine is not a perfect apples-to-apples comparison, but we don’t have much else to go on.)

ChatGPT, anecdotally, is already creeping into some of Google’s search engine territory. People are querying the chatbot with questions they would have asked the search bar just a year ago, at least for certain types of searches. But if ChatGPT can become as commonly used as Google search, it will not just be because it’s used in conjunction with search engines; it will mostly be because ChatGPT is being used in novel ways. It’s being used in areas that were previously untouchable by the cold, metallic hands of artificial intelligence. These are areas like teaching, tutoring, math assistance, cheating on homework(?), brainstorming, code generation, as a writing partner, secretary, completing administrative tasks, etc.

So where are we now? As the hype comes to an end—and as the dust still settles—where have we landed? It looks to me like people aren’t using Large Language Models in the very epic, extraordinary ways that were ideated at the peak of the hype cycle. No, they’re using them in much more reserved, basic ways—the brainstorming, the code completion, or the replacement of a Google search here and there. So how much further can LLMs go?

In the remainder of this post, I’ll delve into the true value of Large Language Models (LLMs) and attempt to back up the idea that the usefulness and ubiquity of LLMs will ultimately depend on the capabilities of their supporting software.

The Robots Are Going to Take Our Jobs

So is ChatGPT going to take your job? Probably not. But someone using ChatGPT might take your job. I’ve heard Marc Andreessen talk around this sentiment, and I recently heard Damien Riehl say it on the Practical AI podcast as well. Here’s the quote, referring specifically to lawyers.

I’d say to lawyers that are worried about AI, that AI will not take a lawyer’s job, but a lawyer that uses AI will take the job of a lawyer that does not use AI.

(Quote by Damien Riehl, Practical AI, Episode 232, somewhere around the 38:27 mark)

So, if you’re a programmer, you should probably be using AI tools like GitHub Copilot, or starting to learn how to incorporate them into your workflow. That’s my recommendation. Good programmers are going to be a lot more productive because of tools like GitHub Copilot and Sourcegraph’s Cody. They’re really good at this stuff already, and they’re just going to get better. But even without some of the code-specific tools, programmers are also getting more productive by having ChatGPT as a paired programmer. Figuring out a path around or through a roadblock can sometimes be tough, and could potentially take hours, or, so help us, days.
We’ve all been there. I’ve found ChatGPT extremely helpful in these situations.

Now, the counterpoint to all of this is that you don’t have to adapt. There are still mainframes, and COBOL programmers, and those who eat machine code all day long. And that’s fine too! There are different paths that people can take, and still make money, and have a very fulfilling career—and at no step are you required to use artificial intelligence.

Should My Business Be Using ChatGPT?

Large Language Models are excellent at certain tasks. If your business leans heavily on chat-based systems, if you do a lot of customer service, receive a lot of emails, service requests, or anything which involves a lot of short, unstructured or semi-structured text, then GPT-4 might just revolutionize your business and you should absolutely begin figuring out how to incorporate artificial intelligence. There are many solid use cases for LLMs.

But LLMs are not good for everything. Let me restate—you might not need ChatGPT. LLMs might make your marketing department 10x more productive, and they might make your web developers 10x, 20x, or 100x more productive, but you do not need to put together your own custom web interface that interacts with the GPT-4 API, or to build a vector database with all of your company data, or to buy an on-premise machine learning cluster to power your business. Yes, there are some instances where it might make sense for a business to build these solutions, but for most people these complex solutions are not going to be worth it. If you’re a home renovation business, you might just have ChatGPT help you draft some responses to customer reviews, or navigate the county or city permitting systems. It’ll be helpful, but if you mostly install windows and build decks, your life probably isn’t going to be flipped upside down.

And let’s not forget that machine learning and artificial intelligence are bigger fields than just language models. Machine learning algorithms have revolutionized fault analysis, fraud detection, and protein modeling, just to name a few. Pick the right tool for the job—it’s not always going to be a large language model.

Other times, when you find yourself hankering for ChatGPT, you might just need a Python script. Need to reformat a 100,000-line CSV? Use a Python script! Due to context window restrictions, ChatGPT can’t be used to parse your 100,000-line CSV—does that mean you need to build a complex system to break down the 100,000 lines into digestible chunks for an LLM to decipher, then rebuild the CSV? No! Just use a Python script! Now, do you find yourself writing a short Python script? ChatGPT can definitely help you with that.

We might see game-changing productivity boosts. We might already be seeing such productivity changes in programming. We will continue to see improvement among some administrative tasks, and we might see some less important decision-making in some industries be changed forever by large language models. But most of the other stuff? Well, that’s not going to change very much.

What Makes an LLM Useful?

Thus far I’ve talked about large language models, specifically GPT-4, which we interact with through ChatGPT or GitHub Copilot. I’ve talked about how popular these tools are and how much they might affect your work and your life at large. Now I want to focus on what actually matters, and that’s everything outside of the large language model. LLMs in a vacuum are useless.
You see, if there was a model which was 100x more powerful than GPT-4, but you had to interact with it using Morse code via a telegraph, it wouldn’t be very useful, now would it? (Just imagine what Samuel Morse might’ve written to GPT-4 via Morse code if that had happened in 2023. “New phone, who dis?”)

(By Mathew Benjamin Brady – Christies, Public Domain)

ChatGPT’s success has been based not only on its secret sauce (GPT-4), but on its novelty. It was the first readily available and good chatbot, and it’s only $20 per month. That’s an amazing value. But as we go forward, it becomes clearer and clearer that there just aren’t that many instances where interacting with ChatGPT via a chatbot-style interface is that useful. I won’t continue to hammer on the real use cases for LLMs. What I want to focus on now is how GPT-4 might continue to grow, in a steadier, more functional fashion. On the hype cycle graph, this is what we might call the “Slope of Enlightenment”.

GitHub Copilot is useful not just because of the LLM that backs it, but because I don’t have to leave my code editor to use it. And because they figured out how to recommend the perfect amount of code without losing the context. And because it’s so conveniently easy to insert the suggested code at just the press of a [tab] button.

We are still discovering how useful LLMs are. We’re only seeing the nascent fruits of these early applications. Who knows how many hundreds of venture-backed startups are building products, and searching for these other niches—the note-taking applications, the train-AI-on-your-data apps—of which only a few will succeed. But it will be these more specific, more integrated use cases which drive LLMs to the level of ubiquity of something like a Google search.

Another way to think about what I mean by “LLMs in a vacuum are useless” is that there are two distinct problems—the LLMs themselves, and how we can use them. You have OpenAI, Anthropic, Meta, and a few others, along with the open source world trailing behind just a bit, and they’re all working on making the best language models possible. But then you have the product people and the consumers who are taking that model and figuring out how to use it. Slapping a ChatGPT window on top of your product probably isn’t going to be very useful. But if your users have a specific need and you have an elegant way to incorporate the interface, then you might be onto something.

ChatGPT as a tutor, as a tool to help students with homework, as a writing partner, and as a paired programmer—these roles will probably continue to exist. But we’ve seen some months of ChatGPT usage drops. I’d posit that those users are not completely disappearing, though. They’re still using GPT-4—just via the API, or more specifically, through products which have integrated GPT-4.

The Difficulties Faced by LLM Products

After playing around with many of the open source LLMs, I must say that it would be difficult for me to take the decrease in quality after using GPT-4. Running privately and locally is certainly a benefit for some applications and/or some companies, but getting the open source models to work at the same level as GPT-4 is difficult. And it’s not just about the language model. Again, LLMs don’t exist in a vacuum! I’m not trying to minimize the work that these open source folks are doing. The democratization of LLMs is very important. But the point I want to emphasize is that it’s not just about getting a model closer and closer to GPT-4’s accuracy.
These local, open source systems can be slow, can be difficult to set up and deploy, and they tend to require specific hardware, which is expensive. That being said, these challenges might be conquerable for some larger companies, and depending on the size of the company or product, it might save you a lot of money to develop a custom LLM pipeline which uses an open source model. But for most of the population, if you want to save money by using an open source LLM, you might end up paying much more in an even more valuable resource: time.

Of course, you could always use AWS SageMaker or Google’s Vertex AI to run the open source LLM of your choice, but then it’s not so local and not so private anymore. You could rent cloud GPUs to fine-tune the model on your data, but that can be difficult and expensive as well. At the end of the day, many larger companies might just pay one of the old heads, like IBM, who just recently released their generative AI product, watsonx. Even then, if you do choose to pay for one of the hosted and/or proprietary solutions, there are still plenty of challenges that you’re going to face, data integrations you’re going to have to spend developer time on, and so on.

We’re still at the tip of the iceberg when it comes to solving all of the problems, technical and otherwise, that surround large language models, but if you’re interested in the difficulties that you’ll face actually using an LLM in production, check out this article from Honeycomb. It’s a really good read.

Design Is King

A sentiment that has been going around is: “Wait a second, that’s not an AI startup! That’s just a UI on top of the GPT-4 API…” To that, I say, good! GitHub Copilot could be overly simplified into being called a UI on top of GPT-4. Just as easily, an iPhone could be called a UI on top of a processor. These language models need to find the right use cases to unlock their potential. They need great design and skilled application of those designs.

Great inventions, like electricity, the internet, or large language models, often get built out more quickly than the rest of us can keep up with. Liken it to the Field of Dreams motif, “Build it and they will come”. The products will come. I guess the baseball players (or ghosts, or whatever happens in that movie) are designers in this metaphor.

I pause here just to emphasize the value that a design practice provides to the world of software and digital products. This idea of taking a more holistic approach to product design is by no means a bespoke idea, at least not in the last 10-15 years of software. It’s all about figuring out what the product is, what it does or should do, doing product and user research, putting together the right pieces, and finding the right fit for the product. But when a technology gets as hyped up as ChatGPT and LLMs, sometimes it becomes difficult to see straight, and we start throwing it at everything. Maybe it’s a fear of falling behind the curve. Or maybe it’s the thrill of potentially being ahead of the curve, and, you know, gaining a competitive edge. It can be dizzying when it feels like all that people are talking or writing about is ChatGPT. So, I guess, sorry to contribute to that. But hopefully you took away something worthwhile about the post-ChatGPT world we live in today.

Thanks for Reading!

If you want custom software, Simple Thread can help you. If you want help integrating ChatGPT or other LLM technology into your business, let’s talk.
We’ve been making products for a long time, and we’re really good at all of the stuff around the model. Agree or disagree, love or despise what you read? Leave a comment!

The post LLMs in a Vacuum Are Useless appeared first on Simple Thread.


What is Computer Programming
Category: Computer Programming



Views: 0 Likes: 17
Software Development Refactoring Wisdom I gained t ...
Category: Software Development



Views: 175 Likes: 84
Replacing PDFKit with Grover for Rails PDF Generation

Background

I have been working on a long-running Rails application where one of the primary pieces of functionality is the ability to export dozens of reports in PDF format. When the application was first written, the PDF generation was handled by PDFKit. This Ruby gem uses wkhtmltopdf under the hood to generate PDFs from HTML.

PDFKit and wkhtmltopdf

Over the years we have encountered many issues with formatting due to wkhtmltopdf using the WebKit engine to render the HTML for conversion. Over time this led to several workarounds to allow the generated PDF to match the HTML provided. As we went through a total redesign of the UI and modernized our CSS, wkhtmltopdf required more and more special treatment to allow the rendering to somewhat match what was intended. It also meant that newer CSS features, such as flexbox, were unsupported. The wkhtmltopdf upgrade process was also slightly painful, as we were using a version with a patched QT that we would need to track down and replace with every upgrade.

Grover and Puppeteer

With all of the considerations above, and after developing a visualization that made heavy use of flexbox for its layout and realizing that the PDF output did not appear anything like the HTML provided, we went searching for a replacement. After another team member (shout out to Sam Ehlers) put together a proof of concept using Chrome Headless along with the --print-to-pdf functionality, it seemed like a viable option for generating PDFs of our reports. That proof of concept also came with the realization that we would need to come up with a way to present the HTML so that we could provide both landscape and portrait orientations. It also meant updating our CSS that was targeting the print media type, as that was what Chrome Headless targets when rendering the export.

We went looking for options that would make the process easier and finally found Grover. Grover is a combination of a Ruby gem, which you can call directly or use as middleware in your Rails application, and the puppeteer npm package used to control Chrome Headless. Since the application already had the code infrastructure built in to handle PDF generation with PDFKit, we opted not to use Grover as middleware, but to call it directly, replacing the PDFKit calls that already existed. This worked very well for the most part, keeping in mind that Chromium will try to convert any relative path into a full path, so there needed to be logic for our local development environment and production to convert any links into their fully qualified counterparts. This was not an issue with the content of the reports themselves, as they contained no links.

However, in the GCP environment with IAP turned on, the PDFs were missing the styling and some of the content that was rendered to the page through JavaScript. It turns out that when rendering the HTML, Chrome Headless was trying to follow the links to the CSS and JavaScript, and since it was not authenticated through IAP, that content was blocked. The solution for this was to write a helper for the report export views that read the compiled CSS and JavaScript, converted them to a Base64-encoded string, and then embedded the links on the page like this:

tag(:link, rel: "stylesheet", href: "data:text/css;base64,#{base64_data}")

This allowed all of the data to be available to Chrome Headless without the need to configure any IAP access.
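The post describes the helper and the direct Grover call but does not show them in full, so here is a minimal sketch of what they might look like. The helper name, asset path, template name, and Grover options are my own placeholder assumptions, not the original code.

require "base64"

# app/helpers/report_export_helper.rb (hypothetical helper name)
module ReportExportHelper
  # Inline the compiled stylesheet as a Base64 data URI so Chrome Headless
  # never has to fetch a URL that IAP would block. The same approach works
  # for the compiled JavaScript.
  def inline_stylesheet_tag(asset_path)
    base64_data = Base64.strict_encode64(File.read(asset_path))
    tag(:link, rel: "stylesheet", href: "data:text/css;base64,#{base64_data}")
  end
end

# Calling Grover directly from a controller action instead of using it as middleware:
html = render_to_string(template: "reports/show", layout: "pdf")
pdf  = Grover.new(html, format: "A4", wait_until: "networkidle0").to_pdf
send_data pdf, filename: "report.pdf", type: "application/pdf"

Because the CSS travels inside the HTML document itself, the headless browser needs no IAP exception to render a fully styled page.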
The configuration for Grover ended up being very similar to the example configuration provided, with the exception of using wait_until: 'networkidle0' instead of wait_until: 'domcontentloaded' to account for some of the JavaScript content taking a little longer to render.

End Results

The transition to this new approach brought about a welcome transformation in our workflow. By making the switch, we were able to shed a substantial portion of the convoluted conditional HTML and CSS formatting we had depended on when dealing with wkhtmltopdf. We could now develop increasingly intricate and sophisticated layouts for future reports, secure in the knowledge that the resulting PDF exports would match our expectations.

The post Replacing PDFKit with Grover for Rails PDF Generation appeared first on Simple Thread.


Onboarding users in ASP.NET Core using Azure AD Temporary Access Pass and Microsoft Graph

The article looks at onboarding different Azure AD users with a temporary access pass (TAP) and some type of passwordless authentication. An ASP.NET Core application is used to create the Azure AD member users, which can then use a TAP to set up their accounts. This is a great way to onboard users in your tenant.

Code: https://github.com/damienbod/AzureAdTapOnboarding

The ASP.NET Core application needs to onboard different types of Azure AD users. Some users cannot use passwordless authentication (yet), so a password setup is also required for these users. TAP only works with members, and we also need to support guest users with some alternative onboarding flow. The following types of user flows are supported or possible:

- AAD member user flow with TAP and FIDO2 authentication
- AAD member user flow with password using email/password authentication
- AAD member user flow with password setup and a phone authentication
- AAD guest user flow with federated login
- AAD guest user flow with Microsoft account
- AAD guest user flow with email code

FIDO2 should be used for all enterprise employees with an office account in the enterprise. If this is not possible, then at least the IT administrators should be forced to use FIDO2 authentication, and companies should be planning a strategy for moving to phishing-resistant authentication. This could be enforced with PIM and a continuous access policy for administration jobs. With FIDO2, the identities are protected with phishing-resistant authentication. This should be a requirement for any professional solution. Azure AD users with no computer can use an email code or SMS authentication. This is low-security authentication, and applications should not expose sensitive information to these user types.

Setup

The ASP.NET Core application uses the Microsoft.Identity.Web and Microsoft.Identity.Web.MicrosoftGraphBeta Nuget packages to implement the Azure AD clients. The ASP.NET Core client is a server-rendered application and uses an Azure App registration which requires a secret or a certificate to acquire access tokens. The onboarding application uses Microsoft Graph application permissions to create the users and initialize the temporary access pass (TAP) flow. The following application permissions are used:

- User.EnableDisableAccount.All
- User.ReadWrite.All
- UserAuthenticationMethod.ReadWrite.All

The permissions are added to a separate Azure App registration and require a secret to use. In a second phase, I will look at implementing the Graph API access using Microsoft Graph delegated permissions. It is also possible to use a service managed identity to acquire a Graph access token with the required permissions.

Onboarding members using passwordless

When onboarding a new Azure AD user with passwordless and TAP, this needs to be implemented in two steps. Firstly, a new Microsoft Graph user is created with the type member. This takes an unknown length of time to complete on Azure AD. When this is finished, a new TAP authentication method is created. I used the Polly Nuget package to retry this until the TAP request succeeds. Once successful, the temporary access pass is displayed in the UI. If this was a new employee or something like this, you could print this out and let the user complete the process.
private async Task CreateMember(UserModel userData)
{
    var createdUser = await _aadGraphSdkManagedIdentityAppClient
        .CreateGraphMemberUserAsync(userData);

    if (createdUser!.Id != null)
    {
        if (userData.UsePasswordless)
        {
            var maxRetryAttempts = 7;
            var pauseBetweenFailures = TimeSpan.FromSeconds(3);
            var retryPolicy = Policy
                .Handle<HttpRequestException>()
                .WaitAndRetryAsync(maxRetryAttempts, i => pauseBetweenFailures);

            await retryPolicy.ExecuteAsync(async () =>
            {
                var tap = await _aadGraphSdkManagedIdentityAppClient
                    .AddTapForUserAsync(createdUser.Id);

                AccessInfo = new CreatedAccessModel
                {
                    Email = createdUser.Email,
                    TemporaryAccessPass = tap!.TemporaryAccessPass
                };
            });
        }
        else
        {
            AccessInfo = new CreatedAccessModel
            {
                Email = createdUser.Email,
                Password = createdUser.Password
            };
        }
    }
}

The CreateGraphMemberUserAsync method creates a new Microsoft Graph user. To use a temporary access pass, a member user must be used. Guest users cannot be onboarded like this. Even though we do not use a password in this process, the Microsoft Graph user validation forces us to create one. We just create a random password and do not return it; this password will not be updated.

public async Task<CreatedUserModel> CreateGraphMemberUserAsync(UserModel userModel)
{
    if (!userModel.Email.ToLower().EndsWith(_aadIssuerDomain.ToLower()))
    {
        throw new ArgumentException("A guest user must be invited!");
    }

    var graphServiceClient = _graphService
        .GetGraphClientWithManagedIdentityOrDevClient();

    var password = GetRandomString();
    var user = new User
    {
        DisplayName = userModel.UserName,
        Surname = userModel.LastName,
        GivenName = userModel.FirstName,
        OtherMails = new List<string> { userModel.Email },
        UserType = "member",
        AccountEnabled = true,
        UserPrincipalName = userModel.Email,
        MailNickname = userModel.UserName,
        PasswordProfile = new PasswordProfile
        {
            Password = password,
            // We use TAP if a passwordless onboarding is used
            ForceChangePasswordNextSignIn = !userModel.UsePasswordless
        },
        PasswordPolicies = "DisablePasswordExpiration"
    };

    var createdUser = await graphServiceClient.Users
        .Request()
        .AddAsync(user);

    return new CreatedUserModel
    {
        Email = createdUser.UserPrincipalName,
        Id = createdUser.Id,
        Password = password
    };
}

The TemporaryAccessPassAuthenticationMethod object is created using Microsoft Graph. We create a use-once TAP. The access code is returned and displayed in the UI.

public async Task<TemporaryAccessPassAuthenticationMethod?> AddTapForUserAsync(string userId)
{
    var graphServiceClient = _graphService
        .GetGraphClientWithManagedIdentityOrDevClient();

    var tempAccessPassAuthMethod = new TemporaryAccessPassAuthenticationMethod
    {
        //StartDateTime = DateTimeOffset.Now,
        LifetimeInMinutes = 60,
        IsUsableOnce = true,
    };

    var result = await graphServiceClient.Users[userId]
        .Authentication
        .TemporaryAccessPassMethods
        .Request()
        .AddAsync(tempAccessPassAuthMethod);

    return result;
}

The https://aka.ms/mysecurityinfo link can be used to complete the flow. The new user can click this link and enter the email and the access code. Now that the user is authenticated, he or she can add a passwordless authentication method. I use an external FIDO2 key. Once set up, the user can register and authenticate. You should use at least two security keys. This is an awesome way of onboarding users which allows them to authenticate in a phishing-resistant way without requiring or using a password. FIDO2 is the recommended and best way of authenticating users, and with the rollout of passkeys, this will become more user friendly as well.
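The _graphService.GetGraphClientWithManagedIdentityOrDevClient() factory used in the methods above is not shown in the post. A minimal sketch of what such a factory might look like, assuming Azure.Identity with a client-secret fallback for local development; the configuration keys are placeholders, not from the original code.

using Azure.Identity;
using Microsoft.Extensions.Configuration;
using Microsoft.Graph;

public class GraphService
{
    private readonly IConfiguration _configuration;

    public GraphService(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    public GraphServiceClient GetGraphClientWithManagedIdentityOrDevClient()
    {
        string[] scopes = { "https://graph.microsoft.com/.default" };

        // Use the managed identity when running in Azure, and fall back to a
        // client secret (for example from user secrets) during local development.
        var credential = new ChainedTokenCredential(
            new ManagedIdentityCredential(),
            new ClientSecretCredential(
                _configuration["AzureAd:TenantId"],       // placeholder configuration key
                _configuration["AzureAd:ClientId"],       // placeholder configuration key
                _configuration["AzureAd:ClientSecret"])); // placeholder configuration key

        return new GraphServiceClient(credential, scopes);
    }
}

With this shape, the same client works in Azure with the managed identity and on a developer machine with a secret, which matches the "ManagedIdentityOrDev" naming used in the post.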
Onboarding members using password

Due to the fact that some companies still use legacy authentication, or because we would like to support users with no computer, we also need to onboard users with passwords. When using passwords, the user needs to update the password on first use. The user should add an MFA method, if not forced by the tenant. Some employees might not have a computer and would like to use a phone to authenticate. An SMS code would be a good way of achieving this. This is of course not very secure, so you should expect these accounts to get lost or breached, and sensitive data should be avoided in applications used by these accounts. The device code flow could be used on a shared PC together with the user's mobile phone. Starting an authentication flow from a QR code is insecure, as it is not safe against phishing, but since SMS is used for these types of users, the bar is already low. Again, sensitive data must be avoided for applications accepting these low-security accounts. It's all about balance; maybe someday soon, all users will have FIDO2 security keys or passkeys to use and we can avoid these sorts of solutions.

Onboarding guest users (invitations)

Guest users cannot be onboarded by creating a Microsoft Graph user. You need to send an invitation to the guest user for your tenant. Microsoft Graph provides an API for this. There are different types of guest users, depending on the account type and the authentication method type. The invitation returns an invite redeem URL which can be used to set up the account. This URL is mailed to the email used in the invite and does not need to be displayed in the UI.

private async Task InviteGuest(UserModel userData)
{
    var invitedGuestUser = await _aadGraphSdkManagedIdentityAppClient
        .InviteGuestUser(userData, _inviteUrl);

    if (invitedGuestUser!.Id != null)
    {
        AccessInfo = new CreatedAccessModel
        {
            Email = invitedGuestUser.InvitedUserEmailAddress,
            InviteRedeemUrl = invitedGuestUser.InviteRedeemUrl
        };
    }
}

The InviteGuestUser method is used to create the invite object, and this is sent as an HTTP POST request to the Microsoft Graph API.

public async Task<Invitation?> InviteGuestUser(UserModel userModel, string redirectUrl)
{
    if (userModel.Email.ToLower().EndsWith(_aadIssuerDomain.ToLower()))
    {
        throw new ArgumentException("user must be from a different domain!");
    }

    var graphServiceClient = _graphService
        .GetGraphClientWithManagedIdentityOrDevClient();

    var invitation = new Invitation
    {
        InvitedUserEmailAddress = userModel.Email,
        SendInvitationMessage = true,
        InvitedUserDisplayName = $"{userModel.FirstName} {userModel.LastName}",
        InviteRedirectUrl = redirectUrl,
        InvitedUserType = "guest"
    };

    var invite = await graphServiceClient.Invitations
        .Request()
        .AddAsync(invitation);

    return invite;
}

Notes

Onboarding users with Microsoft Graph can be complicated because you need to know which parameters to use and how the users need to be created. Azure AD members can be created using the Microsoft Graph user APIs; guest users are created using the Microsoft Graph invitation APIs. Onboarding users with TAP and FIDO2 is a great way of implementing this workflow. As of today, this is still part of the beta release.
Links

https://entra.microsoft.com/
https://learn.microsoft.com/en-us/azure/active-directory/authentication/howto-authentication-temporary-access-pass
https://learn.microsoft.com/en-us/graph/api/authentication-post-temporaryaccesspassmethods?view=graph-rest-1.0&tabs=csharp
https://learn.microsoft.com/en-us/graph/authenticationmethods-get-started
https://learn.microsoft.com/en-us/azure/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises
Create Azure B2C users with Microsoft Graph and ASP.NET Core
Onboarding new users in an ASP.NET Core application using Azure B2C
Disable Azure AD user account using Microsoft Graph and an application client
Invite external users to Azure AD using Microsoft Graph and ASP.NET Core
https://learn.microsoft.com/en-us/azure/active-directory/external-identities/external-identities-overview
https://learn.microsoft.com/en-us/azure/active-directory/external-identities/b2b-quickstart-add-guest-users-portal


Google like a pro
Category: Technology

As a software developer, it is important to know how to find good information quickly. ...


Views: 311 Likes: 101
FooBar is FooBad
FooBar is FooBad

FooBar is FooBad

FooBar is a metasyntactic variable. A "specific word or set of words identified as a placeholder in computer science", per Wikipedia. It's the most abstract stand-in imaginable, the formless platonic ideal of a Programming Thing. It can morph into a variable, method or class with the barest change of capitalization and spacing. Like "widget", it's a catch-all generic term that lets you ignore the specifics and focus on the process. And it's overused.

Concrete > Abstract

Human brains were built to deal with real things. We can deal with unreal things, but it takes a little bit of brainpower. And when learning a new language or tool, brainpower is in scarce supply. Too often, `FooBar` is used in tutorials when almost anything else would be better. Say I'd like to teach Python inheritance to a new learner.

# Inheritance
class Foo:
    def baz(self):
        print("FooBaz!")

class Bar(Foo):
    def baz(self):
        print("BarBaz!")

A novice learner will have no idea what the above code is doing. Is it `Bar` inheriting from `Foo` or vice versa? If it seems obvious to you, that's because you already understand the code! It makes sense because we already know how it works. Classic curse of knowledge. Why force learners to keep track of whether Foo comes before Bar instead of focusing on the actual lesson? Compare that to this example using concrete, real-world, non-abstract placeholders:

# Inheritance
class Animal:
    def speak(self):
        print("")

class Dog(Animal):
    def speak(self):
        print("Bark!")

This is trite and reductive. But it works. It's immediately clear which way the inheritance runs. Your brain leverages its considerable real-world knowledge to provide context instead of mentally juggling meaningless placeholder words. As a bonus, you effortlessly see that the Dog class is a noun/thing and the speak() method is a verb/action.

Concrete Is Better for Memory

Even if a learner parses your tutorial, will they remember it? The brain remembers concrete words better than abstract ones. Imagine a cherry pie, hot and steaming, with a scoop of ice cream melting down the side. Can you see it? Now try to imagine a "Foo"… Can you see it? Yeah, me neither. Concrete examples are also more unique. AnimalDog is more salient than FooBar in the same way "John is a baker" is easier to remember than that someone's name is "John Baker". It's called the Baker-Baker Effect. Your brain is full of empty interchangeable labels like Foo, Bar, John Smith. But something with relationships, with dynamics and semantic meaning? That stands out.

Concrete Is Extensible

Let's add more examples to our tutorial. Sticking to Foo, I suppose I could dig into the Metasyntactic variable Wikipedia page and use foobar, foo, bar, baz, qux, quux, corge, grault, garply, waldo, fred, plugh, xyzzy and thud.

# Inheritance
class Foo:
    def qux(self):
        print("FooQux!")

class Bar(Foo):
    def qux(self):
        print("BarQux!")

class Baz(Foo):
    def qux(self):
        print("BazQux!")

But by then, we've strayed from 'beginner demo' to 'occult lore'. And the code is harder to understand than before! Using a concrete example on the other hand…

# Inheritance
class Animal:
    def speak(self):
        print("")

class Dog(Animal):
    def speak(self):
        print("Bark!")

class Cat(Animal):
    def speak(self):
        print("Meow!")

Extension is easy and the lesson is reinforced rather than muddied.

Exercise for the reader

See if you can rewrite these Python examples on multiple inheritance in a non-foobar'd way.

Better Than Foo

Fortunately, there are alternatives out there. The classic intro Animal, or Vehicle and their attending subclasses.
Or might I suggest using Python's convention of spam, eggs, and hams? A five-year-old could intuit what eggs = 3 means. There's also cryptography's Alice and Bob and co. Not only are they people (concrete), but there's an ordinal mapping in the alphabetization of their names. As an added bonus, the name/role alliteration aids in recall. (Mallory is a malicious attacker. Trudy is an intruder.)

New Proposal: Pies

Personally, I think Pies make excellent example variables. They're concrete, have categories (Sweet, Savory), subtypes (Fruit, Berry, Meat, Cream) and edge cases (Pizza Pies, Mud Pies).

# Pies
fruit = ['cherry', 'apple', 'fig', 'jam']
meat = ['pork', 'ham', 'chicken', 'shepherd']
nut = ['pecan', 'walnut']
pizza = ['cheese', 'pepperoni', 'hawaiian']
other = ['mud']

They also come baked-in with a variety of easy-to-grasp methods and attributes like slice(), bake(), bake_time or price. All of which can be implicitly understood. Though if pies aren't your thing, there's a whole world of concrete things to choose from. Maybe breads?

['bun', 'roll', 'bagel', 'scone', 'muffin', 'pita', 'naan']

Conclusion

I'm not holding my breath for foobar to be abolished. It is short, easy, abstract, and (most importantly) established. Mentally mapping concrete concepts is hard. Analogies are tricky and full of false assumptions. Maps are not the territory. You're trying to collapse life in all its complexity into something recognizable but not overly reductive or inaccurate. But the solution is not to confuse abstractness with clarity. For tutorials, extended docs and beginner audiences, skip foobar. Use concrete concepts instead, preferably something distinct that can be mapped onto the problem space. And if it gives implicit hierarchy, relationships, or noun/verb hinting, so much the better. Use FooBar when you're trying to focus on the pure abstract case without extra assumptions cluttering the syntax. Use it in your console, debuggers, and when you're talking to experienced programmers. But for anything longer than a brief snippet, avoid it. The post FooBar is FooBad appeared first on Simple Thread.


Short Cut for Creating Constructor in C-Sharp
Category: C-Sharp

It is very helpful when developing software to know the shortcut to implement code snippet. For exam ...


Views: 304 Likes: 86
Lead Software Engineer
Category: Jobs

LawnStarter is a marketplace that makes lawn care easy for homeowners while helping small busines ...


Views: 0 Likes: 34
WLED and BTF-Light Strip and ESP 32 Configuration
Category: Home

1. Make sure that you have flashed wled software on ESP 32 using install.wled.me&nbsp; &nbs ...


Views: 0 Likes: 30
How to solve problems
Category: Software Development

Instead of asking what problems should I solve. Ask, what problems do I wish someone else would s ...


Views: 303 Likes: 121
How to Optimize Software performance
Category: Computer Programming

Software performance is very important, early 201 ...


Views: 0 Likes: 31
Is Ark Cross Platform on PC, PS5, PS4 & Xbox? (2023)
Is Ark Cross Platform on PC, PS5, PS4 & Xbox? (202 ...

This post may contain paid links to my personal recommendations that help to support the site! Ever wanted to team up with your friends on different systems to battle the dinosaurs of Ark since its launch in 2017? Well, in 2023, it might be possible! The question you're most likely asking is: is Ark cross-platform? In this article, I'll answer this plus all your other related questions about crossplay on Ark! Let's find out!

Is Ark Cross Platform? Yes, Ark Survival Evolved is cross-platform. However, the cross-platform capabilities for Ark are not that straightforward. The cross-play compatibility is dependent on the platform Ark is running on. For example, Ark is cross-platform compatible between Windows PC players and Xbox One users. The cross-platform feature is also available for play among Android and iOS users. However, when it comes to PlayStation platforms (PS5 & PS4), you won't be able to join servers with your friends from other systems. Here's a neat chart for your quick reference!

Is Ark Survival Evolved Cross Platform for Xbox and PS4? No, Ark Survival Evolved is not cross-platform for Xbox and PS4. Native cross-platform is not currently supported for Xbox and PS4 or PS5. There have been no official announcements for plans to release crossplay for these two platforms. However, Xbox players will still be able to play with friends on multiplayer with other Xbox devices. Ark players on PlayStation 4 will be able to play with players on the PlayStation Network, which means PlayStation 5 players can also join in.

Is Ark Survival Evolved Cross Platform for Xbox and Epic Games? No, Ark Survival Evolved does not support cross-platform for Xbox and Ark on the Epic Games Store. The two platforms have no official compatibility or crossover ability, meaning you won't be able to join your friends on Epic Games from an Xbox console for a game of Ark. This also applies to PlayStation users; they cannot join their friends from the Epic Games platform or Steam platform.

Is Ark Survival Evolved Cross Platform for Steam and Epic Games? Yes, Ark Survival Evolved is cross-platform for Steam and Epic Games. Players on both platforms can join one another in the same server through their respective interfaces, meaning they will be able to enjoy Ark with friends from all over the world! If you're looking to team up with your friends while playing Ark, I recommend getting a dedicated ARK gamer server to hang out at!

Related Questions

Can PS4 and Xbox play Ark together? No, Ark is not cross-platform compatible between PS4 and Xbox. However, you can still join your friends on a multiplayer server if they are both on the same platform. Ark also supports cross-gen multiplayer, which allows PS4 players to play with PS5 players.

Can PC and PS4 play Ark together? No, PC and PS4 players cannot play Ark together as the two platforms are not cross-platform compatible.

Can Android users and iOS users play Ark together? Yes, Android and iOS users can play Ark together as the game is cross-platform compatible between the two platforms.

Can Nintendo Switch and PC users play Ark together? No, Nintendo Switch and PC players cannot play Ark together as the two platforms are not cross-platform compatible.

Will Ark ever be fully cross-platform? Ark is unlikely to ever be fully cross-platform. Although there is cross-platform support between mobile devices, and between PC and Xbox users, the developers at Studio Wildcard have not made any official announcements regarding plans for full cross-platform support in the future.
We know that as a current trend with games, more developers are looking to include cross-platform support as time goes on, though, so keep an eye out for updates! Also, another thing to mention is that with the upcoming launch of ARK 2, it might be possible for developers to be working on a more integrated gaming experience. There might be a chance for ARK to be fully cross-platform for this new game. How do I make my Ark server cross-platform? In order to make your Ark server cross-platform, you’ll need to rent a dedicated server from a game server hosting provider. From there, you’ll be able to set up the server settings and enable cross-play for players on different platforms (although limited to the options I mentioned above). Final Thoughts While Ark Survival Evolved doesn’t support cross-platform play for all platforms, there are still several platforms that do. For those wanting the full experience of playing with their friends from all platforms, rent a dedicated server or look into other cross-platform games to have fun with your friends. Whatever you decide to do, we hope you have fun playing Ark and enjoy the game! I hope this article has helped clarify your questions about cross-platform play for Ark. The post Is Ark Cross Platform on PC, PS5, PS4 & Xbox? (2023) appeared first on Any Instructor.


7 Best Data Analyst Tools To Use in 2023 (Free & Paid!)
7 Best Data Analyst Tools To Use in 2023 (Free & P ...

This post may contain paid links to my personal recommendations that help to support the site! Are you seeking the best data analytics tools to gain insightful business intelligence? If you answer yes, then I’ve just the right list for you! This blog post will provide an in-depth list of the most popular and effective data analyst tools available, as well as an overview of each of them. We will look at both free and paid options – so no matter what size organization or budget you have – there is something here for everyone. Let’s dive right in! What Are The Best Data Analyst Tools? Here is our list of the 7 best data analyst tools 1. Tableau Tableau is one of the most popular and powerful data analysis tools available. It allows users to explore, visualize, and interact with their data intuitively. With its drag-and-drop user interface, you don’t need prior programming knowledge or specialized skills to create stunning data visualization. Tableau helps organizations in various industries uncover insights from their data that can be used to make better business decisions. It provides a range of features, including dashboard creation, advanced analytics, predictive analytics, forecasting tools, ETL integrations, and social media integrations. Tableau is also designed to be scalable to meet the needs of any business size. Whether you’re an individual or a large enterprise, Tableau can adapt to most of the data analysis needs. I’ve had the chance to work on Tableau to build a data warehouse and integrate it with a data integration platform and found it a must-learn for beginner data analysts! 2. Microsoft Power BI Microsoft Power BI is another popular data analytic tool used by businesses worldwide. It provides advanced analytics capabilities that allow users to produce impressive visualizations from big data sources. It also has a drag-and-drop user interface, where users can easily transform vast amounts of raw data into visuals such as charts, graphs, and dashboards. Additionally, Microsoft Power BI provides a range of features, including predictive analytics, segmentation analysis, artificial intelligence (AI) powered insights, and M language. With its comprehensive feature set and scalable platform, Microsoft Power BI is an ideal choice for those looking to gain valuable business intelligence from their data. Additionally, the cloud-based architecture allows users to access the latest real-time updates and share them with their team members or clients, regardless of location. This is good for companies that use a Microsoft ecosystem of applications. 3. Microsoft Excel Microsoft Excel is the most simple and popular data analysis tool available. It is packed with powerful features to help data analysts work with data for a quick analysis. It offers a range of features that allow you to manipulate, visualize, and analyze data quickly and easily. You can also use Excel for tasks such as creating pivot tables or performing calculations on big datasets. This makes Excel perfect for data modeling. With its easy-to-use graphical user interface, you can create sophisticated reports in just a few clicks. Additionally, Excel has built-in macros that allow users to automate some of the common tasks associated with analyzing data. Excel also makes it simple to share information with other users by providing options for exporting and importing files from various formats (including CSV, HTML, and XML). This means that any data analysis done in Excel can be easily shared and distributed. 
One of the biggest advantages to using Microsoft Excel is that it’s relatively inexpensive; compared to other more powerful data analysis tools, Excel is a very cost-effective choice. It also has a large user base, so plenty of resources are available if you need help with your analysis tasks. Lastly, Excel supports multiple versions of Windows operating systems, making it easier for users to access their data no matter which version they’re using. 4. Jupyter Notebook Jupyter Notebook is one of the most popular and powerful open-source data analytics tools. It provides an interactive environment to write and execute code and visualize data outputs. With Jupyter Notebook, you can quickly gain insights from your data through code and graphical representations. The platform integrates many popular programming languages, such as Python, R, and Julia, making it easy for users to explore their data differently. One of the major advantages of using Jupyter Notebook is that you can quickly test and modify your code without having to restart the session every time. This makes it easier for users to find errors and tweak their programs accordingly. Furthermore, Jupyter Notebook is extremely flexible and customizable – allowing users to customize styles, plot outputs, and even add interactive widgets. It also provides a secure environment with multi-user access control and an inbuilt web server. These features make Jupyter Notebook a great data analytics tool for experienced professionals and newbies. 5. Apache Spark Apache Spark is a powerful open-source data analytics tool that allows users to process large datasets quickly and efficiently. It offers an intuitive interface that enables users to easily load, query, and manipulate data at scale. With Apache Spark, you can easily perform complex calculations on the data – making it ideal for machine learning and predictive analytics applications. The platform is designed to be fault-tolerant, meaning that if any node in the system goes down, the job will still get done without any disruption of service. Apache Spark can also be used for real-time streaming analysis by leveraging its built-in streaming engine. One of the major advantages of using Apache Spark is its speed – it’s capable of processing large datasets faster than Hadoop MapReduce. Another great thing about Apache Spark is its scalability, allowing users to easily add new nodes to the cluster and scale out their applications as needed. Last but not least, Apache Spark also comes with an extensive library of tools and APIs that can be used to integrate other frameworks into your applications as needed. Overall, Apache Spark is one of the best data analytics tools available today – offering powerful features and an intuitive interface that makes it easy for users to gain insights from their data quickly and accurately. 6. SAS Business Intelligence SAS Business Intelligence is a powerful suite of data analytics and business intelligence tools designed to help organizations better understand their data. The platform offers extensive features and capabilities, including data integration, reporting, forecasting, modeling, and visualization The software’s drag-and-drop interface makes it easy for users to access their data sources and create insightful reports with just a few clicks. Additionally, SAS Business Intelligence supports multidimensional analysis – enabling users to quickly identify trends and correlations in large datasets. 
It also comes with advanced forecasting capabilities that allow users to develop more accurate predictions based on historical data. Furthermore, the platform provides real-time analysis capabilities, enabling users to make informed decisions faster than ever. SAS Business Intelligence also features a comprehensive library of pre-built models and algorithms that can be used to quickly create accurate predictive models from data. Overall, SAS Business Intelligence is an incredibly powerful suite of tools that enables organizations to make better decisions by gaining deeper insights into their data.

7. Python

Python is a powerful programming language with many open-source libraries for data science and analysis. Being one of the most used languages for data science, it is also a popular option among data analysts and data scientists! It provides an efficient data structure in data frames through Pandas, making it easy to analyze, manipulate, and visualize data. With its rich features, Python can be used as a standalone tool or as part of larger analytics pipelines that connect to data integration tools. Pandas offers unique capabilities, such as indexing and labeling, which help users select specific subsets of their datasets quickly and accurately. Furthermore, it supports merging, joining, concatenation, and aggregation – allowing users to easily combine different datasets into a unified view. The Pandas library in Python also features an extensive range of built-in functions such as groupby(), pivot_table(), and melt() that can be used to execute complex analytics tasks with minimal coding. With these functions, users can easily create powerful data visualizations such as bar graphs, pie charts, and scatter plots – providing valuable insights into their data. Another great feature of Python is its support for various data formats like CSV, JSON, and Excel spreadsheets. This lets users quickly import any data source into the library without converting it first – making the analysis process much smoother than other solutions. Finally, the language also provides a rich set of tools for developers looking to build custom applications with advanced functionality. Python Pandas has something for everyone, from web scraping to data preparation, statistical analysis, and machine learning pipelines.

Related Questions

What Are Data Analyst Tools? Data analyst tools are software programs used by data analysts, designed to gather, store, and present data to gain insight. They are used by businesses of all sizes to gain better insights into customer behavior and target audiences. These tools help organizations make informed decisions about their operations, products, and services.

What type of data do I need to analyze? Data analysis generally involves looking at different data types, such as numerical data, website traffic, financial data, and customer information. You'll need to understand how the data is structured and what insights you are trying to uncover to optimize your analysis. If you're unsure what kind of data you need to analyze, plenty of online tutorials and resources are available to help guide you through the process.

Are data analyst tools difficult to use? Not at all! Many of the most popular analytical tools are designed with user-friendly interfaces that allow anyone to gain insights quickly and easily. Plus, many come with intuitive tutorials and resources to help you get to grips with the software more quickly. With a little practice, you can analyze data like a pro quickly.
However, more advanced data analytics tools like Python and Spark have a steeper learning curve. Why are data analysis tools important? Data analysis tools are essential for uncovering valuable insights from your data. Using the right tool lets you quickly identify trends and patterns that may have gone unnoticed, giving your business a competitive edge. Plus, with the right tool, you can make informed decisions more quickly and accurately – helping you achieve success faster. Are data analytics tools secure? Most data analytics tools are secure and encrypted to protect your data and privacy. Additionally, many of them come with built-in authentication systems that allow you to manage user access and control who has access to the information. This ensures that only authorized personnel can access and analyze the data, providing a secure and reliable way to gain insights. However, using only open-source tools would make your data less secure than proprietary tools storing data on the cloud. Wrapping Up I hope this article has given you an overview of some of the most popular and effective data analyst tools available and answered any questions. The post 7 Best Data Analyst Tools To Use in 2023 (Free & Paid!) appeared first on Any Instructor.


Is AI going to take Software Development Jobs?
Category: Research

Artificial Intelligence (AI) is becoming increasingly prevalent in the software development indu ...


Views: 0 Likes: 32
docker: Cannot connect to the Docker daemon at uni ...
Category: Docker

Question How do you resolve "docker Cannot connect to the Docker daemon at unix///var/run/doc ...


Views: 380 Likes: 108
Asp.Net 5 Development Notes (DotNet Core 3.1 Study ...
Category: Software Development

Study Notes to use when progra ...


Views: 423 Likes: 61
Embracing λόγος: Programming as Imitation of the Divine
Embracing λόγος: Programming as Imitation of the D ...

Within the field of software development, we are prone to gazing upon the future – new libraries, new tools. But from where did we come? The philosophical foundation of the field is largely absent from the contemporary zeitgeist, but our work is deeply rooted in the philosophical traditions of not only Logic, but Ontology, Identity, Ethics and so on. Daily, the programmer struggles with not only their implementation of logic but the ontological and identity questions of classifying and organizing their reality into a logical program. What is a User? What are its properties? What actions can be taken on it? "Oh the mundanity!" – cries the programmer. But in-deed, as we will explore here – you are doing God's work! Because the work of programmers is not too dissimilar from that of philosophers throughout history, we can look to them for guidance on the larger questions of our own tradition. In this piece, we will focus mainly on the ancient Greeks and their metaphysical works. Guided by their knowledge, we can better incorporate Reason and Logic into our programs and strive to escape Plato's Cave (https://en.wikipedia.org/wiki/Allegory_of_the_cave). Furthermore, because the result of our work is our reason manifested into reality, we must suffer under the greater burden of responsibility to aim towards the divine Reason.

λόγος

[T]he spermatikos logos in each man provides a common, non-confessional basis in each man, whether as a natural or supernatural gift from God (or both), by which he is called to participate in God's Reason or [λόγος], from which he obtains a dignity over the brute creation, and out of which he discovers and obtains normative judgments of right and wrong (https://lexchristianorum.blogspot.com/2010/03/st-justin-martyr-spermatikos-logos-and.html)

The English word logic is rooted in the Ancient Greek λόγος (logos) – meaning "word, discourse or reason". λόγος is related to the Ancient Greek λέγω (légo) – meaning "I say", a cognate with the Latin legus or "law". Going even further back, λόγος derives from the PIE root *leǵ- which can have the meanings "I put in order, arrange, gather, I choose, count, reckon, I say, speak". (https://en.wikipedia.org/wiki/Logos)

The concept of the λόγος has been studied and applied philosophically throughout history – going back to Heraclitus around 500 BC. Heraclitus described the λόγος as the common Reason of the world and urged people to strive to know and follow it. "For this reason it is necessary to follow what is common. But although the λόγος is common, most people live as if they had their own private understanding." (Diels–Kranz, 22B2)

With Aristotelian, Platonic and early Stoic thought, the λόγος as universal and objective Reason and Logic was further considered and defined. λόγος was seen by the Stoics as an active, material phenomenon driving nature and animating the universe. The λόγος σπερματικός ("logos spermatikos") was, according to the Stoics, the principle, generative Reason acting in inanimate matter in the universe. Plutarch, a Platonist, wrote that the λόγος was the "go-between" between God and humanity. The Stoics believed that humans each possess a part of the divine λόγος. The λόγος was also a fundamental philosophical foundation for early Christian thought (see John 1:1-3). The λόγος is impossible to concisely summarize.
But for the purpose of this piece, we can consider it the metaphysical (real but immaterial) universal Reason; an infinite source of Logic and Truth into which humans tap when they reason about the world.

Imitation of the Divine

In so far as the spirit is also a kind of 'window on eternity'… it conveys to the soul a certain influx divinus… and the knowledge of a higher system of the world (Jung, Carl. Mysterium Coniunctionis)

What is "imitation of the divine"? One could certainly begin by considering what the alternative would be. A historical current has existed in the philosophical tradition of humanity's opportunity and responsibility to turn to and harness the divine λόγος in their daily waking life. With language and thought we reason about the material and immaterial. As Rayside and Campbell declared in their defense of traditional logic in the field of Computer Science – "But if what is real and unchanging (the intelligible structure in things) is the measure of what we think about it (concept) and speak (word) about it, then it too is a work of reason not our reason, for our reason is the measured, but of Reason." (Rayside, D, and G Campbell. Aristotle and Object-Oriented Programming: Why Modern Students Need Traditional Logic. https://dl.acm.org/doi/pdf/10.1145/331795.331862.)

Plato, in his theory of the tripartite soul, understood that the ideal human would not suffer passions (θυμοειδές, literally "anger-kind") or desires (ἐπιθυμητικόν) but be led by the λόγος innate in the soul (λογιστικόν). When human reasoning is concordant with Reason, for a moment, Man transcends material reality and is assimilated with the divine (the λόγος). "Hence, so many of the great thinkers who have gone before us posited that the natural way in which the human mind gets to God is in a mediated way — via things themselves, which express God to the extent that they can." (Rayside, Campbell) God here is the representative of the λόγος – humanity can achieve transcendental knowledge by consideration (in the deepest sense of the word) of the things around them.

The Programmer Assimilated

It is simply foolish to pretend that human reason is not concerned with meaning, or that programming is not an application of human reason (Rayside, Campbell)

The programmer must begin by defining things – material or conceptual. "We are unable to reason or communicate effectively if we do not first make the effort to know what each thing is." (Rayside, Campbell) By considering the ontological questions of the things in our world, in order to represent them accurately (and therefore ethically) in our programs, the programmer enters into the philosophical praxis. Next, the programmer adds layers of identity and logic on top of their ontological discovery, continuing in the praxis. But the programmer takes it a step further – the outcome of their investigation is not only their immaterial thought but, in executing the program, the manifestation of their philosophical endeavor into material reality. The program choreographs trillions of elementary charges through a crystalline maze, harnessing the virtually infinite charge of the Earth, incinerating the remains of starlight-fueled ancient beings in order to realize the reasoning of its programmer. Here the affair enters into the realm of Ethics. "The programmer is attempting to solve a practical problem by instructing a computer to act in a particular fashion. This requires moving from the indicative to the imperative: from can or may to should.
For a philosopher in the tradition, this move from the indicative to the imperative is the domain of moral science." (Rayside, Campbell) Any actions taken by the program are the direct ethical responsibility of the programmer. Furthermore, the programmer, as the source of reason and will driving a program, manifesting it into existence, becomes in that instant the λόγος σπερματικός ("logos spermatikos") incarnate. The programmer's reason, tapped into the divine Reason (λόγος), is generated into existence in the Universe and commands reasonable actions of inanimate matter.

Feeble Earthworm

What sort of freak then is man? How novel, how monstrous, how chaotic, how paradoxical, how prodigious! Judge of all things, feeble earthworm, repository of truth, sink of doubt and error, glory and refuse of the universe! (Pascal, B. (1670). Pensées.)

Pascal would be even more perplexed by the paradox of the programmer – in search of Logic and simultaneously materializing their logic; their "repository of truth" a hand emerging from the dirt reaching towards the λόγος. Programmers are equals among the feeble earthworms crawling out of Plato's cave. We enjoy no extraordinary access to Reason and yet bear a greater responsibility as commanders of this technical revolution in which we find ourselves. While the Greeks had an understanding of the weight of their work, their impact was restricted to words. The programmer's work is a true hypostatization or materialization of the programmer's reason. As programmers – as beings of Reason at the terminal of this grand system – we should most assuredly concern ourselves with embracing and modeling ourselves and our work after the divine and eternal λόγος. The post Embracing λόγος: Programming as Imitation of the Divine appeared first on Simple Thread.


Full Stack Software Developer
Category: Jobs

We have an opening for a Full Stack Software Developer. Please send resumes asap for our team to ...


Views: 0 Likes: 76
Books for Programmers Manning.com
Category: Technology

Books for High-End Software DevelopersEarly in November 2018, I spoke with a ver ...


Views: 290 Likes: 107
Software Developer (remote job) at Renalogic
Category: Jobs

Software Developer Compensation <span data-contrast="a ...


Views: 0 Likes: 44
Drupal 8 Bootstrap 4 Not Loading CSS Files
Category: Bootstrap

<span style="font-weight bold; font-size medium; textline underl ...


Views: 298 Likes: 90
Amazon is hiring SDE2s
Category: Jobs

Amazon is hiring SDE2s all around the US, Canada and Mexico!!! (No 3rd parties. Thanks!)Ple ...


Views: 43 Likes: 41
Why Open Source Libraries are the Future of Softwa ...
Category: Computer Programming

We have seen famous Social Networks like Facebook being made using ...


Views: 0 Likes: 30
SQL Developer
Category: Jobs

Would you be interested in the following long-term opportunity? &nbsp; If not int ...


Views: 0 Likes: 73
A Practical Use-Case of Render Functions in Vue
A Practical Use-Case of Render Functions in Vue

What Are Render Functions?

Render functions are what Vue uses under the hood to render to the DOM. All Vue templates are compiled into render functions which return a virtual DOM tree and get mounted to the actual DOM. This template, <div>hello</div>, is compiled to h('div', 'hello'). Vue gives us the option to skip writing templates and instead directly author these render functions. Templates are so much easier to read and write, so why and when would you ever want to use render functions? That's something I've always wondered about until something came up on a project I was working on a few months ago.

Background

The project I'm working on is pretty heavy on tables. We have multiple tables which display different data, so naturally we built a component that takes data and displays it in an HTML table. Our primary table started out simple but grew in complexity. Instead of displaying text in each cell, some cells needed to render out links, icons, buttons, tooltips, and other custom components. We did this by building new components for each of these new types of cells, but all of these new components became difficult to maintain. In addition to this added complexity, our table grew in size. We started out with fewer than 15 columns but it ballooned to more than 50. Even after implementing virtual scrolling, with all 50 columns, the scrolling performance on the table was poor, especially on our client's work machines. This performance drop was because 50+ component instances for each row needed to be mounted as they scrolled into view. It seemed like the clear answer to both of these problems was to reduce the number of components being used, but how? One solution that we turned to was using slots and render functions.

Project Overview

I've built a starter project that takes user data and displays it in an HTML table. You can check it out on Stack Blitz. The BaseTable component takes two props: source, which is the data that we'll want to display, and columns, which is an array of column definitions.

<template>
  <table>
    <thead>
      <tr>
        <th v-for="column in columns" :key="column.sourceKey">
          <HeaderCell :title="column.title" />
        </th>
      </tr>
    </thead>
    <tbody>
      <tr v-for="row in source" :key="row.id">
        <td v-for="column in columns" :key="column.sourceKey">
          <component
            :is="column.component"
            v-bind="column.props(row[column.sourceKey])"
          ></component>
        </td>
      </tr>
    </tbody>
  </table>
</template>

<script setup>
import HeaderCell from './HeaderCell.vue';

defineProps({
  source: {
    type: Array,
    required: true,
  },
  columns: {
    type: Array,
    required: true,
  },
});
</script>

This is what our user object looks like:

{
  id: 1,
  name: 'Lauri Pitman',
  email: 'lpitman0@google.com',
  phoneNumber: '568-246-1591',
  ip_address: '250.99.76.244',
  saved: false,
  avatar: 'https://robohash.org/nequenonfacere.png?size=50x50&set=set1',
  rating: 2,
},

And a column definition:

{
  sourceKey: 'name',
  title: 'Name',
  component: BaseCell,
  props(value) {
    return {
      sourceValue: value,
    };
  },
},

This object indicates what attribute from our data objects to use for each of the columns. It also provides a title which BaseTable uses as the column header, what component to use for the column cell, and a props object that is bound to the indicated component. Our BaseCell component is pretty simple.
It takes a single prop, sourceValue, and renders it directly:

<template>
  <div>
    {{ sourceValue }}
  </div>
</template>

<script setup>
defineProps({
  sourceValue: {
    type: [Array, String, Object, Number, Boolean],
    default: null,
  },
});
</script>

There are two other components being used in our table: LinkCell and IconCell, which are used for the 'email' and 'saved' columns respectively. LinkCell displays a mailto link and IconCell displays an svg icon that we import as a component with the help of vite-svg-loader. We're going to update these columns to use BaseCell with the help of slots and render functions.

Implementing Render Functions

We want to use a render function to generate HTML and insert it into a slot in BaseCell. We'll start by adding a slot to BaseCell and keeping sourceValue as the default slot content. Now that we have a slot, we'll want to add a prop called slotContent that takes a render function. We'll also invoke it and save the return value into a variable called SlotContent. Since render functions return a virtual DOM node, we can insert this into a slot like any other component or tag. BaseCell should look like this:

<template>
  <slot>
    <SlotContent v-if="slotContent" />
    {{ sourceValue }}
  </slot>
</template>

<script setup>
const props = defineProps({
  sourceValue: {
    type: [Array, String, Object, Number, Boolean],
    default: null,
  },
  slotContent: {
    type: Function,
    default: null,
  },
});

const SlotContent = props.slotContent && props.slotContent();
</script>

Now that BaseCell is ready to work with render functions, we can use it in our other columns.

Basic Usage

We'll start with the email column. This column uses LinkCell, which renders a simple mailto link. Instead of using LinkCell in the email column, we can now try using BaseCell. In our props object, instead of sending in a sourceValue prop, we send in a render function in our slotContent prop which renders a link:

props(value) {
  return {
    slotContent: () => h('a', { href: `mailto:${value}` }, value),
  };
},

The first argument indicates the root tag to use, which here is an anchor tag. The second optional argument is an object where you can define props or attributes. In this case we want to set an href with a mailto link. The third argument sets the node's children, which in this case is a string.

Conditional Rendering

Next we'll convert the 'saved' column. In the 'saved' column definition, instead of using IconCell we can change it to use BaseCell and add another render function:

return {
  slotContent: () =>
    h('div', [
      value
        ? [
            h('span', { class: 'visually-hidden' }, 'yes'),
            h(BookmarkAddedIcon, {
              height: '24',
              width: '24',
              'aria-hidden': true,
            }),
          ]
        : h('span', { class: 'visually-hidden' }, 'no'),
    ]),
};

There is a bit more going on here than last time. First is the use of a ternary operator. This operates like v-if, which we're using to display a bookmark icon only if the value of our column is true. In the case that it is true, we also have multiple children under the root node: a bookmark icon and screen-reader-only text. We're also setting various attributes: a visually-hidden class for hiding the screen reader text, plus width, height, and an aria-hidden attribute for the svg. Previously in our IconCell component, all of this was baked in. If we wanted to make any of this dynamic or if it needed to be used differently, we'd need to add more props and more functionality, making it harder to maintain. Using a render function this way helps separate the implementation of our components from our actual component code, making them simpler to maintain.
Loops and Slots

Next, we're going to add a new column to our table – a ratings column. Each of our users has a rating from 1 to 5, and instead of displaying the numeric value, we want to display the equivalent number of stars. We'll start with a basic column definition:

{
  sourceKey: 'rating',
  title: 'Rating',
  component: BaseCell,
  props(value) {
    return {};
  },
},

We can render loops by using the map function:

slotContent: () =>
  h('div', [
    h('span', { class: 'visually-hidden' }, value),
    [...Array(value).keys()].map((star) => {
      return h(StarIcon, {
        key: star,
        height: '24',
        width: '24',
        'aria-hidden': true,
      });
    }),
  ]),

Here, we're simply creating an array with the same length as our user's rating and iterating through that to render the equivalent number of stars. Now we should have a new ratings column that displays a star rating for each user. What if we want to see an average rating that gets displayed in the column header as a tooltip? We can start by setting up HeaderCell similar to BaseCell, resulting in this:

<template>
  <div>
    <span>
      {{ title }}
    </span>
    <slot>
      <SlotContent v-if="slotContent" />
    </slot>
  </div>
</template>

<script setup>
const props = defineProps({
  title: {
    type: String,
    default: '',
  },
  slotContent: {
    type: Function,
    default: null,
  },
});

const SlotContent = props.slotContent && props.slotContent();
</script>

We'll also add a new key, headerProps, to our column definition so we can bind props to our component with v-bind:

<HeaderCell
  v-bind="column.headerProps && column.headerProps(column)"
  :title="column.title"
/>

For our tooltip, we'll install floating-vue, which gives us a Tooltip component we can import in App.vue. This tooltip component contains two slots: a default slot that's used for the trigger, which we'll put a button in, and a slot named popper that displays the tooltip content. In order to pass children into component slots, we'll need to use slot functions.

headerProps() {
  return {
    slotContent: () =>
      h(Tooltip, null, {
        default: () =>
          h('button', [
            h('span', { class: 'visually-hidden' }, 'average rating'),
            h(InfoIcon, {
              height: '18',
              width: '18',
              'aria-hidden': true,
            }),
          ]),
        popper: () =>
          h(
            'span',
            `Average: ${
              source.reduce((acc, m) => acc + m.rating, 0) / source.length
            }`
          ),
      }),
  };
},

Now we should have a button in the ratings header that opens up a tooltip that displays the average rating across our users. Without using render functions, we would have either needed to add that functionality to BaseCell or create another component that would just serve as a wrapper for Tooltip. Additionally, since Tooltip takes slots, we'd be constrained with what we can put in them. This is how our completed table should look. You can see the final result of this here.

Closing

Using render functions helps simplify our application code and adds more flexibility when rendering out data with dynamic logic. I hope this served as an informative introduction to render slots and how you can use them. The post A Practical Use-Case of Render Functions in Vue appeared first on Simple Thread.


A Software Developer Worst Nightmare (Double Posti ...
Category: .Net 7

How to Prevent a Software Developer Worst Nightmare, Double Posting Back to the Server.</ ...


Views: 2 Likes: 41
Software Best Practices Learned by Experience
Category: System Design

[Updated] It is considered good practice to cache your data in memory, either o ...


Views: 0 Likes: 38
7 Best Power Automate Examples to Boost Productivity (2023)
7 Best Power Automate Examples to Boost Productivi ...

This post may contain paid links to my personal recommendations that help to support the site! In a world dominated by automation, it is no surprise that businesses seek ways to increase efficiency and productivity. With the help of Power Automate, it’s possible to streamline many manual processes that help save time and money. It’s, therefore, much more crucial to understand what Power Automate can do to help achieve your business goals. In this blog post, we will explore nine of the best Power Automate examples in 2023 – showing how each example can benefit users and who should use them. Read on to learn more! What Are Some Microsoft Power Automate Examples? Here are 7 Power Automate examples and use cases 1. Automated Email Filing System First up, you can set up a Power Automate workflow to work as an automated email filing system! This means that you can automatically file incoming emails into designated folders based on the sender, subject, or keywords in the body of the message. How Does It Benefit Users? This automated email filing system can be a huge time saver for users. Removing the need to file emails manually allows them to focus on other tasks and reduces the risk of misplaced messages. Who Should Use It? Any individual or business that receives large amounts of emails. 2. Social Media Posting You can also use Power Automate to create simple automation to speed up your social media postings. You can create a Power Automate flow that automatically posts content from an RSS feed or SharePoint list to your social media channels. I recommend configuring the flow to send push notifications or emails when new content is available. How Does It Benefit Users? This automation allows users to quickly post content to their social media channels without logging in manually. This also helps to automate the posting of social media outside of office hours, which can be difficult to post manually. Who Should Use It? Marketing teams looking for an easy way to schedule and manage their social media postings. 3. Expense Reporting One common way to streamline your expense reporting process is with Power Automate. To achieve this, create a workflow that automatically extracts data from receipts, sends it for approval, and submits the report to your finance system. How Does It Benefit Users? This would automate repetitive tasks and removes the need to manually enter data into reports, reducing errors. It also helps speed up business processes like the approval process, allowing users to easily submit expenses for approval in a timely manner. Who Should Use It? Businesses with teams needing to submit expense reports regularly should consider using Power Automate for an automated approval process. 4. Automated Meeting Scheduling You can also use Power Automate to build a convenient workflow for automated meeting scheduling. This would allow users to easily request and book timeslots in their own calendars, eliminating the need for back-and-forth emails to schedule meetings. How Does It Benefit Users? Automated meeting scheduling saves the time and frustration that goes into manually booking meetings. It also reduces the risk of double-booking or forgetting to book a meeting, as all these tasks can be done automatically. Who Should Use It? Administrative teams who regularly book and arrange meetings. Almost all employees can make use of these automated workflows to coordinate meetings among themselves. 5. 
Form Processing One common way I see business use automation is by eliminating repetitive manual tasks during form processing. For example, you can create a flow that triggers when a user submits a form, then sends the data to another system (such as a CRM or database), or sends an email notification with details from the form. How Does It Benefit Users? This automation saves users time and effort by eliminating the need to manually enter data into other systems or trigger notifications. It also helps to ensure the accuracy of data entry, as all the steps are automated. Who Should Use It? Businesses that regularly receive forms from customers or employees should consider using Power Automate for form processing. 6. Onboarding New Hires Human resources can also be supported through the use of Microsoft Power Automate. Using a workflow, you can easily automate the onboarding process for new hires. This could include tasks such as creating user accounts and assigning access rights to new employees. This could be especially useful if your business uses the Microsoft apps like Microsoft Office, Microsoft Teams, and Microsoft Power BI. How Does It Benefit Users? This automation streamlines the onboarding process for both HR teams and new hires. By automating these tasks, HR teams can easily handle and track the onboarding process for new hires. Who Should Use It? HR teams looking to streamline the onboarding process should consider using Power Automate. 7. Customer Service Ticket Management Businesses can also create automated customer service ticket management systems. This could involve setting up an automated workflow that triggers when a new customer request is submitted and moves the ticket through various stages. This can help with assigning a ticket to an appropriate staff member for resolution. How Does It Benefit Users? This automation reduces the need for manual processing of customer service requests, saving time and resources. It also helps to ensure that requests are handled quickly and efficiently by assigning them to the right staff member. Who Should Use It? Customer service teams looking to streamline their processes should consider using Power Automate for ticket management. This would be especially beneficial for businesses with a large number of customers. 8. Data Integration Between Systems Power Automate can also be used for integrating data between different systems. This could be used to sync data between different software applications or automate the flow of data from one system to another. For example, you can use Power Automate to create an automated flow to sync customer information stored in a CRM system with an inventory management system. If there’s any error in the data, a Power BI alert can be sent out to inform the database administrator or data engineer to verify the data discrepancy. How Does It Benefit Users? This type of automation eliminates the need for manual data entry. It also helps to ensure the accuracy of data, as all the integration is automated. Who Should Use It? Businesses that use multiple software applications and need a way to sync their data should consider using Power Automate for data integration. This could be especially useful for businesses with large amounts of customer or inventory data who intend to use it for business analytics or data science. What is Power Automate? Microsoft Power Automate (formerly Microsoft Flow) is a cloud-based service that helps you to automate workflows between apps and services to streamline processes. 
It can be used in a variety of scenarios, ranging from simple automation like notifications and file processing to more complex ones such as data synchronization, customer service ticket management, and automated meeting scheduling. With Power Automate, you can quickly create powerful and flexible automation that can save you time, money, and effort. Power Automate is easy to set up and requires little or no coding skills, and can be used by anyone looking to streamline their existing processes. It works across a range of services and apps, such as Microsoft Office 365, Dynamics 365, Azure, and SharePoint. Related Questions Here are some additional questions you might find useful For which scenarios you can use Power Automate? Power Automate can be used for a variety of scenarios, ranging from simple automation like notifications and file processing to more complex ones such as data synchronization, customer service ticket management, and automated meeting scheduling. What platforms are supported by Power Automate? Power Automate works across a range of services and apps, such as Microsoft Office 365, Dynamics 365, Azure, and SharePoint. How do I create a Power Automate workflow? Creating a workflow in Power Automate is straightforward. All you have to do is select the trigger (e.g., an event or data) to start your automated workflow and add the steps you want to execute. To create more complex flows, you can customize the workflow by adding conditions and expressions. From there, you can publish the flow or save it as a draft for future use. What is the most common use of Power Automate? The most common use of Power Automate is data integration between systems. This can be used to sync data between different software applications or automate the flow of data from one system to another. This is significantly faster and more efficient than manual data entry. It also helps to ensure the accuracy of data, as all the integration is automated. What are some best practices for using Power Automate? 1. Get familiar with the building blocks Before diving into creating a workflow, it’s important to familiarize yourself with the different building blocks available in Power Automate. 2. Test out your flows It is always a good idea to test out your flow before activating it. This helps to ensure that everything works as expected and that there are no errors or unexpected outcomes. 3. Utilize the pre-built templates Microsoft provides a library of pre-built templates that can be used as starting points for your workflows. These are easy to use and offer a great way to get up and running quickly. 4. Take advantage of the connectors available Power Automate offers a variety of connectors from different services that you can utilize in your workflows. This allows you to easily integrate different systems, such as Office 365 and Dynamics 365, with your Power Automate flows. 5. Monitor and optimize the performance of your flows Once you have created and set up a workflow in Power Automate, it’s important to monitor its performance to ensure that it is running smoothly. Final Thoughts Power Automate can be a powerful tool to streamline and automate processes. With its easy-to-use, no-code approach, it’s accessible to users of all levels and can be used in a variety of scenarios and examples! I hope this article has helped you get some idea of how Power Automate can help improve your business process flows. The post 7 Best Power Automate Examples to Boost Productivity (2023) appeared first on Any Instructor.


Software Engineer
Software Engineer

Job Opportunity 2x Mobile App Developers (Android/iOS) – POS & Payment Systems Location Lusaka, Zambia Company Sampay Limited. Industry Fintech / Payment Systems Position Type Full-Time Start Date 2nd July 2025 ________________________________________ About Us Samafricaonline Zambia Ltd. is a designated Payment System Business, powering Sampay, an innovative digital payment gateway serving merchants, banks, and consumers. We aim to transform how Africa transacts through mobile, POS, and secure digital solutions. We are looking for a highly skilled Mobile App Developer with experience in POS integration and mobile payments. The candidate should be capable of building production-grade applications across Android and/or iOS platforms. ________________________________________ Key Responsibilities • Develop and maintain Android (Kotlin/Java) or iOS (Swift) mobile applications focusing on POS hardware integrations. • Integrate payment gateways and contactless payments. • Connect to peripheral hardware like Bluetooth/NFC devices, thermal printers, and QR code generators. • Implement secure transaction flows, ensuring compliance with PCI-DSS and data encryption best practices. • Build and manage offline transaction modes with robust sync logic. • Collaborate with backend and DevOps teams to ensure smooth deployment and API integrations. • Maintain code quality and performance standards across platforms. ________________________________________ Must-Have Skills & Experience 1. Mobile Development (Android/iOS) • Android Kotlin/Java, Jetpack, Material Design, Google Pay SDK, NFC/Bluetooth device handling. • iOS Swift, Core Bluetooth, External Accessory Framework, Apple Pay. • Experience with REST APIs, local storage (Room, Core Data, SQLite), and cloud sync (Firebase/AWS). 2. POS & Payments Expertise • Hands-on experience with hardware integrations card readers (Verifone, Ingenico), printers (Epson, Star), QR codes generation. • Deep understanding of PCI-DSS, EMV, NFC, and AES/TLS encryption. • Strong grasp of offline-first architectures and receipt generation (ESC/POS, HTML-to-print). 3. Cross-Platform (Bonus) • Flutter or React Native experience for unified POS app development. 4. DevOps Knowledge (Must have) • Experience with CI/CD pipelines, app store deployments, and OTA updates. ________________________________________ Requirements • Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience). • Experience in mobile apps development with proven POS or fintech projects. • Strong portfolio or GitHub showcasing apps with secure payment integrations and hardware control. All CVs should be sent to info@samafricaonline.com


Why you should choose HomeAssistant as your Home A ...
Category: Research

Home automation is becoming increasingly popular as people look for ways to make their homes mor ...


Views: 0 Likes: 37
Building a Multi-Region AWS Environment With Terraform
Building a Multi-Region AWS Environment With Terra ...

Amazon Web Services (AWS) offers Service Level Agreements (SLAs) for each of their services, which typically promise very high availability, often upwards of 99.9% uptime. That’s pretty good, and probably more than enough assurance for the average customer. And yet, Amazon is not immune to natural disasters or catastrophic human errors. In February 2017, one such human error led to a widespread S3 outage in the us-east-1 region, which then led to the cascading failure of many other AWS services that depend on S3. While this outage only lasted several hours, it isn’t hard to imagine a different scenario (targeted attack or natural disaster) that could lead to a much longer time to recovery. Again, depending on your tolerance for total downtime, it may or may not be worth the time and expense to prepare for such an eventuality.

One of Simple Thread’s larger clients operates multiple mission-critical systems that can afford only minimal downtime and absolutely no data loss. In response to this, we recently decided to move one such AWS-hosted system to a multi-region, pilot light architecture, with a standby system at the ready in the case of a prolonged outage in the primary region. The upgrade took a while to build and test, and I won’t go into all of the details here. However, there were a few key design decisions, as well as implementation tips and gotchas, that might be useful to others looking to build out a similar failover system.

Modules Are Your Friend

At the time, the project existed only as a single-region system, deployed in a staging environment, and maintained in a Terraform IaC code repository. The goal of this initiative was to augment the codebase in order to build a new production system, which would include a complete mirror of existing resources in a separate AWS region. Thus, most of our Terraform resources just needed to be cloned to the new failover region. At first glance, it may seem easiest to simply copy the bulk of the Terraform code and change the resources’ region settings. But that’s a lot of code duplication and not something we wanted to sign up to maintain. Instead, we moved to using submodules that could each be called with their own AWS provider. For the resources that needed to be cloned, we created a reusable multi-region module. Then we configured AWS providers for each region and called the module twice, once with the primary provider, and again with the failover provider.

provider "aws" {
  region = var.primary_region
  alias  = "primary"
}

provider "aws" {
  region = var.failover_region
  alias  = "failover"
}

module "primary-region" {
  source = "./multi-region-module"
  …
  providers = {
    aws = aws.primary
  }
}

module "failover-region" {
  source = "./multi-region-module"
  …
  providers = {
    aws = aws.failover
  }
}

Global Services

Of course, some AWS services are global and not associated with a particular region. One popular misconception is that S3 is a global service. It is true that the bucket namespace is global; an S3 bucket’s name must be unique across all existing S3 buckets in all regions. However, when creating a new S3 bucket, you do specify a region, and that is where all of the bucket’s objects will be held. This has implications both for data availability in the event of an outage and for data retrieval latency, no different from any other regional service. So if your application has critical data in one or more S3 buckets, be sure to also clone these resources to the failover region.
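As a minimal sketch (the file name and the bucket_suffix variable here are hypothetical, not taken from the article’s configuration), a bucket declared inside the multi-region module is created once per region when the module is instantiated with each provider, which is usually all the cloning takes:

# multi-region-module/storage.tf (hypothetical file)
# Instantiated once per provider, so one bucket is created in each region.
resource "aws_s3_bucket" "uploads" {
  # Bucket names are globally unique, so include something region-specific.
  bucket = "${var.environment}-uploads-${var.bucket_suffix}"
}

Replicating the bucket contents between regions is a separate concern from simply creating a bucket in each region.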
In our case, our single-region module consisted primarily of IAM and CloudFront resources. This module created only the global resources, and so it was only called once, using the primary region’s provider.

module "single-region" {
  source = "./single-region-module"
  …
  providers = {
    aws = aws.primary
  }
}

Easy enough, right? But these global resources don’t exist in a vacuum. In most instances, they are created so that they can be referenced by other resources. And many of those resources were in our multi-region module. Resources in one module can’t be referenced directly from another module. So how do you use them? That answer lies in module input variables and output values.

Inter-Module Communication

This point is probably best explained with an example. Let’s say we want to create an aws_ecs_task_definition resource in our multi-region module. This task definition requires an execution_role_arn attribute. And that role is an IAM resource that exists in our single-region module.

single-region-module/ecs-iam.tf

resource "aws_iam_role" "ecs_execution_role" {
  assume_role_policy = <<EOF
{ Policy text… }
EOF
}

First we have to “export” the information we need from the single-region module. Note that we are only outputting the ARN attribute of the resource, since that’s all we need. If you need multiple attributes, you can also output the entire resource and reference individual attributes in the downstream module.

single-region-module/outputs.tf

output "aws_iam_role_ecs_execution_role_arn" {
  value = aws_iam_role.ecs_execution_role.arn
}

Then in the root module, we need to “catch” this output value and pass it into the multi-region module.

main.tf (root module)

module "primary-region" {
  source = "./multi-region-module"
  # Value from single region module
  aws_iam_role_ecs_execution_role_arn = module.single-region.aws_iam_role_ecs_execution_role_arn
  …
}

You can also pass that same value into the failover region module the same way. Next we need to “import” this value into the multi-region module.

multi-region-module/main.tf

# Input from single region module
variable "aws_iam_role_ecs_execution_role_arn" {}

Finally, we can reference this variable in the multi-region task definition.

multi-region-module/application.tf

resource "aws_ecs_task_definition" "app_server" {
  family             = "${var.environment}-app-server"
  execution_role_arn = var.aws_iam_role_ecs_execution_role_arn
  …
}

And it works the same way going the other direction. You can output values from both the primary and failover modules and pass both into the single-region module.

main.tf (root module)

module "single-region" {
  source = "./single-region-module"
  # Value from primary region module
  aws_s3_bucket_uploads_primary  = module.primary-region.aws_s3_bucket_uploads
  # Value from failover region module
  aws_s3_bucket_uploads_failover = module.failover-region.aws_s3_bucket_uploads
  …
}

Clearly Terraform is not just applying your configuration one module at a time, seeing as how we can have data dependencies between modules in both directions. In my experience, Terraform does an excellent job of sorting out the order of dependencies and handling them seamlessly, but on occasion you may need to add a depends_on hint to give Terraform a helping nudge. Just keep an eye out for anything that is going to cause a blatant cyclic dependency issue.
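As an illustration only (the route53 zone below is hypothetical and not part of the article’s configuration), Terraform 0.13 and later also let you attach depends_on to a whole module block, which can be the simplest way to give that nudge when everything in a module should wait on a root-level prerequisite:

module "failover-region" {
  source = "./multi-region-module"
  …
  providers = {
    aws = aws.failover
  }
  # Hypothetical hint: create nothing in the failover region until this
  # shared, root-level resource exists.
  depends_on = [aws_route53_zone.shared]
}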
Data Replication

There was one area where these inter-module dependencies became a little harder to overcome, and that was in the realm of data replication. Our application holds data in three places: an Aurora Postgres RDS cluster, an Elasticache replication group, and a few S3 buckets. Each of these resources needed to be cloned to the failover region, so it seemed that the resources belonged in the multi-region module. But they also needed to replicate to one another (primary region resource replicating data to its failover region counterpart). Terraform seemed to have issues when dealing with these tightly coupled resources that were generated from the same module code but with different providers. So instead, we moved these resources and their replication strategies into the root module, configuring each with its own region-specific provider.

For the Postgres database, since we were already configured for an Aurora cluster, it was a natural fit to use Aurora Global Database for replication. The most important pieces of this configuration are shown below:

The aws_rds_global_cluster doesn’t have the source_db_cluster_identifier specified.
The primary aws_rds_cluster has its global_cluster_identifier pointed at the global cluster ID.
The failover aws_rds_cluster also has its global_cluster_identifier pointed at the global cluster ID.
The failover aws_rds_cluster has its replication_source_identifier pointed at the primary cluster.
Finally, the failover aws_rds_cluster depends on the primary cluster instance.

## Global Database
resource "aws_rds_global_cluster" "api_db_global" {
  provider = aws.primary
  …
}

## Primary Cluster
resource "aws_rds_cluster" "api_db" {
  provider                  = aws.primary
  global_cluster_identifier = aws_rds_global_cluster.api_db_global.id
  …
}

resource "aws_rds_cluster_instance" "api_db" {
  provider = aws.primary
  …
}

## Failover Cluster
resource "aws_rds_cluster" "api_db_failover" {
  provider                      = aws.failover
  global_cluster_identifier     = aws_rds_global_cluster.api_db_global.id
  replication_source_identifier = aws_rds_cluster.api_db.arn
  …
  depends_on = [
    aws_rds_cluster_instance.api_db
  ]
}

resource "aws_rds_cluster_instance" "api_db_failover" {
  provider = aws.failover
  …
}

Keep in mind that this configuration was designed to build a new production system from scratch, with no existing database to begin with. If instead you wish to update a deployed system with an existing database cluster, or create the primary database from an existing snapshot, the configuration would be slightly different, as shown below:

The aws_rds_global_cluster would have its source_db_cluster_identifier pointed at the primary cluster ID.
The primary aws_rds_cluster wouldn’t have the global_cluster_identifier specified.
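A rough sketch of that variant, reusing the resource names from the example above, might look like the following. Treat it as an outline to check against the AWS provider documentation rather than a drop-in configuration:

## Global Database created from an existing primary cluster (sketch)
resource "aws_rds_global_cluster" "api_db_global" {
  provider = aws.primary
  # Point the global cluster at the existing primary cluster; depending on
  # your provider version this may need to be the cluster ARN rather than the ID.
  source_db_cluster_identifier = aws_rds_cluster.api_db.id
  …
}

## Primary Cluster (no global_cluster_identifier in this variant)
resource "aws_rds_cluster" "api_db" {
  provider = aws.primary
  …
}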
For our Elasticache replication, we went with another AWS end-to-end cross-region replication solution: Global Datastore. This too was a reasonably straightforward Terraform adjustment. We added a new aws_elasticache_global_replication_group resource with its primary_replication_group_id pointed at the primary replication group, and then a failover aws_elasticache_replication_group with its global_replication_group_id pointed to the new global replication group ID.

## Primary Replication Group
resource "aws_elasticache_replication_group" "redis" {
  provider = aws.primary
  …
}

## Global Replication Group
resource "aws_elasticache_global_replication_group" "redis_global" {
  provider                           = aws.primary
  global_replication_group_id_suffix = "${var.environment}-redis-global-datastore"
  primary_replication_group_id       = aws_elasticache_replication_group.redis.id
}

## Failover Replication Group
resource "aws_elasticache_replication_group" "redis_failover" {
  provider                    = aws.failover
  global_replication_group_id = aws_elasticache_global_replication_group.redis_global.global_replication_group_id
  …
}

One thing to note: both the Global Database and Global Datastore services only support a subset of the RDS and Elasticache instance types. So if you’re currently using rather small instances, you may need to step up to a larger machine type in order to take advantage of these replication services. You can read more about the supported instance types here and here.

Onward to Resilience

And that about sums it up. With these changes, you should have a truly fault tolerant system in place. So you can reassure your client that the next time an AWS employee fat-fingers a console command, or one of their data centers finds itself underneath eight feet of flood waters, the application’s high availability, mission-critical data, and treasure trove of cat videos will be able to weather the storm. We’re always interested in hearing how other people are building security and resilience into their systems, so by all means, let us know what you think!

The post Building a Multi-Region AWS Environment With Terraform appeared first on Simple Thread.


113+ Unique Xbox Gamertag Ideas (Funny & Cool Names!)
113+ Unique Xbox Gamertag Ideas (Funny & Cool Name ...

This post may contain paid links to my personal recommendations that help to support the site! Are you looking for the perfect Xbox Gamertag that will make your friends jealous? Whether you’re a new gamer just starting their journey into the world of Xbox or an experienced player wanting to switch up your online persona, we’ve got you covered! In this blog post, I’ll share 113+ creative and unique Xbox Gamertag ideas to help you! From funny and unique to edgy and cool, we’ve got a range of gamertag ideas so that you can find the one that fits your personality. Let’s jump right in with these creative Xbox names and Gamertags! What Are The Best Gamertags and Xbox Names? Funny Xbox Gamertag Ideas When looking for a good Gamertag, humor is always the way to go in making others smile! Sometimes, even a funny name can be a taunt to an enemy that’s being too sweaty and tryhard too. Here are some funny Gamertag ideas you can try Bananapants FartingFury SirFartsALot NinjaPotato TacoDestroyer UnicornPajamas ZombieUnicorn Baconator3000 ChickenNuggetKing/Queen CaptainUnderpants SnackAttack Baconator PizzaPirate SugarRush CerealKiller CaptainCrunch TheFastAndTheCurious NerfHerder_69 PotatOS (From Portal) PixelPal BooRadley MrSizzlePants TheHoneyBadger DoctorHoo SirLoin SpaceNerd TheNappingDragon FluffyBunny GhostPepper KingOfTheLions TurtlePuncher Poindexter CaptainAwesome ChucklesTheClown GummyBearJedi MightyMouse TheBluePanda SassyPants ProfessorChaos TheGreatGatsby69 Cool Xbox Gamertags Here are some simple yet cool gamertags you might like GamerBoy/GamerGirl PixelWarrior GameMaster DigitalKnight ConsoleChamp ControllerCrusher VirtualViking KeyboardKombatant ScreenSavior JoystickJockey Cybernaut CodeConqueror DigitalDominator MouseMarauder TheGamingGuru ControllerCommander GameNinja VirtualVillain ConsoleCrusader KeyboardKing PixelPirate TurboTech GameChanger DigitalDemigod MouseMaster CyberSamurai CodeWarrior JoystickJedi TheGamingGod/Goddess VirtualVindicator Unique Gamertag Ideas Looking for a new Gamertag that’s unique to you? Here is a list of some unique Gamertag ideas Venomous Viper Maverick Hunter Dark Knight Ice Queen Shadow Assassin Electric Eel Cyber Ninja Firestarter Steel Phoenix Toxic Titan Mystic Mage Thunderbolt Elemental Enigma Diamondback Phantom Fury Crimson Crusader Desert Storm Knight Rider Neon Night Dragon Slayer Blackout Bandit Angel of Death Solar Flare Gravity Guardian Lunar Legionnaire Chaos Conqueror Sabretooth Midnight Marauder Ghostly Gladiator Titanium Terror Gamertag Ideas for Boys We all want to be that player with the cool Gamertag that everyone talks about. I’ve put together some cool Xbox names you can use for some inspiration! Here they are BlazeFire ShadowAssassin ThunderBolt IronFist SavageWarrior NitroKnight DarkViper CyberPhantom RazorSharp IceDragon GhostTiger ChaosBringer TitanForce BurningPhoenix FrostByte SteelCrusher VenomousViper NeonRider CrimsonFury ThunderClash ShadowReaper BlazeKnight MysticWizard DemonSoul ThunderCharger ToxicAvenger BlazeRunner HellFire DragonSlayer IronWarrior Gamertag Ideas for Girls Gaming is for everyone, including the girl gamers who play on Xbox too! 
Here are some aesthetic Xbox gamertag ideas for girls LunaLights CherryBlossom StardustSprinkles AmberEyes RoseGold OceanElixir WildflowerWish SugarPlum MysticMuse GoldenGoddess ButterflyWhisper IvoryIntrigue CrimsonSunset BerryBlush EnchantedEcho IvoryRaindrops MysticMelody LilacLullaby EveningEmber SunflowerSiren VioletVixen RainbowRhapsody ElectricEchoes JadeJungle DesertDoll CosmicCandy NeonNebula PinkPetal SilverStorm TropicTease Minecraft Gamertag Ideas Having a unique and funny Gamertag is essential for Minecraft players, since it brings out your identity as a gamer. Some Gamertag ideas you can use for inspiration in the Minecraft gaming world include BlockMaster CreeperCrusher DiamondDigger EnderChampion FortressFighter GoldGrabber HelmetHero IronIsland JukeboxJunkie KnightKiller LootLord MinecartMaestro NetherNavigator ObsidianOverlord PickaxePro QuestQuencher RedstoneRanger SkyblockSurvivor TNTTerrorist UnderwaterAdventurer VillagerVindicator WitherWarrior X-RayExploration YellowYak ZombieZapper AquaAssassin BlazeBattler CraftCraze DragonDestroyer EnchantingExpert If any of these happen to be taken, don’t worry. Add a few numbers to the end, and you’ll have your own unique Gamertag. You can use your favorite number! I also recommend adding a few repeated letters at the end of the Gamertag if you intend to keep the meaning and uniqueness of your name. Call of Duty Gamertag Ideas For those who love Call of Duty and would like a cool gamertag to go along with your favorite game, I didn’t leave you out! Here are some Call of Duty Gamertags WarlordBane SniperWolfMate GhostlySpy Deathbringer SpartanLaser DarkKnighter Firestorming Eclipse Phoenix SilentKiller Thunderbolt Ghosthunter Vindicator Nightshade OutlawLight CommandoNando Killswitch TerminatingLine VenomJet BlackoutNex PaladinKing WraithSith ShadowAssassin LordofWar MaverickDome NemesisRun Thunderer_X Bladestormer SilentAssassin ApexPredator Related Questions What is an Xbox Gamertag? An Xbox Gamertag is a username you create to represent yourself on Xbox Live. It is the public name that other players in the Xbox Live community will see when they play with or against you, and it’s displayed on your profile card. You can also use it to find friends and start conversations in Xbox Live chat rooms. How do I make a catchy Gamertag? A catchy Gamertag is one that stands out and grabs people’s attention. Use words that have double meanings, rhymes, alliterations, puns, and other wordplay tricks to stand out from the crowd. You can also combine two words or mix up a word’s spelling to make it more unique. Additionally, you can create a unique Gamertag by adding numbers to the end of your name or by using a variation of capitalization. Finally, you can add special characters and symbols to further personalize your Gamertag. What is a OG gamertag? An OG Gamertag is an original username that was assigned to Xbox Live users without having attached numbers behind it like “#1234”. OG Gamertags can be highly sought after and are often considered more valuable in the gaming world than usernames with numbers attached to them. These Gamertags are also unique and only limited to one word. Having an OG Gamertag can make you appear more experienced as a gamer because it shows your commitment and loyalty to the gaming community. What are some good short gamertags? 
If you’re looking for a good Gamertag, some good ideas are: Furor_X Blitzed Riptide Rebel_Y Flux_X EchoZap ShadowKitty RainDrift NeoRex WingFX TwistedFate VaporTech Sly_Fox LunarSkye ToxicWaves StarDusty

Are Xbox usernames unique?
No, Xbox usernames are not unique. However, each username or gamer tag comes with a unique suffix that makes the overall username unique even though the text might be similar. If someone tries to use the same gamer tag as yours, they will be assigned a different suffix number from you. This suffix is what makes your username distinct from others.

Can you change your Xbox Gamertag?
Yes, you can change your Xbox Gamertag. You’ll need to go to the Edit Profile page in your account, select the “Gamertag” option, and then “Enter New Gamertag”. This will allow you to enter a new Gamertag that is available. You’ll be able to change it once for free, but subsequent changes will incur a $9.99 fee.

Wrapping Up
We hope this list of creative Gamertag ideas inspires you to create your own unique and memorable name. Remember to think outside the box, add numbers or special characters for extra flair, and make sure it’s something that fits with who you are! The post 113+ Unique Xbox Gamertag Ideas (Funny & Cool Names!) appeared first on Any Instructor.


[EF Core] How to Enable Sensitive Data Logging and ...
Category: Entity Framework

Question How do you enable sensitive data and detailed error <a class="text-decoration-none" hre ...


Views: 0 Likes: 33
113+ Discord Server Names (Funny & Aesthetic Ideas!)
113+ Discord Server Names (Funny & Aesthetic Ideas ...

This post may contain paid links to my personal recommendations that help to support the site! Are you a Discord user struggling to find a creative server name to give your group an edge? If so, you’ve come to the right place! In this blog post, I’ll be sharing all the best Discord server name ideas with you! With our 113+ unique and funny ideas for Discord server names, you’ll be sure to find a name to suit your crowd and create an unforgettable virtual hangout spot. From aesthetic masterpieces to humorous puns, there’s something here for everyone who wants to spice up their Discord Server Names! Read on for some name inspiration for your Discord new server. What Are The Best Discord Server Names? Having an iconic and good Discord server name will really help you stand out. Here are some ideas you can use for inspiration Funny Discord Server Names Funny server names are the way to go to really make your server more unique and iconic. Here are some funny and unexpected Discord server names you should try Server_not_valid Anti-Discord Discord Group Simps Unite Drama Queens No Name Server Shameless Bunch My Granny’s House Party of 5 No Life Gang Sarcastic Seven Junk Jokesters Talking Trash Tribe The Squid Squad Netflix and game Servants of The Royal Family Vibing Booth You can even choose your server name based on your favorite food or drink. These make for a funny yet meaningful Discord server name idea. The Caffeine Addicts Club The Potato Cult The Llama Farm The Snack Attack Squad The Pajama Party People The Cheeseburger Collective The Crazy Cat Ladies (and Gents) The Donut Hole The Bacon Brigade The Unicorn Squad The Funky Chicken Coop The Pizza Party Posse The Waffle House The Avocado Appreciation Society The Kitten Kaboodle The Sushi Samurai The Meme Machine The Popcorn Palace The Spicy Meatball Society The Ice Cream Empire The Fried Chicken Fanatics The Burrito Bandits The Candy Cartel The S’mores Squad The Nacho Nation The Peanut Butter Posse The Grilled Cheese Gang The Hot Sauce House The Chili Connoisseurs The Taco Titans The French Fry Fan Club The Mac and Cheese Mafia The Smoothie Squad The Breakfast Brigade The Popsicle Posse The Ramen Realm The Steakhouse Society The Tater Tot Tribe The Sausage Squad The Fried Rice Fraternity The Biggest Bread Empire Safe Space for Munchies Aesthetic Discord Server Names Not all Discord servers are made just for gaming or hangout for guys. Some servers can have an aesthetic theme to them too. 
Here are 40 aesthetic names you can try Pastel Paradise Rose Garden Moonlit Meadow Velvet Vault Celestial Haven Mystic Mansion Sunflower Studio Cherry Blossom Cafe Ocean Oasis Night Sky Lounge Marble Mansion Dreamy Den Enchanted Forest Crystal Cove Neon Nightscape Silky Serenade Vintage Vibes Cosmic Retreat Rustic Refuge Floral Foyer Golden Glades Lush Landscape Aurora Alleyway Butterfly Bower Heavenly Hideaway Whimsical Wonder Lavender Lounge Secret Sanctuary Elegant Estate Glimmering Glade Eternal Flames Urban Utopia Colorful Corner Moonstone Manor Sapphire Skyline Opulent Oasis Artistic Avenue Tranquil Terrace Cozy Castle Pearl Palace Soft Sunset Cool Discord Server Names For those who prefer a more neutral but cool name for your Discord server, here are some you can try The Chill Zone The Gaming Hub The Creative Corner The Social Circle The Book Nook The Movie Theatre The Music Room The Fitness Fanatics The Foodies The Travel Tribe The Art Gallery The Tech Talk The Beauty Bar The Writing Desk The Photography Club The Fashion Frenzy The History Buffs The Science Squad The Psychology Lounge The Philosophy Forum The Debate Den The Language Lab The Career Clinic The Entrepreneur’s Edge The Political Pulse The Spiritual Sanctuary The Pet Palace The Plant Parenthood The Environmentalists The Charity Corner The Sports Stadium The Comedy Club The Games Galore The Horror House The Romance Retreat The Mystery Mansion The Fantasy Fortress The Adventure Island The Action Arena The Crime Scene The Hangout House The Chill Chat The Lounge Lizards The Social Sphere The Friends’ Fortress The Connection Corner The Squad Sanctuary The Community Cave The Tribe’s Tavern The Fellowship Forum The Clubhouse The Meeting Place The Gathering Grounds The Assembly Area The Congregation Corner The Coven’s Crib The Brotherhood Banter The Sisterhood Sanctuary The Kin’s Kave The Family Forum The Unity Universe The Alliance Arena The Coalition Corner The Team’s Territory The Collaboration Cove The Partnership Platform The Comrades’ Club The Allies’ Abode The Fellowship Forum The Fellowship Fortress The Social Sanctuary The Community Club The Tribe’s Territory The Guild Gathering The Association Atrium The Congregation’s Community The Meeting Point The Squad’s Shelter The Companions’ Club The Unity Utopia Discord Server Names for Gamers I understand that many of you are on Discord to play video games together with your server members. Here are some good Discord server name ideas to call your favorite gaming gang Game On! 
+1 Up Chicken Dinner Club Chicken Dinner Enjoyers The Gaming Hub Sigma Warriors The Gamers’ Den Gamers Unite Ace Mavericks Game Changers Level Up Sweaty Bois Playmakers The Gaming Lounge Fraggerinos The Ultimate Gamers’ Hangout Gamers Central The Virtual Playground The Gaming Arena Pixelated Power The Gaming Society Pepeclub Friends The Game Room Virtual Victory Game Heroes The Gamers’ Guild Retro Gamers The Gaming Network The Gaming Frontier Elite Gamers The Gaming Empire Gamers Assemble The Gaming Collective The Gamer’s Cave The Gaming Universe The Gaming Experience The Gaming Paradise The Gaming Oasis The Gaming Insiders Hardcore Gamers The Gaming Legends The Gaming Syndicate The Gaming Expedition The Gaming Squad The Gaming Brotherhood Discord Server Names for Students If you’re a student and planning to find some ideas for your Discord study group server, here are some to try Brainy Bunch Knowledge Knights Study Squad Academic Avengers The Learning League Subject Savants Brainstormers Mind Melders Info Innovators The A-Team Scholarly Strivers Wise Wizards Master Minds Bright Sparks Education Enthusiasts The Info Hunters Subject Specialists The Study Hive Science Stars Language Luminaries Math Mavericks The Linguistics League History Heroes Grammar Gurus The Writing Warriors Literature Legends The Psychology Posse Social Studies Squad The Math Magicians The Creative Crew The Research Rangers The Analysis Army The Concepts Collective The Theory Troop The Exploration Experts The Investigation Insurgents The Learning Legion The Intellectual Icons The Thought Titans The Insightful Inquirers Related Questions What is the best name for a Discord server? The best name for a Discord server should have either a funny or cool theme to it. This would make it a memorable name for most uses. Ideally, Discord server names should be as unique as possible to help set it apart from other servers. What is the purpose of a Discord server? The primary purpose of a Discord server is to facilitate communication between members of a specific group or community. Discord servers provide users with a platform to chat and share information and voice and video chat capabilities for gaming and online social activities like giveaways, streams, and text discussion. They also allow users to create custom roles, channels, emotes, and bots. How do I make an original Discord server name? An original Discord server name should be creative and unique. It should also reflect the primary purpose of the server, such as gaming, studying, or socializing. Consider adding a word or phrase that has some relevance to your subject matter, like ‘gg’ for a gaming-related server or ‘geeks’ for an educational one. Additionally, consider using alliteration or puns to make the server name more interesting and eye-catching. Lastly, try to keep the name as short and concise as possible while still conveying your message. What should I name a server? You should name a server something that reflects its purpose or the people who are part of it. Consider including funny elements that are unique to the server members you’d like to include. Can you use the same name for Discord Servers? Yes, Discord servers can use the same name as existing ones. Discord does not enforce their server names to be unique. However, it is not recommended to use the same name for multiple Discord servers, as this can be confusing to users. It is best to give each server a unique name so that it stands out from others. 
Additionally, try to include a keyword or phrase in the server name that relates to its purpose. Final Thoughts That’s it for my list of Discord server name ideas! Hopefully, this gave you some inspiration for naming your own Discord servers. The post 113+ Discord Server Names (Funny & Aesthetic Ideas!) appeared first on Any Instructor.


Technical Project Manager
Category: Jobs

"IMMEDIATE REQUIREMENT" Please share the suitableprofile to&nbsp;<a href="mailtoelly.jack ...


Views: 0 Likes: 29
IT Java Application Supervisor
Category: Technology

Title IT Java Application Supervisor Location Clevela ...


Views: 0 Likes: 40
Self Documenting Code
Self Documenting Code

Like a lot of devs I went through a self documenting code phase. What eventually changed my mind was two times that we decided we needed to document our code. First I was on a team that added a dev fresh out of college. Our new dev needed a lot of help and the whole team was pitching in to show him the ropes. Handing off code I had written was a breeze. We also handed off code from our manager who had written some of the earliest web code in the company after reading one book and had been riding the wave of demand ever since. That code was rough but we all paid our dues on it. The whole team came together, realized we had a documentation problem and committed to fixing it. Naturally the features and changes kept rolling in and we wrote almost no documentation. The second was at a startup that added a director of engineering. This director came in and started making big changes. We stopped everything and wrote documentation for a week. This was really offensive to me because I had been writing self documenting code. I put in the time and carefully factored for clarity. I had my thesaurus at the ready constantly looking for just the right name for every concept. I knew where everything was. Writing long-form in a new wiki felt futile. The wiki syntax was weird and unfamiliar. I didn’t know what to write. Everything I wanted to say was obvious or already clearly stated in the code. The wiki sat idle until it was forgotten entirely with maybe five pages worth of documentation. It took me an embarrassingly long time to really understand what was going wrong. I had heard the phrase self documenting code somewhere and assigned my own meaning to it. Self documenting for me meant that I only had to write code. It was my get out of writing sentences free card. In writing only code with minimal comments, no matter how much effort I put into clarifying, that code only communicated what was being done and never why it needed to be done in the first place.   If code wasn’t clear enough it needed to be rewritten to clarify. This works great when time allows. Changing the code proves you fully understand it and you leave behind better code for the next person. When time doesn’t allow, you revert your changes. Repeat a couple of times and you understand well enough to make really targeted changes, but not those sweeping changes that will help the next person. This is one way tacit knowledge forms. You’ve done the work to learn something, built up a mental model and left behind nothing to jog your memory. The next person has to build up that mental model same as you did. The two documentation binges I experienced were drastic overcorrections for the problem of tacit knowledge. Deciding to halt all work and write documentation can lead to a week of minimal progress and a sense of guilt. We don’t necessarily want more documentation. We want less tacit knowledge. We want to remove barriers to writing and sharing information. Iterate! Documentation doesn’t need to be comprehensive. You can Leave placeholders for things you don’t understand. Annotate sections that might need further clarification. Add TODOs to indicate areas that need attention. Documentation doesn’t need to be perfect all the time. You can Speculate, but always make it clear that you’re speculating. Embrace the possibility of being wrong, as it can prompt corrections through Cunningham’s Law. Location Is Everything Self documenting code worked for me because it was always right there where I was working. 
The wiki didn’t get updated; it was too far away. Drawing inspiration from manufacturing’s 5S method, which emphasizes the importance of organizing tools and materials for efficiency, we can apply the same principle to documentation. Code comments are great for putting your thoughts near the code without breaking it. Putting a README.md or other markdown files in your code repo puts them in the middle of your development workflow and ensures they will be seen. Documentation that is close at hand is more often updated.

Know Your Audience

Better yet, admit that you don’t! Is the next person to read this going to have experience with this tech stack? This subject matter? This architectural pattern? Don’t try to predict the future. Learn from the past. Write what you needed ten minutes or two days ago. Copy, paste, or paraphrase that explanation you got from a coworker in a Slack DM.

Use Links

There’s no need to explain how a job queue works. Link out to another project’s documentation. Cite your sources. A well-placed link can save a lot of time searching for terms that have different meanings in different fields, acronyms, or vendors who name their products to guarantee you cannot find them on Google. You can finally give yourself permission to close those five tabs. When a coworker asks a question you can link to the docs you wrote. Keep in mind that your docs aren’t perfect. This is less “RTFM Noob!” and more “Oh, I wrote something last month, does that help?” Bonus points if this conversation happens in a shared channel that is searchable. Maybe the next person who needs it finds the docs by searching the channel. Each time you share a link you are answering a question, improving the discoverability of the docs, and subtly promoting the idea of writing documentation.

With a little luck, these ideas will prevent documentation from being a surprise writing assignment. With a little more luck, coworkers will see the value in your writing and write the documentation you will need in the future.

The post Self Documenting Code appeared first on Simple Thread.


what is OEM Pack in cpu
Category: Servers

OEM stands for Original Equipment Manufacturer, which refers to a company that produces hardware ...


Views: 0 Likes: 16
Three Tools to Systemize Your Discoveries
Three Tools to Systemize Your Discoveries

Discoveries are one of the reasons I was excited to become a UX designer. Whether building a new product, rethinking an existing product, or incorporating new features, discoveries are an exciting time of exploration and collaboration to uncover what needs to be built and why. Discoveries also come with challenges, as you often have to explore large amounts of information in a short amount of time and work with high levels of ambiguity. You also need to find ways to organize information, break down complexity, and identify gaps that require further exploration. Incorporating systematic thinking into the discovery process can help alleviate many of these challenges. When building systems, it’s helpful to incorporate tools that facilitate thinking systematically. Using tools like Obsidian, OOUX, and Notion can help you both stay organized in your research and make it easier for you to find and share information. This post explores each of these tools in more detail.

1. Obsidian for Research

Obsidian is a note-taking application where you’re able to organize and connect related information using links. It’s built around the Zettelkasten method, a German term for a system of organizing and linking notes.

Searchability
In applications like Google Docs, organizing a large amount of information often requires creating separate documents for different concepts or user interviews. This can make it challenging to search for specific terms or create connections between items. With Obsidian, you can take multiple notes within a single document, creating a workspace where you can easily navigate through various interviews or topics. This also allows for global search, which makes synthesizing much easier.

Interconnectedness
When taking notes, I often come across insights that are related to the topic at hand but are significant enough to deserve their own document, or that already have a related document where the note needs to be captured. It can be frustrating to have to stop and write down an insight in a separate place to ensure it isn’t lost. Obsidian addresses this by allowing you to create connections, known as bi-directional links, between different items. This helps you to establish relationships and easily navigate between related information, creating a better understanding of connections and insights.

Visualization
Obsidian also has a visualization tool that allows you to see the connections you’ve made visually. This helps you explore how concepts connect and visually grasp how often terms or concepts are used. By visualizing these connections, you can uncover key insights and make connections that might have been missed otherwise.

2. Object-Oriented UX for Synthesizing and Organizing

When completing research, the amount of information coming in can be overwhelming. It’s helpful to have a framework for organizing the information in a way that allows you to think about the product holistically and uncover gaps that need to be explored. This is where Object-Oriented UX (OOUX) comes into play. It’s a framework for synthesizing and organizing information that is useful for designers, developers, and end users. Traditionally, we organize what needs to be built around the actions or features that users need to take. However, this approach often leads to a linear way of building and organizing software, where we segment what we need to build into parts before we’ve thought about the whole.
Before designing, it’s important that we clearly understand the main parts of the system, and how they relate, so we can help users understand the relationships throughout the system as well. OOUX focuses on first exploring the objects, or main parts of the system, then defining the relationships between the objects, and then considering the actions that can be taken on each object. This shift in perspective allows for more interconnected thinking, which, in turn, helps users understand how concepts relate within the product. You can also use this approach to organize the information from research in a structured way, which helps to clarify what needs to be built and uncover gaps earlier in the process. OOUX can be used flexibly, but it is typically incorporated right in the middle of the Double-Diamond Process, after research and before wireframing. If you’d like to learn more, check out these OOUX Resources.

3. Notion for Requirements and Product Management

Notion is another useful note-taking tool and is often used to help people manage tasks. You can also create databases where you can use properties, formulas, and filters, and create different views of the information.

Documenting Requirements
When clients share more detailed information, it can be difficult to know how to organize everything to make sure it’s not lost in the mix. One especially helpful part of the OOUX process is creating an Object Map, which lists out all of the main pieces of functionality, with their attributes listed out as cards. This can help organize information surrounding requirements, values of attributes, details about the relationships between pieces of functionality, as well as information around calls to action. Notion is a great tool for organizing all of this information. Not only can you list all of the information and place details inside of cards, but you also have the flexibility to create different views (table, list, kanban, timeline, etc.), and you have full control over the filtering and sorting.

Project Management
Tools like Airtable, ClickUp, and Shortcut allow you to create tables and relationships, but many have constraints in their hierarchies (i.e., Milestones, Epics, and Stories). Constraints can be useful, but if you need more flexibility, Notion allows you to build your own systems to model your product design and development process and can replace similar tools.

Information Architecture Prototypes
We build prototypes to test flows, layouts, and visual design using tools such as Figma, but it can be challenging to quickly test the system-wide information architecture. Using related databases and building out pages with Notion’s structured UI, you can create information-architecture-based prototypes to test the foundational navigation and relationships. This ensures that you have all of the necessary pages, confirms that one can navigate through the relationships in the system, and helps to determine if you’re missing any key relationships that need to be represented.

The Value of Systematic Thinking

Discoveries become more enjoyable when we are equipped with tools to manage large amounts of information and have frameworks that help us break apart complexity. By embracing systematic thinking, we can stay grounded throughout the process, collaborate more effectively with stakeholders, and bring clarity to the end-users of our products. Hoping these tools prove useful on your upcoming explorations. Happy discovering!

The post Three Tools to Systemize Your Discoveries appeared first on Simple Thread.


How to Stop Wasting Time in Pointless Meetings: 5 Things to Improve Your Meetings
How to Stop Wasting Time in Pointless Meetings 5 ...

Have you ever left a meeting feeling like you just wasted an hour (or more) of your day? You’re not alone. Many people have experienced the frustration of attending meetings that are disorganized, unproductive, and seemingly pointless. That’s where the Level 10 meeting agenda comes in. The Level 10 is part of the larger Entrepreneurial Operating System® (EOS). EOS is a comprehensive set of practical tools and concepts that have helped thousands of small to medium size organizations worldwide achieve their business goals – including Simple Thread! One of the most popular components of EOS is the Level 10 meeting, a weekly meeting that is designed to be highly efficient, productive, and engaging. So, how do you make a meeting efficient, productive, and engaging? Here are 5 things that work for us 1. Same Bat Time, Same Bat Channel First and foremost, the meeting should take place on the same day and time each week. The meeting follows a strict agenda, which includes several key items that are critical to its success. I will share more about these next. 2. Be Present An opening segue provides the opportunity to shift the team’s attention from the distractions of the latest Slack chat or email that needs a reply and bring the focus to the present. At the start of the meeting, I might ask everyone to share their “best personal and best professional highlight” of the previous week. This can help set a positive tone and encourage everyone to engage in the meeting. Another great meeting opener is the “rose, thorn, and bud” method, which is a design thinking tool that helps identify what’s working (rose), what’s not (thorn), and what can be improved (bud).   “If You Can’t Measure it, You Can’t Improve it” – Peter Drucker 3. You Gotta Track Something The meeting then moves on to review the key performance indicators (KPIs) or scorecard for the department. This provides a weekly check-in on the numbers that are leading indicators of success and drive conversation around areas of opportunity or concern. What you track may vary by department, for marketing, we look at website traffic, conversions, and inbound leads to name a few! 4. Have S.M.A.R.T, Realistic Quarterly Goals Next, the team discusses their quarterly goals and reports on whether they are on track or off track towards this goal. This helps ensure that everyone is aligned on the department’s priorities and progress towards achieving them. If someone is “off track”, it gets added to the agenda for discussion and for the group to find ways to support and help get the project moving in the right direction.   “If You Don’t Know Where You Are Going, You’ll End Up Someplace Else” – Yogi Berra 5. Identify. Discuss. Solve. The meeting then moves on to the most crucial part of the Level 10 meeting tackling issues as a team. This is when I will guide the team through the IDS process Identify, Discuss, and Solve. The team identifies the real issue, discusses it from all angles, and then settles on a solution and one or two action points to implement the solution. And Now, to Wrap Things Up Like a Present… As the meeting comes to a close, the team takes five minutes to wrap up. This includes recapping the to-do list, sharing information from the meeting with the rest of the organization, and giving the meeting a grade on a scale of 1 to 10. EOS emphasizes that the most important criterion for grading the meeting is how well the team followed the agenda. So there you have it! A recipe for a meeting that is productive, efficient, and engaging! 
The Level 10 meeting is a powerful tool for organizations looking to run efficient and productive meetings. By following a strict agenda and incorporating key components like KPIs, quarterly goals, and the IDS process, teams can stay aligned and make progress towards achieving their business objectives. Try it out and let us know what you think  – and say goodbye to wasted time and hello to more productive, engaging meetings! The post How to Stop Wasting Time in Pointless Meetings 5 Things to Improve Your Meetings appeared first on Simple Thread.


[Free Databases] Open Source Databases
Category: Databases

This article will talk about the free open source database that can allow scalability. This article ...


Views: 316 Likes: 78
Sr. Software Engineer
Category: Technology

As one of our engineers, you&rsquo;ll help guide key development and technology decisions in our ...


Views: 0 Likes: 51
Senior Software Engineer - Product
Category: Jobs

Senior Software Engineer &ndash; Product &nbsp; Do you thrive on ...


Views: 0 Likes: 34
Food for Software Developers
Category: Health

These notes are based on my own findings, they are not off ...


Views: 266 Likes: 86
Software Development
Category: Technology

Software Development<div sty ...


Views: 304 Likes: 99
How to Test Software Application to Meet Standards
Category: Computer Programming

&nbsp;When it comes to building secure a ...


Views: 0 Likes: 32
Software Development Architecture and Good Practic ...
Category: System Design

These notes are used to drill down into the most op ...


Views: 0 Likes: 33
Tracking Analytics in Vue with Matomo
Tracking Analytics in Vue with Matomo

Whenever website analytics are discussed, it is usually in the context of marketing: which pages are getting the most visits, which advertising campaigns are most successful, and so on. You don’t have to work in advertising to make use of analytics, especially when working with web applications. Analytics can be a vital tool in helping developers and designers diagnose bugs and track usage to determine if the app is solving the problems it’s supposed to be solving. Matomo is an analytics platform billing itself as a Google Analytics alternative that gives you 100% data ownership and the option to host on-premises. If your application has some additional privacy or security requirements, Matomo can be an excellent option for recording and aggregating user data. This article will be partly a guide to setting up Matomo tracking using the vue-matomo library, and partly a collection of tips and thoughts on making decisions about what should be tracked. After the setup and configuration, we will explore more specifics, including fine-tuning the automatic page visit tracking, using events to track user actions, and associating a user’s ID with their interactions.

The vue-matomo Library

The vue-matomo library is a small JavaScript package that integrates Matomo into Vue, using the Vue router to automatically track page views. It also allows for writing more idiomatic tracking code that fits neatly into the Vue ecosystem by wrapping the Matomo tracking code in a Vue plugin module. This guide assumes you have already set up a Matomo instance, either on-premise or using Matomo’s cloud option.

Installing vue-matomo

Vue-matomo can be installed in one of three ways.

Using npm:
npm install --save vue-matomo

Referencing the vue-matomo CDN:
<script src="https://unpkg.com/vue-matomo"></script>

Referencing locally downloaded files:
<!-- Include after Vue -->
<script src="vue-matomo/dist/vue-matomo.js"></script>

After installation vue-matomo can be configured like any other Vue plugin using the use function. This will look slightly different depending on your version of Vue, but all the available options remain the same.

Vue 3:
import { createApp } from 'vue';
import VueMatomo from 'vue-matomo';
import App from './App.vue';

createApp(App)
  .use(VueMatomo, {
    // Configuration Options
    host: '{YOUR_MATOMO_INSTANCE_URL}',
    siteId: {YOUR_SITE_ID},
    router,
  })
  .mount('#app');

Vue 2:
import App from './App.vue';
import VueMatomo from 'vue-matomo';

Vue.use(VueMatomo, {
  // Configuration Options
  host: '{YOUR_MATOMO_INSTANCE_URL}',
  siteId: {YOUR_SITE_ID},
  router,
});

new Vue({
  el: '#app',
  router,
  components: { App },
  template: '',
});

The only required configuration options are the `host` option, which is the URL pointing to your Matomo server, and the `siteId` option, which is the numeric ID associated with your specific site. The router option is very valuable since it allows vue-matomo to automatically track page visits. The full list of configuration options can be found in the vue-matomo GitHub README.

Supporting multiple environments with tracking

It can be desirable to track different instances or environments of your application separately in Matomo if, for example, you want to keep your production analytics separate from any testing analytics done in a lower environment. This can be achieved by setting up multiple sites in your Matomo instance using the process described here and then changing the siteId in your vue-matomo configuration based on an environment variable.
const environmentSiteIdMap = {
  development: 1,
  staging: 2,
  production: 3,
};

Vue.use(VueMatomo, {
  host: '{YOUR_MATOMO_INSTANCE_URL}',
  siteId: environmentSiteIdMap[process.env.ENV],
  router,
});

This can allow you to have a test environment with working analytics in order to test tracking, or any other situation in which you would want to have separate analytics on different instances.

Usage

After configuration, vue-matomo will automatically load the Matomo tracker code whenever the app is started, as well as automatically track page views based on route changes. You can also manually interface with the Matomo tracker library by referencing this.$matomo, which is provided to all components. It’s important to note that vue-matomo asynchronously loads the tracker, so you should always guard your calls to $matomo using either an if statement or optional chaining, like so: this.$matomo?.trackPageView().

Customizing Automatic Page Tracking Behavior

One of the most impactful decisions I found in implementing Matomo was deciding which type of route changes should be tracked. By default Matomo will track any type of route change, including changes to the path, query params, or URL fragments. Depending on how your application is set up, this could result in inconsistent or misleading aggregate page view data. For example, if you have a search bar that updates a search query parameter, every time a user types something into the bar, vue-matomo will record a new page view. This could lead you to believe that your search page is the most popular feature in your app, even if it’s only a few people who have a lot of things to search.

In order to allow more fine-grained control of which changes get tracked, I created a fork of the original library which adds an additional configuration option that allows you to determine specifically which route changes get tracked. The option, called trackInteraction, takes a predicate function that, given the previous and destination routes, returns a boolean that determines if that route change will be tracked as a page view. For example, in this configuration vue-matomo will only track a page view when either the route path or hash fragment changes, but not when only the query params change.

Vue.use(VueMatomo, {
  host: '{YOUR_MATOMO_INSTANCE_URL}',
  siteId: {YOUR_SITE_ID},
  router,
  trackInteraction: (to, from) => {
    // If this is the first route visited, then always record a page visit
    if (!from) {
      return true;
    }
    // Return true if the path or hash changed, but not anything else
    return to.path !== from.path || to.hash !== from.hash;
  },
});

Another thing to keep in mind is that Matomo allows you to track both the page URL and the page name. I’ve found that properly tracking route names gives you better visit grouping in the Matomo interface, especially when working with dynamic routes. The prime example of this is a route with a variable ID such as /users/1, where 1 is the ID of the user you are viewing. If only the URL is tracked, then page views on separate user pages will not be properly grouped together, so you would see 3 views for /users/1 and 7 views for /users/3 when really you just want to see 10 views in total for the user page. If you associate a name with the route, such as ‘User Profile’, then they can be easily aggregated both separately as URLs and together under a shared name. By default vue-matomo will reference the route meta property called title.
So the following router code will allow you to track the title “User Profile” along with the URL /users/1.

const routes = [
  {
    path: "/users/:userId",
    component: UserProfile,
    meta: {
      title: "User Profile",
    },
  },
];

Using Events to Track Specific User Actions

Besides page views, Matomo also allows you to track arbitrary user interactions in the form of events. This allows you to track whenever a user clicks a button, scrolls down a page, or performs any other page interaction that can trigger JavaScript.

async updateProfilePicture() {
  await ProfileService.updateProfilePicture();
  this.$matomo?.trackEvent("User Settings", "Update Profile Picture");
}

Whenever the updateProfilePicture method is called and the update is successful, this will track a new event occurrence with “User Settings” being the event category and “Update Profile Picture” being the event action. The event category is used to group similar events, and the event action is the name of the specific interaction you are tracking. You can also record an event name and a numeric event value for additional grouping and context. I’ve found that events are the best way to track application feature usage and provide additional context when troubleshooting bugs.

(Graph showing instances of events with the “User Settings” event category and the “Update Profile Picture” event action over time.)

User ID Tracking

It can also be valuable to associate each visit with a user ID. If you set the user ID of the authenticated user before logging their visits and events, you will have a more accurate measure of your number of unique visitors and the timeline of each user’s visits. This can also help you associate page visits and events with specific database records, helping with troubleshooting. In order to track a visitor’s user ID you could include it in your initial Matomo configuration using the userId property, but I’ve found that more often than not the application has already been initialized by the time the user is authenticated. For this case you can add an explicit setUserId call to your authentication code after the user has been authenticated, like the following example.

if (window?._paq) {
  window._paq.push(["setUserId", user.id]);
}

Conclusion

Adding user tracking to your web application can be very beneficial for understanding feature usage, following user journeys, and troubleshooting bugs. If Matomo meets the needs of your use case, then vue-matomo is a great library that can save implementation time and simplify your Matomo usage. Using events, user ID tracking, and customizing vue-matomo’s router integration can help you record beneficial data that accurately reflects how your application is being used. If you plan on implementing Matomo using vue-matomo, I would highly recommend reading through all the available tracking options and doing plenty of experimentation to make sure that you are tracking usage in the easiest and most helpful way possible.

The post Tracking Analytics in Vue with Matomo appeared first on Simple Thread.


How to Neutralize the Biggest Threat to Your Online Security (You)
How to Neutralize the Biggest Threat to Your Onlin ...

Another day, another data breach.   Isn’t this all starting to seem a little too familiar? I don’t know about you, but the endless parade of disclosures is taking up entirely too much space in my news feed, pushing out important information on giant arcade cabinets and open source espresso machines. How is this still such a problem when we’ve all moved on to strong, randomly-generated, single-use passwords stored in password managers and multi-factor authentication? (Hold on, you haven’t done that? Go take care of that right now! I’ll wait.) Human Error Well, what do all these incidents have in common (besides giving CISOs heartburn)? Human error. Regardless of any other measures in place, at some point a human was given the sole responsibility for doing the right thing and they fumbled it. Hey, it happens. Even the smartest of us are extremely fallible creatures and this should surprise no one. What should be surprising is how, even armed with this knowledge, we insist on adopting security practices that assume anything we can usually get right we will always get right. Can you imagine living in a world where that was true? The initial foothold in most of these attacks was a successful phishing attempt. It might have been a counterfeit login page. It might have been a believable phone call from “customer service”. One way or another, someone was convinced to give out sensitive credentials to someone or something they shouldn’t have. It’s a classic because it works. You wouldn’t fall for that, right? You always check the headers and never click the links. You always hang up and call them back at the official number. You haven’t opened an email attachment since ActiveX roamed the earth. (Wow, it still does. Who knew?) But do you ever get tired? Or busy? Distracted, stressed, even hungry? No? I love the smell of swagger and hubris in the morning. Can you say the same thing about every one of your co-workers? How about your customers? Picture the least alert person you can imagine using a system you care about, and ask yourself why the integrity of that system should rely on their attentiveness. At least one of these incidents started with a push bombing. On the face of it those seem pretty easy to avoid, right? Just don’t approve MFA prompts unless you’re actually attempting to sign in. But there’s no rule that limits these attacks to times when you have your game face on. Do you really want to trust your security to your reactions when woken up at 3am by a nonstop stream of notifications, with your lizard brain still in charge of make bad noise stop? Would you agree that a system with a temperamental meat computer as a single point of failure is suboptimal if there are alternatives? If so my friend, I think you’re ready to hear about phishing-resistant MFA. What’s Wrong With Most MFA? Time-based One-Time Password (TOTP) authentication relies on a shared secret and a visible code. Only your authenticator app and the service you’re authenticating with know the secret for generating the correct code at any given moment. The service asks for the code, you provide it, and that proves to the service that you are you. But you get no such assurance from the service. This leaves you almost as vulnerable to phishing as if you weren’t using MFA at all. Instead of convincing you to share only your password the attacker also has to trick you into sharing your code, but the only real obstacle is whether they can act on that code before it expires. Another common approach is MFA via push notification. 
You attempt to access a service, it sends a push notification to your registered mobile device, you approve the access request, and that “proves” to the service that you’re the one attempting to log in. But as increasing numbers of push bombing incidents show, the fact that you were convinced to interact with a notification isn’t a guarantee of intentionality. MFA via SMS, email or voice is a train wreck, with all the same vulnerabilities as the methods above and some exciting unique additions like SIM swap attacks. Friends don’t let friends MFA this way. Which is naturally why it’s the only form of MFA most financial institutions support. Phishing-Resistant MFA This term applies to two categories of authentication. PKI-based MFA (public key infrastructure, generally encountered as smart cards) has been around for decades. But since it depends on having that infrastructure in place, and strong identity management, it’s generally the province of government agencies and large enterprises and is less supported by the types of services many of us use. The odds are good that if PKI makes sense for you you’re already using it and are in a better position to write about it than I am. But do it on your own time. A more appropriate option for most people is FIDO (Fast IDentity Online) authentication. Those links at the top of the post? I bet I snuck something past you. The last attack, on Cloudflare, didn’t actually result in a breach. Why not? Because everyone at Cloudflare authenticates with a FIDO2-compliant key that enforces origin binding with public key cryptography. Their write-up does a great job of explaining how the attack worked and how it would have played out if they were using standard TOTP MFA, but glosses over how it fizzled out when it ran into FIDO. Unlike TOTP, FIDO doesn’t rely on a single shared secret known to both the authenticator and the service. When a hardware key is registered with a service the device generates a new public-private key pair. The public key goes to the service, while the private key never leaves the secure storage of the device, where it’s tied to the identity of the service. During authentication, the service sends a challenge to the device. The device finds the private key tied to that service identity and uses it to sign the challenge. The service uses the public key to verify that the challenge was signed by the real private key and allows the connection. This process delivers some very powerful assurances. There is no user-facing code you can be tricked into revealing. Only the private key can successfully sign the challenge, so the service can be sure the hardware key is authentic. But the device will only be able to find a private key for the exact service it was registered with. It’s not going to be fooled by a phishing site at the wrong url, regardless of how good a forgery it is. The only way around the origin binding I’m aware of would be for the attacker to poison the victim’s DNS so their phishing site was accessible through the correct url for the real service and have a valid SSL certificate for that domain. That would involve a compromise of the user’s machine significant enough for the attacker to add their own certificate authority as a trusted root, or the ability to generate valid certificates for the service’s domain. If either of those are true you’re going to have a bad day regardless of the security process you’re using. 
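To make that challenge-response flow concrete, here is a toy sketch in C#. It is not the real WebAuthn/FIDO2 wire format, and every name in it (ToyAuthenticator, ToyService, the example origin) is illustrative; it only shows why a key pair bound to the registered origin has nothing to offer a look-alike phishing domain.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

// Toy model of origin binding: one key pair per registered origin,
// and the authenticator only signs for the origin it actually sees.
class ToyAuthenticator
{
    private readonly Dictionary<string, ECDsa> _keysByOrigin = new();

    // Registration: create a key pair for this origin, hand back only the public key.
    public byte[] Register(string origin)
    {
        var key = ECDsa.Create(ECCurve.NamedCurves.nistP256);
        _keysByOrigin[origin] = key;
        return key.ExportSubjectPublicKeyInfo();
    }

    // Authentication: sign the challenge together with the observed origin.
    // A phishing domain has no registered key, so there is nothing to sign.
    public byte[]? SignChallenge(string observedOrigin, byte[] challenge)
    {
        if (!_keysByOrigin.TryGetValue(observedOrigin, out var key)) return null;
        var payload = Encoding.UTF8.GetBytes(observedOrigin).Concat(challenge).ToArray();
        return key.SignData(payload, HashAlgorithmName.SHA256);
    }
}

class ToyService
{
    private const string Origin = "https://real-service.example"; // illustrative origin
    private readonly byte[] _registeredPublicKey;

    public ToyService(byte[] publicKey) => _registeredPublicKey = publicKey;

    public byte[] NewChallenge() => RandomNumberGenerator.GetBytes(32);

    // The signature only verifies if it covers this exact origin and this challenge.
    public bool Verify(byte[] challenge, byte[] signature)
    {
        using var key = ECDsa.Create();
        key.ImportSubjectPublicKeyInfo(_registeredPublicKey, out _);
        var payload = Encoding.UTF8.GetBytes(Origin).Concat(challenge).ToArray();
        return key.VerifyData(payload, signature, HashAlgorithmName.SHA256);
    }
}

Register the authenticator against https://real-service.example and SignChallenge returns a valid signature for that origin; ask it to sign for a look-alike such as https://rea1-service.example and it returns null, because no key exists for that origin and there is no user-visible code for the victim to hand over.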
FIDO also sidesteps the issues with push notifications by tying the authentication mechanism directly to the device attempting to authenticate. The hardware key is plugged into the web browsing device (literally or wirelessly) and all interaction between the key and the service goes through the web browser, initiated only by the user’s actions there. There’s no question that the user (or at least the key) is in fact present at the point of login. I’m sure by now you’ve come up with at least one reason why FIDO sounds nice but would never work for you. Come at me. Does anyone even support this thing? You’d be surprised. Microsoft, Apple, Linux and Android all support FIDO at the system level. Browser compatibility is strong: Chrome, Firefox, Edge, Safari, Opera, Vivaldi. The major cloud service providers are all covered, as well as common tools like GitHub and Dropbox. All this sounds great for proving that the key is present, but how does it prove I’m the one using it? What happens if it’s been stolen? That’s a great point. FIDO is definitely designed to counter remote attackers. Local attackers with physical access to your key aren’t part of the threat model the bulk of the specs are addressing. That’s why, even though FIDO2 in particular is touted as sufficient authentication unto itself, no passwords required, I myself would never go that far. This is where the “multi” in multi-factor authentication really comes into play. The hardware key is something you have, but I would still recommend requiring something you know, whether it’s a password on the account or a PIN on the key (which is absolutely something you can set). The options for unlocking the hardware key are largely up to the manufacturer, but many also come with biometric options like fingerprint readers, so you can also throw something you are into the mix. What about when I lose the key? Yeah, don’t do that. Kidding! Best practice is to have at least one backup key, stored in a different location. The point of the hardware key is to prevent the private keys from ever being readable from outside, which means there’s no way to simply clone a backup. You’re going to need to register each key separately with each service. Not ideal, I know, but it doesn’t have to be as tedious as it sounds either. A common strategy is to only protect the most sensitive accounts with the hardware key directly, and to use TOTP for the rest, but to use a TOTP authenticator app that supports being locked behind the hardware key. This still provides some of the FIDO benefits (no one can access your authenticator without your key) while minimizing how often keys need to be registered with a new service. I’m never going to remember to have this thing with me. You don’t have a keychain? You still have options, by tethering keys to specific devices. Low-profile nano keys are available that can be left in a USB port, giving that machine a more or less permanent authentication connection. And many machines come with built-in trusted platform modules specifically for protecting this kind of information. Windows devices using Hello, Apple devices with Touch ID or Face ID, and some Android phones can all be used as authenticators. My phone isn’t supported as an authenticator. And the idea of plugging in a key every time I want to authenticate sounds ridiculous, let alone leaving something permanently attached to my phone. Hardware keys also come in NFC and Bluetooth flavors. Tap to auth! This sounds expensive.
It’s very likely at least some of the devices you use regularly already support FIDO. But yes, hardware security keys aren’t cheap. Neither are identity theft or corporate data breaches.   There, did you get it out of your system? No? Or have you already dashed off to try it? Either way, let us know! The post How to Neutralize the Biggest Threat to Your Online Security (You) appeared first on Simple Thread.


Data structure in C-Sharp Software Development Not ...
Category: Algorithms

In this article, I will keep notes about different #data #structures and why I should use ...


Views: 0 Likes: 39
What's New: High Paying Jobs and How to stay Produ ...
Category: General

Hello Software Developers,Here is the update for this weekThis week at Er ...


Views: 0 Likes: 39
Auto sign-out using ASP.NET Core Razor Pages with Azure AD B2C
Auto sign-out using ASP.NET Core Razor Pages with ...

This article shows how an ASP.NET Core Razor Page application could implement an automatic sign-out when a user does not use the application for n minutes. The application is secured using Azure AD B2C. To remove the session, the client must sign out both on the ASP.NET Core application and on the Azure AD B2C identity provider, or whatever identity provider you are using. Code: https://github.com/damienbod/AspNetCoreB2cLogout Sometimes clients require that an application supports automatic sign-out in an SSO environment. An example of this is when a user uses a shared computer and does not click the sign-out button. The session would remain active for the next user. This method is not foolproof as the end user could save the credentials in the browser. If you need a better solution, then SSO and rolling sessions should be avoided, but this leads to a worse user experience. The ASP.NET Core application is protected using Microsoft.Identity.Web. This takes care of the client authentication flows using Azure AD B2C as the identity provider. Once authenticated, the session is stored in a cookie. A distributed cache is added to record the last activity of each user. An IAsyncPageFilter implementation is used and added as a global filter to all requests for Razor Pages. The SessionTimeoutAsyncPageFilter class implements the IAsyncPageFilter interface. builder.Services.AddDistributedMemoryCache(); builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme) .AddMicrosoftIdentityWebApp(builder.Configuration, "AzureAdB2c") .EnableTokenAcquisitionToCallDownstreamApi(Array.Empty<string>()) .AddDistributedTokenCaches(); builder.Services.AddAuthorization(options => { options.FallbackPolicy = options.DefaultPolicy; }); builder.Services.AddSingleton<SessionTimeoutAsyncPageFilter>(); builder.Services.AddRazorPages() .AddMvcOptions(options => { options.Filters.Add(typeof(SessionTimeoutAsyncPageFilter)); }) .AddMicrosoftIdentityUI(); The IAsyncPageFilter interface is used to catch the requests for the Razor Pages. The OnPageHandlerExecutionAsync method is used to implement the automatic end session logic. We use the default name identifier claim type to get an ID for the user. If using the standard claims instead of the Microsoft namespace mapping, this would be different. Match the claim returned in the id_token from the OpenID Connect authentication. I check for idle time. If no request was sent in the last n minutes, the application will sign out, both from the local cookie and on Azure AD B2C. It is important to sign out on the identity provider as well. If the idle time is less than the allowed time span, the DateTime timestamp is persisted to the cache. public async Task OnPageHandlerExecutionAsync(PageHandlerExecutingContext context, PageHandlerExecutionDelegate next) { var claimTypes = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier"; var name = context.HttpContext .User .Claims .FirstOrDefault(c => c.Type == claimTypes)! .Value; if (name == null) throw new ArgumentNullException(nameof(name)); var lastActivity = GetFromCache(name); if (lastActivity != null && lastActivity.GetValueOrDefault() .AddMinutes(timeoutInMinutes) < DateTime.UtcNow) { await context.HttpContext .SignOutAsync(CookieAuthenticationDefaults.AuthenticationScheme); await context.HttpContext .SignOutAsync(OpenIdConnectDefaults.AuthenticationScheme); } AddUpdateCache(name); await next.Invoke(); } A distributed cache is used to persist the user idle time from each session.
This might be expensive for applications with many users. In this demo, the UTC now value is used for the check. This might need to be improved, and the cache length as well; it needs to be validated whether this is enough for all the different combinations of timeouts. private void AddUpdateCache(string name) { var options = new DistributedCacheEntryOptions() .SetSlidingExpiration(TimeSpan .FromDays(cacheExpirationInDays)); _cache.SetString(name, DateTime .UtcNow.ToString("s"), options); } private DateTime? GetFromCache(string key) { var item = _cache.GetString(key); if (item != null) { return DateTime.Parse(item); } return null; } When the session times out, the code executes the OnPageHandlerExecutionAsync method and signs the user out. This works for Razor Pages. This is not the only way of supporting this, and it is not an easy requirement to fully implement. The next step would be to support this from SPA UIs which send JavaScript or AJAX requests. Links https://learn.microsoft.com/en-us/azure/active-directory-b2c/openid-connect#send-a-sign-out-request https://learn.microsoft.com/en-us/aspnet/core/razor-pages/filter?view=aspnetcore-7.0 https://github.com/AzureAD/microsoft-identity-web
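One gap in the filter snippets above: the _cache, timeoutInMinutes and cacheExpirationInDays members of SessionTimeoutAsyncPageFilter are never declared in the post. A minimal skeleton of how they might be wired up, assuming constructor injection of IDistributedCache and illustrative values for the two settings:

using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.Extensions.Caching.Distributed;

public class SessionTimeoutAsyncPageFilter : IAsyncPageFilter
{
    private readonly IDistributedCache _cache;
    private const int timeoutInMinutes = 15;     // assumed idle limit, adjust as required
    private const int cacheExpirationInDays = 1; // assumed sliding expiration for cache entries

    public SessionTimeoutAsyncPageFilter(IDistributedCache cache)
    {
        _cache = cache;
    }

    // Required by IAsyncPageFilter; nothing to do at handler selection time.
    public Task OnPageHandlerSelectionAsync(PageHandlerSelectedContext context)
        => Task.CompletedTask;

    // OnPageHandlerExecutionAsync, AddUpdateCache and GetFromCache as shown above.
}

Because the filter is registered with AddSingleton and added via AddMvcOptions, the IDistributedCache instance registered by AddDistributedMemoryCache is resolved through the constructor.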


Software Security Vs Performance
Category: Technology

According to my finding, it is heavily articulated in the Software Engineering Community that Securi ...


Views: 279 Likes: 99
DotNet Software Development and Performance Tools
Category: .Net 7

[11/11/2022] Bombardia Web Stress Testing Tools<a h ...


Views: 0 Likes: 75
A first look at Blazor and .NET 8
A first look at Blazor and .NET 8

In this post, Blazor and .NET 8 are used to implement a simple website. I took a .NET 7 project, updated it to .NET 8 and tried out some of the new features in .NET 8. Code: https://github.com/damienbod/Hostedblazor8Aad Setup The project was set up using a .NET 7 project which implements Azure AD authentication following best practice with a backend for frontend architecture, and then updated to .NET 8. The security is implemented in the secure backend and the Blazor components are kept simple. The Blazor.BFF.AzureAD.Template template was used for this, which takes care of all the project setup. At present no Microsoft template exists for implementing the security in this recommended way. The template adds the security headers as best it can. The project was updated to .NET 8 and all the Nuget packages as well. <TargetFramework>net8.0</TargetFramework> Microsoft.Identity.Web is used to implement the OpenID Connect confidential client. An Azure App registration was created for this with the Web client and a user secret. You could also use a certificate instead of a secret, which improves the token request in the second step of the OIDC code flow authentication. The application was started and, like in .NET 7, we still have the annoying console warnings because the debugging tools try to add inline scripts to our code. The inline scripts are blocked by the CSP, and this should be required for all deployments. I like to develop my application as close as possible to my target deployments, so I always develop with the best possible CSP and HTTPS like in the deployed applications. This prevents having to fix CSP issues when we go live or having to fix links to CSS CDNs or whatever. We also have a warning in the console logs looking for a JS map file from something we do not use. No idea where this comes from or what adds it to my development environment. 2023-05-18: The CSP bug has now been fixed in the latest VS preview release https://developercommunity.visualstudio.com/t/browserlink-CSP-support-NET-7/10061464 Creating Random data from Arrays In .NET 8 GetItems() was added to System.Random. I decided to create my test data using this. I created an array of objects and returned this as a span. public static ReadOnlySpan<MyGridData> GetData() { return _mydata.AsSpan(); } The Random.Shared.GetItems method can be used to return n items from my span in a random way. I set this to 24 items, which can then be displayed in the Grid. [HttpGet] public IEnumerable<MyGridData> Get() { return Random.Shared.GetItems(MyData.GetData(), 24); } Using QuickGrid in Blazor The QuickGrid component was also added in .NET 8. This provides simple Grid features. The Nuget package needs to be added to the client (WASM) project. Microsoft.AspNetCore.Components.QuickGrid The QuickGrid can be used in any Razor page in the WASM application. You need to add the using for the Grid and you can create the grid as required.
The Grid has good documentation here: https://aspnet.github.io/quickgridsamples @page "/directapi" @using HostedBlazorAad.Shared @using Microsoft.AspNetCore.Components.QuickGrid @inject IAntiforgeryHttpClientFactory httpClientFactory @inject IJSRuntime JSRuntime <h3>QuickGrid display using data Direct API</h3> @if (myApiData == null) { <p><em>Loading...</em></p> } else { <hr /> <QuickGrid Items="@FilteredItems" Pagination="@pagination"> <PropertyColumn Property="@(p => p.Id)" Sortable="true" /> <PropertyColumn Property="@(c => c.Name)" Sortable="true" Class="name"> <ColumnOptions> <div class="search-box"> <input type="search" autofocus @bind="nameFilter" @bind:event="oninput" placeholder="name..." /> </div> </ColumnOptions> </PropertyColumn> <PropertyColumn Property="@(p => p.Colour)" Sortable="true" /> </QuickGrid> <Paginator State="@pagination" /> } @code { private IEnumerable<MyGridData>? myApiData; private IQueryable<MyGridData> myGridData = new List<MyGridData>().AsQueryable(); private PaginationState pagination = new PaginationState { ItemsPerPage = 8 }; private string nameFilter = string.Empty; GridSort<MyGridData> rankSort = GridSort<MyGridData> .ByDescending(x => x.Name) .ThenDescending(x => x.Colour) .ThenDescending(x => x.Id); IQueryable<MyGridData>? FilteredItems => myGridData.Where(x => x.Name.Contains(nameFilter, StringComparison.CurrentCultureIgnoreCase)); protected override async Task OnInitializedAsync() { var client = await httpClientFactory.CreateClientAsync(); myApiData = await client.GetFromJsonAsync<MyGridData[]>("api/DirectApi"); if (myApiData != null) myGridData = myApiData.AsQueryable(); } } The 24 random items are displayed in the grid using paging and sorting with eight items per page. This is client-side and not server-side paging, which is important if using large amounts of data. Notes Blazor and .NET 8 will change a lot and new templates and project types are being created for Blazor and .NET 8. Blazor United, or whatever it will be called after the release, will be a new type of Blazor project and the 3-project structure will probably be reduced down to one. I hope the security will be improved and I don’t understand why Microsoft still does security in the WASM part of the application when it is hosted in an ASP.NET Core backend. Links https://learn.microsoft.com/en-us/dotnet/core/whats-new/dotnet-8 https://github.com/damienbod/Blazor.BFF.AzureAD.Template https://dotnet.microsoft.com/en-us/download/visual-studio-sdks https://aspnet.github.io/quickgridsamples


Software Development Good Practices
Category: .Net 7

Knowledge Collected Over the Years of Developing Design your soft ...


Views: 231 Likes: 70
Be Aware of Memory Leak in Software Application
Category: Technology

Memory Leak in Software Application<div style="text-align ce ...


Views: 333 Likes: 98
Writing Tips for Improving Your Pull Requests
Writing Tips for Improving Your Pull Requests

You’ve just finished knocking out a complex feature. You’re happy with the state of the code, you’re a bit brain-fried, and the only thing between you and the finish line is creating a pull request. You’re not going to leave the description field blank, are you? You’re tired, you want to be done, and can’t people just figure out what you did by looking at the code? I get it. The impulse to skip the description is strong, but a little effort will go a long way toward making your coworkers’ lives easier when they review your code. It’s courteous, and–lucky for you!–it doesn’t have to be hard. If you’re thinking I’m going to suggest writing a book in the description field, you’re wrong. In fact, I’m going to show you how to purposely write less by using the techniques below. Make it Scannable If your code is a report for the board of directors, your pull request description is the executive summary. It should be short and easy to digest while packing in as much important information as possible. The best way to achieve this combination is to make the text scannable. You can use bold or italic text to draw emphasis to important details in a paragraph. However, the best way to increase scan-ability is the liberal application of bulleted lists. Most of my PR descriptions start like this:
If merged, this PR will
Add a Widget model
Add a controller for performing CRUD on Widgets
Update routes.rb to include paths for Widgets
Update user policies to ensure only admins can delete Widgets
Add tests for policy changes
…
There are a few things to note here. I’m using callouts to bring attention to important changes, including the object that’s being added and important files that are being modified. The sentences are short and digestible. They contain one useful piece of information each. And, for readability, they all start with a capital letter and end with no punctuation. Consistency of formatting makes for easier reading. Speak Plainly Simpler words win if you’re trying to quickly convey meaning, and normal words are preferable to jargon. Here are a few examples:
* Replace utilize with use. They have different meanings, and you’re likely wanting the meaning of use, which has the added bonus of being fewer characters.
* Replace ask with request. “The ask here is to replace widget A with widget B.” Ask is not a noun; it’s a verb.
* Replace operationalize with do. A savings of 12 characters and 5 syllables!
There are loads of words that we use daily that could be replaced with something simpler; I bet you can think of a few off the top of your head. For more examples, see my book recommendations at the end of this article. Avoid Adverbs Piggybacking on the last suggestion, adverbs can often be dropped to tighten up your prose. Spotting an adverb is easy. Look for words that end in -ly. Really, vastly, quickly, slowly–these are adverbs and they usually can be removed without changing the meaning of your sentence. Here’s an example:
“Replace a really slowly performing ActiveRecord query with a faster raw SQL query”
“Replace a slow ActiveRecord query with a faster raw SQL query”
Since we dropped the adverbs, performing doesn’t work on its own, so we can remove it and save even more characters. Simplify Your Sentences Sentences can sometimes end up unnecessarily bloated. Take this example: “The reason this is marked DO NOT MERGE is because we’re missing the final URL for the SSO login path.” The reason this is can be shortened to simply this is. The is before because is unnecessary and can be removed.
And the last part of the sentence can be rejiggered to be more direct while eliminating an unnecessary prepositional phrase. The end result is succinct: “This is marked DO NOT MERGE because we’re missing the SSO login path’s production URL.” Bonus Round Avoid Passive Voice Folks tend to slip into passive voice when talking about bad things like bugs or downtime. Uncomfortable things make people want to ensure they’re dodging–or not assigning–blame. I’m not saying you should throw someone under the bus for a bug, but it helps to be direct when writing about your code. “We were asked to implement the feature that caused this bug by the sales team.” The trouble here is were asked. This makes the sentence sound weak. Luckily, a rewrite is easy: “The sales team asked us to implement the feature that caused this bug.” By moving the subject from the end of the sentence to the beginning, we ditch the unnecessary prepositional phrase by the sales team, shorten the sentence, and the overall meaning is now clear and direct. There’s More! But we can’t cover it all here. If you want to dig deeper, I recommend picking up The Elements of Style. It’s a great starting point for improving your writing. Also, Junk English by Ken Smith is a fun guide for spotting and avoiding jargon, and there’s a sequel if you enjoy it. The post Writing Tips for Improving Your Pull Requests appeared first on Simple Thread.


Reset user account passwords using Microsoft Graph and application permissions in ASP.NET Core
Reset user account passwords using Microsoft Graph ...

This article shows how to reset a password for tenant members using a Microsoft Graph application client in ASP.NET Core. An Azure App registration is used to define the application permission for the Microsoft Graph client, and the User Administrator role is assigned to the Azure Enterprise application created from the Azure App registration. Code: https://github.com/damienbod/azuerad-reset Create an Azure App registration with the Graph permission An Azure App registration was created which requires a secret or a certificate. The Azure App registration has the application User.ReadWrite.All permission and is used to assign the Azure role. This client is only for application clients and not delegated clients. Assign the User Administrator role to the App Registration The User Administrator role is assigned to the Azure App registration (Azure Enterprise application per tenant). You can do this by using the User Administrator Assignments, where a new one can be added. Choose the Enterprise application corresponding to the Azure App registration and assign the role to be always active. Create the Microsoft Graph application client In the ASP.NET Core application, a new Graph application can be created using the Microsoft Graph SDK and Azure Identity. The GetChainedTokenCredentials method is used to authenticate using a managed identity for the production deployment or a user secret in development. You could also use a certificate. This is the managed identity from the Azure App service where the application is deployed in production. using Azure.Identity; using Microsoft.Graph; namespace SelfServiceAzureAdPasswordReset; public class GraphApplicationClientService { private readonly IConfiguration _configuration; private readonly IHostEnvironment _environment; private GraphServiceClient? _graphServiceClient; public GraphApplicationClientService(IConfiguration configuration, IHostEnvironment environment) { _configuration = configuration; _environment = environment; } /// <summary> /// gets a singleton instance of the GraphServiceClient /// </summary> public GraphServiceClient GetGraphClientWithManagedIdentityOrDevClient() { if (_graphServiceClient != null) return _graphServiceClient; string[] scopes = new[] { "https://graph.microsoft.com/.default" }; var chainedTokenCredential = GetChainedTokenCredentials(); _graphServiceClient = new GraphServiceClient(chainedTokenCredential, scopes); return _graphServiceClient; } private ChainedTokenCredential GetChainedTokenCredentials() { if (!_environment.IsDevelopment()) { // You could also use a certificate here return new ChainedTokenCredential(new ManagedIdentityCredential()); } else // dev env { var tenantId = _configuration["AzureAdGraphTenantId"]; var clientId = _configuration.GetValue<string>("AzureAdGraphClientId"); var clientSecret = _configuration.GetValue<string>("AzureAdGraphClientSecret"); var options = new TokenCredentialOptions { AuthorityHost = AzureAuthorityHosts.AzurePublicCloud }; // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential var devClientSecretCredential = new ClientSecretCredential( tenantId, clientId, clientSecret, options); var chainedTokenCredential = new ChainedTokenCredential(devClientSecretCredential); return chainedTokenCredential; } } } Reset the password Microsoft Graph SDK 4 Once the client is authenticated, the Microsoft Graph SDK can be used to implement the logic. You need to decide if SDK 4 or SDK 5 is used to implement the Graph client.
Most applications must still use Graph SDK 4, but no docs exist for this anymore. Refer to Stack Overflow or use trial and error. The application has one method to get the user and a second one to reset the password and force a change on the next authentication. This is ok for low security requirements, but a TAP with strong authentication should always be used if possible. using Microsoft.Graph; using System.Security.Cryptography; namespace SelfServiceAzureAdPasswordReset; public class UserResetPasswordApplicationGraphSDK4 { private readonly GraphApplicationClientService _graphApplicationClientService; public UserResetPasswordApplicationGraphSDK4(GraphApplicationClientService graphApplicationClientService) { _graphApplicationClientService = graphApplicationClientService; } private async Task<string> GetUserIdAsync(string email) { var filter = $"startswith(userPrincipalName,'{email}')"; var graphServiceClient = _graphApplicationClientService .GetGraphClientWithManagedIdentityOrDevClient(); var users = await graphServiceClient.Users .Request() .Filter(filter) .GetAsync(); return users.CurrentPage[0].Id; } public async Task<string?> ResetPassword(string email) { var graphServiceClient = _graphApplicationClientService .GetGraphClientWithManagedIdentityOrDevClient(); var userId = await GetUserIdAsync(email); if (userId == null) { throw new ArgumentNullException(nameof(email)); } var password = GetRandomString(); await graphServiceClient.Users[userId].Request() .UpdateAsync(new User { PasswordProfile = new PasswordProfile { Password = password, ForceChangePasswordNextSignIn = true } }); return password; } private static string GetRandomString() { var random = $"{GenerateRandom()}{GenerateRandom()}{GenerateRandom()}{GenerateRandom()}-AC"; return random; } private static int GenerateRandom() { return RandomNumberGenerator.GetInt32(100000000, int.MaxValue); } } Reset the password Microsoft Graph SDK 5 Microsoft Graph SDK 5 can also be used to implement the logic to reset the password and force a change on the next sign-in.
using Microsoft.Graph; using Microsoft.Graph.Models; using System.Security.Cryptography; namespace SelfServiceAzureAdPasswordReset; public class UserResetPasswordApplicationGraphSDK5 { private readonly GraphApplicationClientService _graphApplicationClientService; public UserResetPasswordApplicationGraphSDK5(GraphApplicationClientService graphApplicationClientService) { _graphApplicationClientService = graphApplicationClientService; } private async Task<string?> GetUserIdAsync(string email) { var filter = $"startswith(userPrincipalName,'{email}')"; var graphServiceClient = _graphApplicationClientService .GetGraphClientWithManagedIdentityOrDevClient(); var result = await graphServiceClient.Users.GetAsync((requestConfiguration) => { requestConfiguration.QueryParameters.Top = 10; if (!string.IsNullOrEmpty(email)) { requestConfiguration.QueryParameters.Search = $"\"userPrincipalName:{email}\""; } requestConfiguration.QueryParameters.Orderby = new string[] { "displayName" }; requestConfiguration.QueryParameters.Count = true; requestConfiguration.QueryParameters.Select = new string[] { "id", "displayName", "userPrincipalName", "userType" }; requestConfiguration.QueryParameters.Filter = "userType eq 'Member'"; // onPremisesSyncEnabled eq false requestConfiguration.Headers.Add("ConsistencyLevel", "eventual"); }); return result!.Value!.FirstOrDefault()!.Id; } public async Task<string?> ResetPassword(string email) { var graphServiceClient = _graphApplicationClientService .GetGraphClientWithManagedIdentityOrDevClient(); var userId = await GetUserIdAsync(email); if (userId == null) { throw new ArgumentNullException(nameof(email)); } var password = GetRandomString(); await graphServiceClient.Users[userId].PatchAsync( new User { PasswordProfile = new PasswordProfile { Password = password, ForceChangePasswordNextSignIn = true } }); return password; } private static string GetRandomString() { var random = $"{GenerateRandom()}{GenerateRandom()}{GenerateRandom()}{GenerateRandom()}-AC"; return random; } private static int GenerateRandom() { return RandomNumberGenerator.GetInt32(100000000, int.MaxValue); } } Any Razor Page can use the service and update the password. The Razor Page requires protection to prevent any user or bot from updating any other user account. Some type of secret is required to use the service, or an extra id which can be created by an internal IT admin. DDoS protection and bot protection are also required if the Razor Page is deployed to a public endpoint, and a delay after each request must also be implemented. Extreme caution needs to be taken when exposing this business functionality. private readonly UserResetPasswordApplicationGraphSDK5 _userResetPasswordApp; [BindProperty] public string Upn { get; set; } = string.Empty; [BindProperty] public string? Password { get; set; } = string.Empty; public IndexModel(UserResetPasswordApplicationGraphSDK5 userResetPasswordApplicationGraphSDK5) { _userResetPasswordApp = userResetPasswordApplicationGraphSDK5; } public void OnGet(){} public async Task<IActionResult> OnPostAsync() { if (!ModelState.IsValid) { return Page(); } Password = await _userResetPasswordApp .ResetPassword(Upn); return Page(); } The demo application can be started and a password from a local member can be reset. The https://mysignins.microsoft.com/security-info url can be used to test the new password and add MFA or whatever. Notes You can use this solution for applications with no user.
If using an administrator or a user to reset the passwords, then a delegated permission should be used with different Graph SDK methods and different Graph permissions. Links https://aka.ms/mysecurityinfo https://learn.microsoft.com/en-us/graph/api/overview?view=graph-rest-1.0 https://learn.microsoft.com/en-us/graph/sdks/paging?tabs=csharp https://learn.microsoft.com/en-us/graph/api/authenticationmethod-resetpassword?view=graph-rest-1.0&tabs=csharp
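Not shown in the post is how these classes get registered so that the IndexModel constructor injection above works. A minimal sketch of what the Program.cs registrations might look like; the lifetimes are assumptions (GraphApplicationClientService caches its GraphServiceClient, so a singleton fits):

// Assumed registrations in Program.cs; not shown in the original post.
builder.Services.AddSingleton<GraphApplicationClientService>();
builder.Services.AddScoped<UserResetPasswordApplicationGraphSDK4>();
builder.Services.AddScoped<UserResetPasswordApplicationGraphSDK5>();
builder.Services.AddRazorPages();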


An error occurred during the compilation of a reso ...
Category: .Net 7

Question Why is this error happening? "An error occurred during the compilation of a resource re ...


Views: 0 Likes: 33
How do I turn Black and White Movie into Color
Category: Research

Turning a black and white movie into color can be a challenging task, but it is definitely possi ...


Views: 0 Likes: 39
Provision Azure IoT Hub devices using DPS and X.509 certificates in ASP.NET Core
Provision Azure IoT Hub devices using DPS and X.50 ...

This article shows how to provision Azure IoT hub devices using Azure IoT hub device provisioning services (DPS) and ASP.NET Core. The devices are set up using chained certificates created using .NET Core and managed in the web application. The data is persisted in a database using EF Core and the certificates are generated using the CertificateManager Nuget package. Code: https://github.com/damienbod/AzureIoTHubDps Setup To set up a new Azure IoT Hub DPS, enrollment group and devices, the web application creates a new certificate using an ECDsa private key and the .NET Core APIs. The data is stored in two pem files, one for the public certificate and one for the private key. The pem public certificate file is downloaded from the web application and uploaded to the certificates blade in Azure IoT Hub DPS. The web application persists the data to a database using EF Core and SQL. A new certificate is created from the DPS root certificate and used to create a DPS enrollment group. The certificates are chained from the original DPS certificate. New devices are registered and created using the enrollment group. Another new device certificate chained from the enrollment group certificate is created per device and used in the DPS. The Azure IoT Hub DPS creates a new IoT Hub device using the linked IoT Hubs. Once the IoT hub is running, the private key from the device certificate is used to authenticate the device and send data to the server. When the ASP.NET Core web application is started, users can create new certificates and enrollment groups and add devices to the groups. I plan to extend the web application to add devices, delete devices, and delete groups. I plan to add authorization for the different user types and better paging for the different UIs. At present all certificates use ECDsa private keys, but this can easily be changed to other types. This depends on the type of root certificate used. The application is secured using Microsoft.Identity.Web and requires an authenticated user. This can be set up in the program file or in the startup extensions. I use EnableTokenAcquisitionToCallDownstreamApi to force the OpenID Connect code flow. The configuration is read from the default AzureAd app.settings and the whole application is required to be authenticated. When the enable and disable flows are added, I will add different users with different authorization levels. builder.Services.AddDistributedMemoryCache(); builder.Services.AddAuthentication( OpenIdConnectDefaults.AuthenticationScheme) .AddMicrosoftIdentityWebApp( builder.Configuration.GetSection("AzureAd")) .EnableTokenAcquisitionToCallDownstreamApi() .AddDistributedTokenCaches(); Create an Azure IoT Hub DPS certificate The web application is used to create devices using certificates and DPS enrollment groups. The DpsCertificateProvider class is used to create the root self-signed certificate for the DPS enrollment groups. The NewRootCertificate method from the CertificateManager Nuget package is used to create the certificate using an ECDsa private key. This package wraps the default .NET APIs for creating certificates and adds a layer of abstraction. You could just use the lower level APIs directly. The certificate is exported to two separate pem files and persisted to the database.
public class DpsCertificateProvider { private readonly CreateCertificatesClientServerAuth _createCertsService; private readonly ImportExportCertificate _iec; private readonly DpsDbContext _dpsDbContext; public DpsCertificateProvider(CreateCertificatesClientServerAuth ccs, ImportExportCertificate importExportCertificate, DpsDbContext dpsDbContext) { _createCertsService = ccs; _iec = importExportCertificate; _dpsDbContext = dpsDbContext; } public async Task<(string PublicPem, int Id)> CreateCertificateForDpsAsync(string certName) { var certificateDps = _createCertsService.NewRootCertificate( new DistinguishedName { CommonName = certName, Country = "CH" }, new ValidityPeriod { ValidFrom = DateTime.UtcNow, ValidTo = DateTime.UtcNow.AddYears(50) }, 3, certName); var publicKeyPem = _iec.PemExportPublicKeyCertificate(certificateDps); string pemPrivateKey = string.Empty; using (ECDsa? ecdsa = certificateDps.GetECDsaPrivateKey()) { pemPrivateKey = ecdsa!.ExportECPrivateKeyPem(); FileProvider.WriteToDisk($"{certName}-private.pem", pemPrivateKey); } var item = new DpsCertificate { Name = certName, PemPrivateKey = pemPrivateKey, PemPublicKey = publicKeyPem }; _dpsDbContext.DpsCertificates.Add(item); await _dpsDbContext.SaveChangesAsync(); return (publicKeyPem, item.Id); } public async Task<List<DpsCertificate>> GetDpsCertificatesAsync() { return await _dpsDbContext.DpsCertificates.ToListAsync(); } public async Task<DpsCertificate?> GetDpsCertificateAsync(int id) { return await _dpsDbContext.DpsCertificates.FirstOrDefaultAsync(item => item.Id == id); } } Once the root certificate is created, you can download the public pem file from the web application and upload it to the Azure IoT Hub DPS portal. This needs to be verified. You could also use a CA created certificate for this, if it is possible to create child chained certificates. The enrollment groups are created from this root certificate. Create an Azure IoT Hub DPS enrollment group Devices can be created in different ways in the Azure IoT Hub. We use a DPS enrollment group with certificates to create the Azure IoT devices. The DpsEnrollmentGroupProvider is used to create the enrollment group certificate. This uses the root certificate created in the previous step and chains the new group certificate from this. The enrollment group is used to add devices. Default values are defined for the enrollment group and the pem files are saved to the database. The root certificate is read from the database and the chained enrollment group certificate uses an ECDsa private key like the root self signed certificate. The CreateEnrollmentGroup method is used to set the initial values of the IoT Hub Device. The ProvisioningStatus is set to enabled. This means when the device is registered, it will be enabled to send messages. You could also set this to disabled and enable it after when the device gets used by an end client for the first time. A MAC or a serial code from the device hardware could be used to enable the IoT Hub device. By waiting till the device is started by the end client, you could choose a IoT Hub optimized for this client. 
public class DpsEnrollmentGroupProvider { private IConfiguration Configuration { get;set;} private readonly ILogger<DpsEnrollmentGroupProvider> _logger; private readonly DpsDbContext _dpsDbContext; private readonly ImportExportCertificate _iec; private readonly CreateCertificatesClientServerAuth _createCertsService; private readonly ProvisioningServiceClient _provisioningServiceClient; public DpsEnrollmentGroupProvider(IConfiguration config, ILoggerFactory loggerFactory, ImportExportCertificate importExportCertificate, CreateCertificatesClientServerAuth ccs, DpsDbContext dpsDbContext) { Configuration = config; _logger = loggerFactory.CreateLogger<DpsEnrollmentGroupProvider>(); _dpsDbContext = dpsDbContext; _iec = importExportCertificate; _createCertsService = ccs; _provisioningServiceClient = ProvisioningServiceClient.CreateFromConnectionString( Configuration.GetConnectionString("DpsConnection")); } public async Task<(string Name, int Id)> CreateDpsEnrollmentGroupAsync( string enrollmentGroupName, string certificatePublicPemId) { _logger.LogInformation("Starting CreateDpsEnrollmentGroupAsync..."); _logger.LogInformation("Creating a new enrollmentGroup..."); var dpsCertificate = _dpsDbContext.DpsCertificates .FirstOrDefault(t => t.Id == int.Parse(certificatePublicPemId)); var rootCertificate = X509Certificate2.CreateFromPem( dpsCertificate!.PemPublicKey, dpsCertificate.PemPrivateKey); // create an intermediate for each group var certName = $"{enrollmentGroupName}"; var certDpsGroup = _createCertsService.NewIntermediateChainedCertificate( new DistinguishedName { CommonName = certName, Country = "CH" }, new ValidityPeriod { ValidFrom = DateTime.UtcNow, ValidTo = DateTime.UtcNow.AddYears(50) }, 2, certName, rootCertificate); // get the public key certificate for the enrollment var pemDpsGroupPublic = _iec.PemExportPublicKeyCertificate(certDpsGroup); string pemDpsGroupPrivate = string.Empty; using (ECDsa? 
ecdsa = certDpsGroup.GetECDsaPrivateKey()) { pemDpsGroupPrivate = ecdsa!.ExportECPrivateKeyPem(); FileProvider.WriteToDisk($"{enrollmentGroupName}-private.pem", pemDpsGroupPrivate); } Attestation attestation = X509Attestation.CreateFromRootCertificates(pemDpsGroupPublic); EnrollmentGroup enrollmentGroup = CreateEnrollmentGroup(enrollmentGroupName, attestation); _logger.LogInformation("{enrollmentGroup}", enrollmentGroup); _logger.LogInformation("Adding new enrollmentGroup..."); EnrollmentGroup enrollmentGroupResult = await _provisioningServiceClient .CreateOrUpdateEnrollmentGroupAsync(enrollmentGroup); _logger.LogInformation("EnrollmentGroup created with success."); _logger.LogInformation("{enrollmentGroupResult}", enrollmentGroupResult); DpsEnrollmentGroup newItem = await PersistData(enrollmentGroupName, dpsCertificate, pemDpsGroupPublic, pemDpsGroupPrivate); return (newItem.Name, newItem.Id); } private async Task<DpsEnrollmentGroup> PersistData(string enrollmentGroupName, DpsCertificate dpsCertificate, string pemDpsGroupPublic, string pemDpsGroupPrivate) { var newItem = new DpsEnrollmentGroup { DpsCertificateId = dpsCertificate.Id, Name = enrollmentGroupName, DpsCertificate = dpsCertificate, PemPublicKey = pemDpsGroupPublic, PemPrivateKey = pemDpsGroupPrivate }; _dpsDbContext.DpsEnrollmentGroups.Add(newItem); dpsCertificate.DpsEnrollmentGroups.Add(newItem); await _dpsDbContext.SaveChangesAsync(); return newItem; } private static EnrollmentGroup CreateEnrollmentGroup(string enrollmentGroupName, Attestation attestation) { return new EnrollmentGroup(enrollmentGroupName, attestation) { ProvisioningStatus = ProvisioningStatus.Enabled, ReprovisionPolicy = new ReprovisionPolicy { MigrateDeviceData = false, UpdateHubAssignment = true }, Capabilities = new DeviceCapabilities { IotEdge = false }, InitialTwinState = new TwinState( new TwinCollection("{ \"updatedby\":\"" + "damien" + "\", \"timeZone\":\"" + TimeZoneInfo.Local.DisplayName + "\" }"), new TwinCollection("{ }") ) }; } public async Task<List<DpsEnrollmentGroup>> GetDpsGroupsAsync(int? certificateId = null) { if (certificateId == null) { return await _dpsDbContext.DpsEnrollmentGroups.ToListAsync(); } return await _dpsDbContext.DpsEnrollmentGroups .Where(s => s.DpsCertificateId == certificateId).ToListAsync(); } public async Task<DpsEnrollmentGroup?> GetDpsGroupAsync(int id) { return await _dpsDbContext.DpsEnrollmentGroups .FirstOrDefaultAsync(d => d.Id == id); } } Register a device in the enrollment group The DpsRegisterDeviceProvider class creates a new device chained certificate using the enrollment group certificate and registers the device using the ProvisioningDeviceClient. The transport ProvisioningTransportHandlerAmqp is set in this example. There are different transport types possible and you need to choose the one which best meets your needs. The device certificate uses an ECDsa private key and everything is stored to the database. The PFX for Windows is stored directly to the file system. I use pem files and create the certificate from these in the device client sending data to the hub, and this is platform independent. The created PFX file requires a password to use it.
public class DpsRegisterDeviceProvider { private IConfiguration Configuration { get; set; } private readonly ILogger<DpsRegisterDeviceProvider> _logger; private readonly DpsDbContext _dpsDbContext; private readonly ImportExportCertificate _iec; private readonly CreateCertificatesClientServerAuth _createCertsService; public DpsRegisterDeviceProvider(IConfiguration config, ILoggerFactory loggerFactory, ImportExportCertificate importExportCertificate, CreateCertificatesClientServerAuth ccs, DpsDbContext dpsDbContext) { Configuration = config; _logger = loggerFactory.CreateLogger<DpsRegisterDeviceProvider>(); _dpsDbContext = dpsDbContext; _iec = importExportCertificate; _createCertsService = ccs; } public async Task<(int? DeviceId, string? ErrorMessage)> RegisterDeviceAsync( string deviceCommonNameDevice, string dpsEnrollmentGroupId) { int? deviceId = null; var scopeId = Configuration["ScopeId"]; var dpsEnrollmentGroup = _dpsDbContext.DpsEnrollmentGroups .FirstOrDefault(t => t.Id == int.Parse(dpsEnrollmentGroupId)); var certDpsEnrollmentGroup = X509Certificate2.CreateFromPem( dpsEnrollmentGroup!.PemPublicKey, dpsEnrollmentGroup.PemPrivateKey); var newDevice = new DpsEnrollmentDevice { Password = GetEncodedRandomString(30), Name = deviceCommonNameDevice.ToLower(), DpsEnrollmentGroupId = dpsEnrollmentGroup.Id, DpsEnrollmentGroup = dpsEnrollmentGroup }; var certDevice = _createCertsService.NewDeviceChainedCertificate( new DistinguishedName { CommonName = $"{newDevice.Name}" }, new ValidityPeriod { ValidFrom = DateTime.UtcNow, ValidTo = DateTime.UtcNow.AddYears(50) }, $"{newDevice.Name}", certDpsEnrollmentGroup); var deviceInPfxBytes = _iec.ExportChainedCertificatePfx(newDevice.Password, certDevice, certDpsEnrollmentGroup); // This is required if you want PFX exports to work. newDevice.PathToPfx = FileProvider.WritePfxToDisk($"{newDevice.Name}.pfx", deviceInPfxBytes); // get the public key certificate for the device newDevice.PemPublicKey = _iec.PemExportPublicKeyCertificate(certDevice); FileProvider.WriteToDisk($"{newDevice.Name}-public.pem", newDevice.PemPublicKey); using (ECDsa? ecdsa = certDevice.GetECDsaPrivateKey()) { newDevice.PemPrivateKey = ecdsa!.ExportECPrivateKeyPem(); FileProvider.WriteToDisk($"{newDevice.Name}-private.pem", newDevice.PemPrivateKey); } // setup Windows store deviceCert var pemExportDevice = _iec.PemExportPfxFullCertificate(certDevice, newDevice.Password); var certDeviceForCreation = _iec.PemImportCertificate(pemExportDevice, newDevice.Password); using (var security = new SecurityProviderX509Certificate(certDeviceForCreation, new X509Certificate2Collection(certDpsEnrollmentGroup))) // To optimize for size, reference only the protocols used by your application. 
using (var transport = new ProvisioningTransportHandlerAmqp(TransportFallbackType.TcpOnly)) //using (var transport = new ProvisioningTransportHandlerHttp()) //using (var transport = new ProvisioningTransportHandlerMqtt(TransportFallbackType.TcpOnly)) //using (var transport = new ProvisioningTransportHandlerMqtt(TransportFallbackType.WebSocketOnly)) { var client = ProvisioningDeviceClient.Create("global.azure-devices-provisioning.net", scopeId, security, transport); try { var result = await client.RegisterAsync(); _logger.LogInformation("DPS client created {result}", result); } catch (Exception ex) { _logger.LogError("DPS client created {result}", ex.Message); return (null, ex.Message); } } _dpsDbContext.DpsEnrollmentDevices.Add(newDevice); dpsEnrollmentGroup.DpsEnrollmentDevices.Add(newDevice); await _dpsDbContext.SaveChangesAsync(); deviceId = newDevice.Id; return (deviceId, null); } private static string GetEncodedRandomString(int length) { var base64 = Convert.ToBase64String(GenerateRandomBytes(length)); return base64; } private static byte[] GenerateRandomBytes(int length) { var byteArray = new byte[length]; RandomNumberGenerator.Fill(byteArray); return byteArray; } public async Task<List<DpsEnrollmentDevice>> GetDpsDevicesAsync(int? dpsEnrollmentGroupId) { if(dpsEnrollmentGroupId == null) { return await _dpsDbContext.DpsEnrollmentDevices.ToListAsync(); } return await _dpsDbContext.DpsEnrollmentDevices.Where(s => s.DpsEnrollmentGroupId == dpsEnrollmentGroupId).ToListAsync(); } public async Task<DpsEnrollmentDevice?> GetDpsDeviceAsync(int id) { return await _dpsDbContext.DpsEnrollmentDevices .Include(device => device.DpsEnrollmentGroup) .FirstOrDefaultAsync(d => d.Id == id); } } Download certificates and use The private and the public pem files are used to setup the Azure IoT Hub device and send data from the device to the server. A HTML form is used to download the files. The form sends a post request to the file download API. <form action="/api/FileDownload/DpsDevicePublicKeyPem" method="post"> <input type="hidden" value="@Model.DpsDevice.Id" id="Id" name="Id" /> <button type="submit" style="padding-left0" class="btn btn-link">Download Public PEM</button> </form> The DpsDevicePublicKeyPemAsync method implements the file download. The method gets the data from the database and returns this as pem file. [HttpPost("DpsDevicePublicKeyPem")] public async Task<IActionResult> DpsDevicePublicKeyPemAsync([FromForm] int id) { var cert = await _dpsRegisterDeviceProvider .GetDpsDeviceAsync(id); if (cert == null) throw new ArgumentNullException(nameof(cert)); if (cert.PemPublicKey == null) throw new ArgumentNullException(nameof(cert.PemPublicKey)); return File(Encoding.UTF8.GetBytes(cert.PemPublicKey), "application/octet-stream", $"{cert.Name}-public.pem"); } The device UI displays the data and allows the authenticated user to download the files. The CertificateManager and the Microsoft.Azure.Devices.Client Nuget packages are used to implement the IoT Hub device client. The pem files with the public certificate and the private key can be loaded into a X509Certificate instance. This is then used to send the data using the DeviceAuthenticationWithX509Certificate class. The SendEvent method sends the data using the IoT Hub device Message class. 
var serviceProvider = new ServiceCollection() .AddCertificateManager() .BuildServiceProvider(); var iec = serviceProvider.GetService<ImportExportCertificate>(); #region pem var deviceNamePem = "robot1-feed"; var certPem = File.ReadAllText($"{_pathToCerts}{deviceNamePem}-public.pem"); var eccPem = File.ReadAllText($"{_pathToCerts}{deviceNamePem}-private.pem"); var cert = X509Certificate2.CreateFromPem(certPem, eccPem); // setup deviceCert windows store export var pemDeviceCertPrivate = iec!.PemExportPfxFullCertificate(cert); var certDevice = iec.PemImportCertificate(pemDeviceCertPrivate); #endregion pem var auth = new DeviceAuthenticationWithX509Certificate(deviceNamePem, certDevice); var deviceClient = DeviceClient.Create(iotHubUrl, auth, transportType); if (deviceClient == null) { Console.WriteLine("Failed to create DeviceClient!"); } else { Console.WriteLine("Successfully created DeviceClient!"); SendEvent(deviceClient).Wait(); } Notes Using certificates in .NET and Windows is complicated due to how the private keys are handled and loaded. The private keys need to be exported or imported into the stores etc. This is not an easy API to get working and the docs for this are confusing. This type of device transport and the default setup for the device would need to be adapted for your system. In this example, I used ECDsa certificates but you could also use RSA based keys. The root certificate could be replaced with a CA issued one. I created long-lived certificates because I do not want the devices to stop working in the field. This should be moved to a configuration. A certificate rotation flow would make sense as well. In the follow-up articles, I plan to save the events in hot and cold paths and implement device enable and disable flows. I also plan to write about the device twins. Device twins are an excellent way of sharing data in both directions. Links https://github.com/Azure/azure-iot-sdk-csharp https://github.com/damienbod/AspNetCoreCertificates Creating Certificates for X.509 security in Azure IoT Hub using .NET Core https://learn.microsoft.com/en-us/azure/iot-hub/troubleshoot-error-codes https://stackoverflow.com/questions/52750160/what-is-the-rationale-for-all-the-different-x509keystorageflags/52840537#52840537 https://github.com/dotnet/runtime/issues/19581 https://www.nuget.org/packages/CertificateManager Azure IoT Hub Documentation | Microsoft Learn
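The SendEvent helper called at the end of the device client snippet above is referenced but never shown in the post. A minimal sketch of what it might look like, assuming the Microsoft.Azure.Devices.Client Message type; the payload shape, device id and send interval are illustrative only.

// requires: using System.Text; using System.Text.Json; using Microsoft.Azure.Devices.Client;
// Hypothetical SendEvent implementation; not the author's original method.
private static async Task SendEvent(DeviceClient deviceClient)
{
    for (var i = 0; i < 5; i++)
    {
        // Illustrative telemetry payload for the "robot1-feed" demo device.
        var payload = JsonSerializer.Serialize(new { deviceId = "robot1-feed", temperature = 20 + i });
        using var message = new Message(Encoding.UTF8.GetBytes(payload))
        {
            ContentType = "application/json",
            ContentEncoding = "utf-8"
        };
        await deviceClient.SendEventAsync(message);
        Console.WriteLine($"Sent: {payload}");
        await Task.Delay(TimeSpan.FromSeconds(1));
    }
}

Setting ContentType and ContentEncoding on the Message keeps the body usable for message routing queries on the IoT Hub side.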


Technical Project Manager
Category: Jobs

"IMMEDIATE REQUIREMENT" Please share the suitableprofile to&nbsp;<a href="mailtoelly.jack ...


Views: 0 Likes: 32
Why Software Design and Architecture is very impor ...
Category: Computer Programming

Thorough System Analysis becomes vital t ...


Views: 0 Likes: 31
Senior Software Engineer (2 roles, React OR Pytho ...
Category: Jobs

Hello! My name is Joe Conjerti and I'm the founder of Retain - a human-centric r ...


Views: 0 Likes: 38
Management Structure of the U.S. Bulk Electric System
Management Structure of the U.S. Bulk Electric Sys ...

Simple Thread is a digital product agency with a focus on the electric power industry. The power and electric utility industry is absolutely fascinating in its scale and complexity and we love sharing all of the interesting things we have learned. The topics may vary from facts about the grid, to green energy, energy sustainability, basic electrical engineering, the future of the grid, and everything in between. If you’d like to hear more, keep checking back! ———————————————- The United States power grid is quite possibly the most complex machine ever devised. As we discussed in the post A Tale of Two Grids, the continental US power grid is actually made up of three separate synchronous grids called interconnections. These interconnections are the Eastern Interconnection, the Western Interconnection, and the Texas Interconnection. You can tell that clever names weren’t high on the to-do list. Federal Energy Regulatory Commission These interconnections are massively complex, and managing, operating, and regulating them is a monumental task. At the top of the regulatory pyramid is FERC, the Federal Energy Regulatory Commission. FERC was formed in 1977 as a result of the Department of Energy Organization Act, which abolished the previously created Federal Power Commission (FPC) and transferred its responsibilities to FERC. FERC is the federal agency that regulates the transmission and sale of electricity across state lines. However, its powers extend far beyond the electric grid to regulate other forms of energy such as natural gas and oil. It also has the responsibility of ensuring the reliability and security of the nation’s bulk power system. It does this in part through the oversight and approval of reliability standards for the U.S. bulk electric system (any part of the grid operating at 100kV or higher). North American Electric Reliability Corporation FERC doesn’t actually create these standards though; it does this through a partnership with an international nonprofit called the North American Electric Reliability Corporation (NERC). NERC is the successor to the North American Electric Reliability Council (also NERC), which was a voluntary industry association formed in the aftermath of the Northeast Blackout of 1965. The current NERC was formed out of the Energy Policy Act of 2005 in the aftermath of the 2003 Northeast blackout. The Energy Policy Act of 2005 mandated the creation of an “Electric Reliability Organization” (ERO) within the United States in order to enforce reliability standards on the bulk power grid. In 2006 FERC approved the newly overhauled NERC to be the Electric Reliability Organization (ERO) for the United States. NERC develops and oversees the enforcement of mandatory reliability standards across the United States, Canada, and parts of Mexico. It works with a variety of stakeholders including utility companies and other regulators to establish standards and best practices for operating the grid. It is also responsible for monitoring the grid’s performance, identifying risks, and performing audits to ensure compliance with its standards. Regional Entities You might have noticed that I said “oversees the enforcement of mandatory reliability standards” in the previous paragraph. Yes, NERC doesn’t actually enforce the standards, but instead delegates enforcement of its standards to six Regional Entities (REs).
These Regional Entities are responsible for ensuring compliance with NERC’s mandatory reliability standards within a specific region. They audit, assess, and investigate utilities and other electric grid participants for compliance with those reliability standards. They also work with NERC to develop additional regional reliability standards, if there are needs specific to where they operate. Reliability Coordinators Sitting alongside Regional Entities is another group of organizations known as Reliability Coordinators (RCs). Reliability Coordinators are certified by NERC and are the highest-level organizations responsible for the reliable functioning of the bulk electric system. RCs are primarily responsible for the real-time management of a specific area of the grid. They have a wide-area and real-time view of the grid and they monitor things like grid conditions, generation output, and transmission line statuses. RCs ensure that generation and demand are balanced, and are also responsible for issuing reliability alerts or implementing emergency procedure directives. For example, they might tell a particular utility to reduce load on a particular transmission line in response to a situation on the grid. Reliability Coordinators can be RTOs/ISOs (discussed below), other regional entities such as the Tennessee Valley Authority, or a single utility such as Southern Company. Balancing Authorities Balancing Authorities (BAs) are entities certified by NERC that are responsible for maintaining the balance between electricity supply and demand within a geographical area. There are currently 66 Balancing Authorities within the United States that range from large multi-state areas to small chunks of single states. BAs implement real-time grid operations such as dispatching generation, controlling electrical interchange with neighbors, and frequency regulation. Balancing Authorities can be RTOs/ISOs (discussed below), other regional entities such as the Tennessee Valley Authority, or a single utility such as Southern Company. RTOs and ISOs As if all of this wasn’t complicated enough, there are also nine organizations known as either Regional Transmission Organizations (RTOs) or Independent System Operators (ISOs). These organizations are responsible for the operation, planning, and management of the transmission grid and electricity markets within their respective regions. The main difference between ISOs and RTOs is that ISOs usually operate within a single state, while RTOs are larger regional entities. The naming is less than clear though, since ISO New England (ISO-NE) and the Midcontinent ISO (MISO) are both RTOs. The reason for this is that ISOs were formed in 1996 as part of FERC Orders 888 and 889, while RTOs were not formed until FERC Order 2000 in 1999. MISO was made an ISO in 1998, but later also became the first RTO in 2001. RTOs/ISOs and Regional Entities may seem redundant at first glance, but they serve very different purposes. Regional Entities are responsible for enforcing NERC’s reliability standards, including enforcing those standards against RTOs and ISOs. RTOs and ISOs can be audited by Regional Entities and be penalized for non-compliance. RTOs and ISOs have to implement those standards, but they are more concerned with coordinating, controlling, monitoring, and managing the grid and electricity markets within their region. One other very important distinction is that RTOs and ISOs are voluntary, and not all of the United States is covered by either type of organization.
Parts of the Southeast and Northwest are two of the largest regions that are not covered by ISOs or RTOs. Therefore, utilities within those regions must form agreements with other power companies or ISOs/RTOs they want to interconnect with or to trade power with. RTOs can also be ISOs, and they can also be Regional Entities, Reliability Coordinators, and Balancing Authorities! For example, PJM is the RTO for most of the mid-Atlantic and is also the region’s Regional Entity, Reliability Coordinator, and Balancing Authority. Transmission Operators And finally we get to the companies that do the work of actually transmitting the electricity! We call these Transmission Operators (TOPs). Transmission Operators are responsible for the maintenance, monitoring, and operation of the part of the transmission grid they own. The list of responsibilities these organizations have is a mile long, but at the end of the day they own a piece of the transmission grid and are responsible for it. There are organizations that sit around them that do things like enforce compliance and manage markets, but Transmission Operators are the folks that show up if there is an actual problem with the grid and fix it. They monitor their piece of the grid in real-time and communicate with balancing authorities and regional entities to ensure that everything is running smoothly. A Quick Example This is all a bit complex, so to clarify things a bit, let’s look at an example. Simple Thread is headquartered in Richmond, Virginia on the east coast of the United States, so we are physically located within the Eastern Interconnection, and the hierarchy of entities that oversees this area is as follows: FERC – The federal agency that regulates the transmission and sale of electricity across state lines. Approves and oversees the creation of standards that are created and enforced by NERC. NERC – The nonprofit organization responsible for establishing reliability standards for the North American power grid. SERC – The SERC Reliability Corporation (which originally stood for Southeastern Electric Reliability Council) is the Regional Entity for all of the southeast that NERC has delegated authority to in order to enforce NERC reliability standards. Virginia is split between two Regional Entities: SERC and ReliabilityFirst. PJM – PJM Interconnection Inc. is the RTO that Virginia is located within. The acronym originally stood for Pennsylvania-New Jersey-Maryland, but its footprint is much larger now. PJM oversees a part of the bulk electric system in parts of 13 states and Washington DC that has more than 185 gigawatts of generation capacity. It is also the regional Reliability Coordinator and Balancing Authority for its region. Dominion Energy – The power company which is the local Transmission Operator (TOP), meaning that they actually build, maintain, and operate all of the local pieces of the bulk electric system that is overseen by PJM. Summary The management structure of the Bulk Power System in the United States is a complex system of organizations, each with different responsibilities and authority. I hope that this article gives you a little better insight into the different organizations that exist and what they are responsible for. The post Management Structure of the U.S. Bulk Electric System appeared first on Simple Thread.


Use Azure AD Access Packages to onboard users in an Azure DevOps project
Use Azure AD Access Packages to onboard users in a ...

This post looks at onboarding users into an Azure DevOps team or project using Azure AD access packages. Azure AD access packages are part of Microsoft Entra Identity Governance and provide a good solution for onboarding internal or external users into your tenant with access to the defined resources. Flow for onboarding Azure DevOps members Sometimes we develop large projects with internal and external users who need access to an Azure DevOps project for a fixed length of time, which can be extended if required. These users only need access to the Azure DevOps project and should be automatically removed when the contract or project is completed. Azure AD access packages are a good way to implement this. Use an Azure AD group Access to the Azure DevOps project can be implemented using an Azure AD security group. This security group will be used to add team members for the Azure DevOps project. Azure AD access packages are used to onboard users into the Azure AD group and the Azure DevOps project uses the security group to define the members. The “azure-devops-project-access-packages” security group was created for this. Setup the Azure DevOps A new Azure DevOps project was created for this demo. The project has a URL on the dev.azure.com domain. The Azure DevOps organization needs to be attached to the Azure AD tenant. Only an Azure AD member with the required permissions can add a security group to the Azure DevOps project. My test Azure DevOps project was created with the following URL. You can only access this if you are a member. https://dev.azure.com/azureadgroup-access-packages/use-access-packages The project team can now be onboarded. Create the Azure AD P2 Access packages To create an Azure AD P2 Access package, you can use the Microsoft Entra admin center. The access package can be created in the Entitlement management blade. Add the security group from Azure AD which you use for adding or removing users to the Azure DevOps project. Add the users as members. The users onboarded using the access package are given a lifespan for the access in the tenant, which can be extended or not as needed. The users can be added using an access package link, or you can get an admin to assign users to the package. I created a second access package to assign any users to the package which can then be approved or rejected by the Azure DevOps project manager. The Azure DevOps administrator can approve the access package and the Azure DevOps team member can access the Azure DevOps project using the public URL. The new member is added to the Azure security group using the access package; a small Microsoft Graph sketch for verifying the group membership is shown after the links below. An access package link would look something like this: https://myaccess.microsoft.com/@damienbodsharepoint.onmicrosoft.com#/access-packages/b5ad7ec0-8728-4a18-be5b-9fa24dcfefe3 Links https://learn.microsoft.com/en-us/azure/active-directory/governance/entitlement-management-access-package-create https://learn.microsoft.com/en-us/azure/devops/organizations/accounts/faq-user-and-permissions-management?view=azure-devops#q-why-cant-i-find-members-from-my-connected-azure-ad-even-though-im-the-azure-ad-global-admin https://entra.microsoft.com/
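Since the security group is the single source of truth for the project membership, it can be useful to check who is currently in it. A minimal sketch using the Microsoft Graph SDK could look like this; the group id is a placeholder and the GraphServiceClient is assumed to be configured with a suitable delegated permission such as GroupMember.Read.All:

using Microsoft.Graph;
using Microsoft.Graph.Models;

public class DevOpsGroupMembersService
{
    private readonly GraphServiceClient _graphServiceClient;

    public DevOpsGroupMembersService(GraphServiceClient graphServiceClient)
    {
        _graphServiceClient = graphServiceClient;
    }

    // Lists the display names of the current members of the
    // "azure-devops-project-access-packages" security group.
    // groupId is the object id of that group in your tenant (placeholder).
    public async Task<List<string>> GetMemberNamesAsync(string groupId)
    {
        var members = await _graphServiceClient
            .Groups[groupId]
            .Members
            .GetAsync();

        return members?.Value?
            .OfType<User>()
            .Select(u => u.DisplayName ?? u.UserPrincipalName ?? u.Id!)
            .ToList() ?? new List<string>();
    }
}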


iPad Uses: 20+ Creative Things to Do With Your Apple Tablet!
iPad Uses 20+ Creative Things to Do With Your App ...

This post may contain paid links to my personal recommendations that help to support the site! Do you have an iPad that’s sitting around, and you’ve no idea what to use it for? Or do you feel like you’re missing out on all the amazing possibilities your iPad has to offer? There’s no doubt that iPads have become an integral part of our lives, but have you ever stopped to think about the many ways your Apple device can be used? From productivity to leisure, here are 20+ creative, interesting, and innovative iPad uses that will revolutionize how you use your tablet! Read on for a full list of these ideas! Best Uses for iPad 1. A Second Display If you like working with multiple monitors, you can also turn your iPad into a second display. With the help of Sidecar on Mac, you can easily mirror and extend your main screen to the iPad. If you’re planning to use both the functions of the iPad and your Mac, you can also use Universal Control. It allows you to drag and drop files between the two devices. 2. As a Photo Frame Turn your iPad into a digital photo frame that displays all your best memories! All you need to do is download apps that help you turn your iPad into a digital photo frame, and you’ll be able to easily scroll through all of your photos on the iPad’s display. 3. Learning iPads can also be used for educational purposes too! For example, students can take down notes using an Apple Pencil on their iPads using the built-in Notes app. Additionally, iPads can be used to record lectures and take notes in class. Here are some common apps used by students for taking notes Apple Notes Notability GoodNotes Notion 4. Gaming iPads can be used for gaming, whether you’re an avid gamer or just looking to have some fun. There are tons of games available on the App Store, and many of them are free to download. Plus, most iPads have powerful hardware that can make playing games a great experience. 5. Entertainment The iPad is also a great way to stay entertained while on the go! Whether watching movies or streaming music, your iPad can be a fully-fledged entertainment device. Listen to podcasts and audiobooks, catch up on the latest episodes of your favorite TV shows, and watch movies wherever you are. Plus, with the help of AirPlay or Chromecast, you can cast what’s playing on your iPad to another screen. 6. Video Calling a Loved One We all have called our loved ones before, and we tend to do it on our phones for convenience. However, have you considered using the iPad for video calls instead? This is especially useful when having video calls with a group of people. iPads have larger displays, making it easier to see and talk with multiple people at the same time. Also, you’ll be able to use the iPad’s built-in camera and microphone for high-quality video calls. To start using these calls, you can simply start with the built-in FaceTime app or download third-party apps like WhatsApp and Telegram. Just make sure you have a good internet connection for video calls with no lag or disruption! FaceTime calls have been the best call quality for me when calling my loved ones so far, so if you’re calling an Apple device, you should consider that. iPad Uses for Work For those who are wondering if your iPad can also be put to good use for work, here are some ideas 1. Video Editing If you’re a professional video editor, the iPad is your perfect companion for editing while you’re away from your main Mac or PC workstation. 
Thanks to its powerful hardware and apps likeLumaFusion, you can do some serious work on the go. For more casual video editing, you can try out iMovie for a decent video editing experience. You can edit videos and even add effects with ease using the touchscreen on an iPad. However, one of the limitations you’ll encounter is the lack of ports for drives to handle all your large videos. Not to worry, though! Just make sure you get a USB hub and a reliable solid-state drive to ensure you have enough space for editing, and you’ll do fine! 2. Project Management With project management apps like Trello and Asana, you can easily manage your team’s tasks from anywhere. You can create boards and assign tasks to your team members, add comments, and keep track of progress with just a few taps on the iPad. What I like about these apps is that you’ll get timely notifications for deadlines or changes made to the project. This makes your iPad a very good dashboard to have to monitor work progress. I’d place my iPad next to my desktop to the side and use it to track how work is going along. Since the iPad is also very portable, you can easily pick it up and take it with you to meetings without having to lug around your laptop. 3. Taking Minutes Talking about meetings the iPad is also a handy tool to have for meetings as well. You can use it to take notes and record what’s said in the meeting. Evernote is a great app you can use to easily take down notes while in meetings, and you can also record audio or video of the meeting as well. This makes taking meeting minutes much easier than taking down manual notes on a notepad. If you’re looking for a simpler app, just use the record function within the Apple Notes app to record any discussions. I’m pretty sure you’ll thank me for speeding up your minute-taking workflow for this! iPad Uses for Students And, of course, I won’t be leaving out the students who want to use iPads to help them too. The iPad has become an essential tool for student learning for its portability. 1. Note Taking As mentioned earlier, there are several apps you can use for note-taking on an iPad. Whether you prefer handwritten notes, typed notes, or even audio and video recordings of lectures, the iPad has something that you’ll be able to use for studying. Additionally, with a keyboard case, magic keyboard, or Apple Pencil, the iPad can be used to replace a laptop entirely! The addition of a keyboard can also help you type essays quickly. 2. Working on Math Calculations As a math, science, or engineering student, you’ll most likely require some practice in solving problems. Thanks to the iPad, you can now use note-taking apps to write down and practice working on the problem questions. This makes things much easier, since you won’t require so much physical paper just to work out a question. Moreover, since they are taken down digitally, you’ll be able to refer to them quickly. 3. Art and Design Projects For art and design students, the iPad is a great tool to have for sketching ideas down quickly and for serious project work. With apps like ProCreate and Photoshop Sketch, you can easily draw illustrations or sketches with your Apple Pencil. The app also offers several features, like a wide variety of brushes and mediums that let you easily create stunning digital artwork. Not only that, but it’s also great for sharing your artwork with friends or classmates. To get the best drawing experience, you’ll need a screen cover that can provide more friction to simulate actual drawing. 
The Paperlike iPad Screen Protector is a good option I recommend. Best Uses for iPad Pro The iPad Pro really packs a punch, with the latest models equipped with the latest M2 chip. To make the best use of your iPad pro, I’ve listed some great ideas below. 1. Photo Editing With the iPad Pro, you have the largest iPad screen size in the whole lineup. This can be useful for editing photos. With a powerful processor and beautiful display, you can bring out the best of your images with apps like Lightroom or Photoshop Express. The bright display also lets you edit photos with ease, even when you’re outside working in a dark space. 2. Video Editing As for video editing, you can use apps like iMovie or LumaFusion to edit videos quickly and easily. You’ll be able to work on multiple layers of video, add music and effects, and even export it in 4K resolution. 3. AR/VR Experiences Virtual Reality (VR) and Augmented Reality (AR) experiences can be used for educational or entertainment purposes. The iPad Pro has a powerful processor and graphics card that will make these types of experiences run smoother than ever before. Plus, with the new LiDAR scanner Apple introduced on the 2020 models of iPad Pro, you can experience AR/VR content in a whole new way. You can even use the iPad as a measurement tool using AR through the Measure app. 4. Drawing and Animation Most people know that the iPad Pro is a great tool for note-taking, but they don’t realize that drawing on the iPad Pro is amazing too. With apps like Adobe Fresco, you can create digital art easily with your Apple Pencil. It also features animation tools so you can animate your artwork as well. 5. Data Analysis One lesser-known use for the iPad Pro is for data analysis. With a powerful processor and graphics, you can use the iPad Pro to analyze data quickly. With apps like MATLAB or Microsoft Excel, you’ll be able to work on complex calculations and analytics while on the move. 6. Gaming Making full use of the ProMotion (high-frequency refresh rate) in an iPad Pro, you’ll be able to enjoy your favorite games at a much smoother experience! The iPad Pro can support the latest graphics-heavy game titles that offer great visuals and performance. Moreover, you can even pair an Xbox or PS4 controller with it for a console-like gaming experience. Best Uses for iPad Air Some of the best uses for iPad Air include 1. Reading & Writing The iPad Air’s bright, vibrant display makes it great for reading and writing. You can use apps like Kindle or even comics to read books. You can also write essays or take down notes with the help of a keyboard attachment. 2. Social Media The iPad Air is also great for social media because of its decently sized screen. Enjoying your favorite TikToks on your iPad Air’s 10.9″ display will be incredibly immersive, and you can even comment or like videos with ease. Plus, the iPad Air has great battery life, so you won’t have to worry about your device dying on you during a long scrolling session! 3. Streaming & Media Consumption If you want to watch movies and shows with the best picture quality, an iPad Air is perfect for you. With the latest models of the iPad Air, you can even stream in 2K resolution. Best Uses for iPad Mini The iPad Mini is one of Apple’s unique products, having an 8.3″ screen size that’s perfect for one-handed use. Here are some of the best uses for it 1. Consuming Social Media The iPad mini is great for consuming social media, especially if you have small hands. 
Thanks to its compact size, it can fit perfectly in your hand without feeling too big or heavy. Plus, it has a beautiful display that will make looking at pictures and videos more enjoyable. 2. Reading & Writing The iPad Mini is also great for reading books or writing essays. With its small size and lightweight, it can easily fit into your pocket or bag, allowing you to take it with you wherever you go. Uses for an Old iPad If you have an old iPad just lying around with no use, here are some ways to make the most out of it 1. A Secondary Device The old iPad can be used as a secondary device for when you’re on the go. You can use it to check emails, browse the web, or even watch movies. 2. Digital Picture Frame You can also turn your old iPad into a digital picture frame! I’d recommend creating a slideshow of your favorite pictures taken during your last vacation and having them displayed on your iPad. 3. Gaming Console If you still want to play games on an old iPad, you can use it as a gaming console. You can download classic titles or retro games onto the device and enjoy some nostalgic gaming. How to Use An iPad in Everyday Life Here’s a simple guide I made to help you understand how to use an iPad in everyday life Start by including your iPad in your workflow by using it to take notes while you take calls at work. You can also jot down a quick daily to-do list. If you’re a student, consider using it for studying and recording notes in lectures. Another way to include the iPad in your life is to start journaling on your iPad. This can be done when you pen down your reflections in a journaling app on your tablet at the end of the day. These steps should be a good way to get you started in being a power iPad user. Related Questions Here are some commonly asked questions you might have. I’ll answer them below! What is the purpose of having an iPad? The purpose of having an iPad is to use it as a device for entertainment, productivity, or both. You can stream movies, play games, take notes, keep track of your schedule, and more. It’s also great for taking pictures and videos with its advanced cameras. iPads have excellent battery life, so you won’t have to worry about them dying on you during an important task. What are the main uses of an iPad? The main uses of an iPad are note-taking, reading, media consumption, video editing, and gaming. What is better, an iPad or a laptop? An iPad is the better choice for a lightweight and versatile option, but the laptop is better for typing and heavier computing work. Also, if you need more processing power or want access to traditional desktop PC applications, then a laptop is the way to go. Ultimately, it comes down to what tasks you need in your workflow. Can iPad be used as a laptop? Yes, an iPad can be used as a laptop with the help of a keyboard and an external mouse. The iPadOS operating system supports these accessories, which makes it possible to use the device for more intensive work like typing up documents or coding. Why do I need an iPad if I have a laptop? An iPad can be a great supplement for your laptop if you need to do light work on the go. It’s portable, easy to carry around, and has long battery life. It also offers access to apps that might not be available on laptops which makes it perfect for tasks like drawing or playing games. It also provides an external device dedicated to excellent digital drawing and note-taking. If you already own iOS devices, having an iPad is also great for tight integrations through the Apple Ecosystem. 
Final Thoughts The iPad is a versatile tool that can be used for many different tasks. Whether you use it as a notebook, gaming console, or digital picture frame, you’ll find that this device can make your life more productive and enjoyable. I hope I was able to help you understand how to use an iPad in everyday life and make good use of that iPad that’s just doing nothing but collecting dust in your home. The post iPad Uses 20+ Creative Things to Do With Your Apple Tablet! appeared first on Any Instructor.


Network Security Video
Category: Network

Introduction to Network Security [Video], be aware and protect your company's data. Vid ...


Views: 242 Likes: 87
How to Automate Income for a small Business in 202 ...
Category: Research

Diversifying income streams is a smart strategy for small businesses to reduce risk and explore ...


Views: 0 Likes: 6
Bootstrapping a Digital Product
Bootstrapping a Digital Product

Every Digital Product Starts in the Same Place—With an Idea An idea that sticks in your craw (as we say here in the South), and won’t let go. An idea that strikes in a moment of clarity, or one that slowly takes shape over time. An idea that stands up to endless poking and prodding, questions asked and answered, and examined from a multitude of angles. After years of working in startups and mentoring folks in the startup world, I can tell you that the best ideas are born out of problems. Maybe it’s a small problem in your day-to-day life (i.e., always hunting for your partner’s keys brought us a product like Tile), or a problem that if resolved would greatly impact your company’s efficiency and collaboration (such as cloud-based file-sharing, i.e., Dropbox). Intentionally looking for and digging into the pain points in your daily life, your community, or your industry can be a rich place for digital product ideas. No matter how great you think your idea is, just having an idea isn’t enough to pull-off a product build. You need at least some of the aforementioned people and processes to build a product. And in order to pull together the resources and expertise that you need, you’re going to have to communicate your idea and your plan in a compelling way.   This Isn’t a Guide About How to Hone Your Elevator Pitch to Land Venture Capitalists There’s a lot of ground to cover between initial idea and elevator pitch—practically an entire ecosystem of information, concepts, and strategies such as innovation accounting, cohort analysis, growth engines, actionable metrics, customer validation, validated learning, value propositions, BHAGS, OKRs, MVPs… the list goes on and on and each one has their own line of books, podcasts, articles, blogs, and Twitter experts to sift through. It is overwhelming and the sheer wealth of information could be enough to make you want to hurl your idea into the ocean and run far, far away. But if that idea keeps coming back to you, or never really leaves. If you are convinced there is something there. If you’re ready to take on the monumental task of building a digital product. Then I am here for you. My goal with this article is to help you navigate the startup landscape, avoid analysis paralysis, and give you a framework for forward progress. There are some great concepts and processes out there that you should explore as you cultivate your idea and we’ll pick and choose from some of the best ones so you can keep moving towards bringing your idea to reality. The concept is to build something that you can test, test it, and then learn from those tests to allow you to make decisions on what you need to build next. However, if taken too literally, this process can be extremely expensive, an ineffective use of resources, and could crush your entire endeavor before you really get off the ground. Every Product’s Goal Is to Reach a Solid Product/Market Fit. Your idea might be amazing, and might solve a real problem for you, but it is just a hypothesis. Many entrepreneurs fall in love with their solution, spend a bunch of precious resources designing, building, and releasing a product onto an unsuspecting world and find that it falls flat on its face. Unfortunately this is the fate of most startups. Instead of this big bang approach, you’ll want to take smaller steps, and slowly adjust to reach a good product/market fit. 
It might feel like this process would take longer, especially if you’re convinced that your product is a perfect fit for the market, but it is the constant learning and adjustments that ensure that you don’t move too quickly in the wrong direction. Below is a diagram of what it looks like when you build out a product, but are moving in the wrong direction, and then have to pivot. It can be a very costly mistake as opposed to moving in small and measured increments. The Roadmap As we’ve noted before, there’s a lot of ground to cover on our way to a solid digital product, and many ways to get there. There are a lot of flashy billboards along the way that promise quick results, immediate feedback, and guaranteed success. We’re going to avoid those — they’re tourist traps. But do feel free to take a pit stop if there’s an idea that’s presented in this next section that seems really intriguing to you — a little more exploration in that area may be just what you need to follow through on your journey. If you’re familiar with the concept of “Design Thinking,” then you probably recognized these as roughly the same steps that you’ll see outlined as the phases of design thinking. I am going to walk through these different steps and talk about some of the approaches that I think can help you keep moving through the process.   Find Your Problem As we mentioned earlier, the best ideas for digital products are born out of problems—those pain points in our processes or daily life that repeatedly slow us down or cause irritation. Look around at your own life, your family, your commute, your schedule, your work, your hobbies. Consider your organization, your industry, your conferences, your networking opportunities. Where are the trouble spots? When do nerves get frayed and who does it affect the most? What systems run smoothly while others just can’t get off the ground? Is there a pattern? I really wish I could offer more advice on this topic. But until you’ve found a problem that is urgent, and that truly solves a problem for someone, you should probably stop here. However, I have a strong feeling that if you’re reading this, you are already well aware of a problem that is just dying to be solved. Onward! Be intentional about looking for problems in your day to day life and you’ll likely find plenty of ideas! Define the Problem Maybe this has happened to you You’re pouring out your heart to a close friend or partner about a particularly difficult situation and before you even finish talking, they’re cutting you off so they can tell you what you should do (“What you need is…”, “Let me tell you about the time I…”, “You need to talk to my friend who sells…”) As supportive as they’re trying to be, they’re jumping to offer a quick-fix, uninformed solution when what you really need is someone to just listen and understand. In startups, as in relationships, jumping ahead to the solution without doing the work of listening to the problem is where things can start to veer off course. So this is where we begin the work of defining the problem that you’re trying to solve — and that’s not the same thing as solving the problem! The goal at this point is to start learning. In order to start learning, we have to start with a hypothesis about what problems we are trying to solve. Write down a few of them, and don’t get too hung up on whether they are right or wrong for now. A few examples of the hypotheses could be Finding recipes and making dinner when working parents get home from work is hard. 
It is too challenging for coworkers to share large files with each other in a secure way. It is so hard to regularly keep in touch with your extended network of friends. Remember, we’re not looking for solutions, just the problems. Stay open to possibilities, and as the saying goes… fall in love with the problem, not the solution. If you’re feeling challenged with your problems, or you just want to dig in a little deeper into the topic, here are some helpful tools available that you may want to explore. Lean Canvas does a great job of helping people refine a product’s business model. Check out the “Problems” section as a jumping off point (see below). Five Whys might be a helpful framework for trying to get to the root causes of your problem. Try your hand at writing a formal Problem Statement, and see if that helps you identify and explain your problem. Empathize With Others The next step in this process is to shift our perspective a bit and think about the problem from the viewpoint of possible customers. This could look like a simple list of target customers and users, but I think it’s worth digging a little deeper to form a clearer picture of your users, because from here on out “User Needs” are the source of truth that we’ll come back to over and over throughout our journey. It will serve as the “north star” for our future decision making. To start off, just list out the possible groups of users that you think might want to use your product. You can call these “customer segments” or “user archetypes” or whatever you want, but the point is to get to a list like this Lawyers in large natural gas companies Owners of pet grooming businesses Fraud analysts working in medium-sized financial services companies You should try to be as specific as possible with these customers. Having a list of the top four or five is usually good, because you can only focus on so many people’s needs. You’ll want users that are specific enough that you can imagine a person that would fit into those roles. As you proceed with the process, it can be incredibly useful to actually think of imaginary people that represent these roles. People in the UX industry call these Personas. Personas are a tool for defining the major groups of users for your product, and allow you to design your product with a particular set of users and needs in mind, which is incredibly valuable. Being able to design some functionality and then test that functionality against the needs of your personas allows you to quickly gut-check your designs. However, creating true personas requires finding potential users and interviewing those users, which can be time and resource-intensive. If you are able to do that, then awesome, go do it! But, to quickly bootstrap this process, Jeff Gothelf coined the term “Proto-Personas,” which is simply the process of creating hypothetical personas that you can use to launch your process and then refine later. There are thousands of blog posts, training courses, and books on creating personas, but here are some basic attributes, and examples, to consider as you get started Don’t get too hung up on figuring out the “right way” to put this initial version together. The value here is in thinking deeply about the types of customers who could be using your application, and why they would get value out of it. Defining these proto-personas will allow you to dig deeper and really start to think about whether the problems you came up with in the previous step really were the set of problems you’re setting out to solve. 
If, after you’ve defined your proto-personas, you’re still confident in the problems you defined, then feel free to continue on—otherwise go back and spend some more time with them until you feel good about them. If you feel like you’re completely making things up, and you don’t feel good about your proto-personas at all, then maybe it is time to get out and go talk to some folks that might be users of your product. The value here is in thinking deeply about the types of customers who could be using your application, and why they would get value out of it. Generate Solutions Time to dream. Congratulations! You’ve finally made it to the part where you can begin to think about possible solutions to your problem. This is usually where most entrepreneurs start, so you can pat yourself on the back for forcing yourself to back up a bit and really start at the beginning. So let’s dig in What do you think it would take to solve the problems that you’ve defined? At this point, we’re still keeping it very high-level—we’re zoomed out, at 30,000 feet, and thinking big picture. We’ll break it down later on. For instance, if your problem was “Sharing large files is hard,” then your solution could be “Software to allow files to be sent to friends with one click.” Be sure to come up with a high-level solution for each part of the problem you defined earlier. Once you have your list of solutions, quickly check that list against your proto-personas. Do the solutions you’re thinking of meet the needs of the proto-personas you defined? If not, then think about whether you need to go back and define your problem even more, or refine what you already have. Let’s Be Honest, Did You Have a Solution in Mind When You Started? Is that the same solution as what you just wrote down? Do you feel like you refined your solution at all by thinking about your possible customers? If not, then there are two possibilities here Your solution is amazing, you’re perfectly on target. You’re really caught up on your solution, and it is clouding how you think about your problems and your customers. I know which one I think is most likely. But in either case, it is always a good idea to go out and find people who might be your customers and talk to them. Try not to ask leading questions, try not to suggest the solutions you already have in your head. Just ask them what their problems are. If you can’t get them to talk about their problems, then tell them your problems. Ask them if the problems you’re describing are actually a problem for them. Usually this will get them talking. Only once you’ve really exhausted the discussion around the problems they are facing should you broach the idea of your solution. Just remember, everyone is going to tell you that your idea is good. No one likes to shoot down someone else’s idea. Genuinely ask them how your proposed solution might help them. Try to get them to be specific. Tell them it won’t hurt your feelings if they don’t like your proposed solution (even if it is a bit of a fib). Ask them if they will pay money for it. Do whatever you can to elicit an honest response from them. Turning Solutions Into a Product If you’re feeling pretty good about the problems you’ve defined, your set of customers, and about your proposed solutions, then you’re probably ready to move forward. Our next step is to take those high level solutions and break them down into high level features. Sometimes a helpful tool here is what design folks call a user flow. 
At its most basic level, a user flow is the series of steps a person must perform in order to accomplish a task. Going through this process step-by-step for each of our high-level solutions is how we come to understand what we really want to build—it’s all about learning and understanding. A small example user flow for a food delivery app You don’t necessarily have to create a graphical representation of a user flow. You can either sketch them out on paper, or you can just start by creating a list. The point is to start thinking about how a person might accomplish one of their goals within your solution. As soon as you create one of these flows, you can start to think about what are the individual features that you’ll have to build to let a user complete the flow. Try to capture the high level features that you think your customers will really care about. You can spend a ton of time here, and you can build out a feature list that is a mile long, but that usually isn’t very helpful at this stage. If you do end up with a very long feature list, then you’ll need to think a bit about how to whittle your list down. The features you define will look something like this Search a list of restaurants View a restaurant’s menu Add an item to your cart from the restaurant’s menu Refine Your Solutions One of the easiest tools that I have found for whittling down an initial feature-set is to plot all of your features based on impact vs cost. This is also a great conversation to have with people in your life who may be possible users (family, friends, co-workers), because the primary value here is the conversations about the impact of different features, and the associated difficulty with implementing them. And in order to get the most value out of this process, it’s important to talk with an experienced software engineer who can help you estimate the effort of implementing the features you’ve come up with. Clearly, the best possible set of features is a grouping that is almost exclusively high impact and low cost — you definitely want the most bang for your buck. However, it’s incredibly rare to put together an initial prototype from all high impact/low cost features. You can expect that you’ll need to include a few high impact/high cost features as well as those wonderful high impact/low cost features. But don’t get too worried if you have a lot of high impact/high cost features, just because something is high cost doesn’t mean that you have to implement it right out of the gate. You can always use manual methods to fake features until you’re ready to invest more effort. People often refer to these as a “Wizard of Oz” Prototype. While maximum effectiveness at minimum cost is important when it comes to narrowing down your feature list, you don’t want to lose sight of the “complete experience.” Take a look at this now famous drawing by Henrik Kniberg The idea here is to build a small but useful product, and then keep making it larger and more feature-filled until you have a product that your customers love. You can’t just build an isolated feature and continue to add other fully fleshed-out features until your product finally works. You won’t learn anything or gain any customers along the way. As an example, pretend you’re building a file sharing product. You couldn’t just build out the file uploading piece without building a sharing mechanism, the users wouldn’t get any value from this and so it wouldn’t provide you with any learning. 
In order to stress this point, some people in the industry have started using the phrase Minimum Loveable Product instead of Minimum Viable Product to bring focus to the idea that you need to build something that a small number of users really love, rather than setting out to build something that a large number of users merely like. Trying to build out a series of features to tackle a really big market might seem like a great idea, but you’re almost always better off tackling a smaller market and then moving upstream. Prototype Have we finally traveled far enough on this journey that it’s time to jump out and go build something? Well, no. This happens far too often, especially from startup founders who are software engineers. I’ve been there. Engineers want to build things, and they see building something as a way to test out an idea. If you’re an engineer, and you just want to build something, then by all means go and build it. Just don’t pretend that it is always the fastest and easiest way to test and iterate on your idea. For many types of digital startup products, prototypes are the way to go. You may have heard the terms wireframe, mockup, prototype, etc… and the differences can be a bit confusing. They are all visual representations of your future product, just at different levels of fidelity. There isn’t a ton of agreement, and so you’ll sometimes hear people use these terms interchangeably, particularly mockup and prototype. I like to think of a wireframe as a very low-fidelity view, a mockup as a static high-fidelity view, and a prototype as an interactive version of a mockup. There are a lot of amazing tools out there to help you build out your prototype (we particularly enjoy Figma), and if you have the skills to use those tools, you should absolutely pursue that. However, for most people just getting started, it isn’t realistic to go learn one of these tools just to start this process. Instead just grab a whiteboard, or a piece of paper and a pen! What’s the most important thing to get out of a prototype? Yep, you guessed it…learning! A hand-drawn prototype might not be as slick as a digital prototype, but if you’re able to create something that will let you and your prospective customers imagine what it would be like to use your product, then you can learn quite a bit! While it is true that digital prototypes allow you to iterate faster and help your users feel like they are using a real product, don’t get stuck feeling like you absolutely have to create one. Just remember to start small. I can’t say enough how important it is to start simple and slowly ramp things up. Don’t go big at the beginning and build out a large and detailed prototype before you start putting it in front of people. If you do this you’ll fall into the sunk cost trap, and once you get attached to it, you won’t be able to throw it away…even if your users are telling you it isn’t meeting their needs. Test, Then Iterate Start small, start simple. Create a quick and dirty prototype, show it to people, do some learning, and then iterate on it. Keep doing this until you feel good about your prototype. When building out your prototype, you’ll probably get to a point pretty quickly where you’ll want to show it to someone. And please do! Show it to someone as early as possible. Don’t let perfect be the enemy of good! If your instinct is to hide it away until you’re “done,” you’re going to have to force yourself to overcome this. 
This is the fear of ridicule or failure biting at you, and if you want to launch a product, you’re going to have to find a way to start referring to failure as learning, because it really is the same thing. When you’re asking for feedback on your prototype from potential customers, be sure you’re focused on doing real learning and not just looking for affirmation or reinforcement of your own assumptions. As I mentioned earlier, most people won’t want to offend when giving feedback, so you might not hear that your idea is bad or your product is unusable. Be willing to read between the lines of their feedback to get to their true feelings. Ask open-ended questions that allow your customer to consider the product from their own perspective, and not through your own, highly invested lens. Allow your potential customer to work their own way through your prototype without any of your input and make sure you’re asking questions to try to prove your assumptions wrong. You want your product to hold up to scrutiny, so as uncomfortable as it may be to put your baby out there, know that this process is meant for good—either your product will hold up and flourish, or you’ll know that it’s not ready yet and you can walk it back and iterate again. One critical piece of information to gain from this part of the process is whether people will pay for your product. As soon as you’ve refined your prototype to the point that it is starting to feel real, ask people to put down real money in order to be a customer when it launches. Lots of people will tell you your idea is great, and that they love it. But when you ask for money you’ll quickly start to hear their objections. But that’s not failure—it’s learning! If people are finding reasons not to pay, find out what those are, and iterate again. Even an objection as innocuous as “our company policy won’t let us pay for products ahead of time,” can be a learning opportunity. If your product can be the solution to a painful problem they are having, they are probably going to be brainstorming ways to get around company policies, rather than using them as an excuse to end the conversation. So now you’ve put your prototype in front of a few people, hopefully potential customers, and you’ve started to get feedback. Based on that feedback how does it affect your initial assumptions? Does it change how you think about your users? Does it change the problems you’re trying to solve? Does it change the solutions you’re proposing? If you still feel like you’re headed in the right direction on the items listed above, then maybe it just changes the implementation of your solution? If so, go back and update your prototype and check in with the same folks again. After a few rounds of this you can expand and iterate on your prototype until you’re in a spot where you feel like you have a prototype of an MVP that you can get excited about! Maybe it doesn’t have all of the features you want, maybe it feels a little basic, but that can be a good thing. As long as you’re excited, and your potential customers are excited, you’re probably in a good spot to move on to the next step. What’s Next? There are a few directions you can go from here Build it The first, and most obvious option would be to start building out your MVP. If you’re able to build it yourself, or you can get someone to build it for you, then this might make sense. Just be realistic about what you can accomplish, and aim for getting something minimal to market that people will love. 
And try to do that in a really short timeframe. Raise money Friends or family that are willing to chip in is how many startups get their initial funding. Others will find investors who are willing to invest in their idea, but this is much rarer unless you have a ton of specialized expertise, relationships, experience, or are a smooth talker. Don’t build it, fake it! Some startups are ripe for what we referred to earlier as a “Wizard of Oz” experiment or what people call a “Concierge MVP.” If you start small, you might be able to manually make the whole thing work until you have some customers under your belt. For certain types of ideas, you can get pretty far with a Google Form wired up to Zapier. Once you have some learnings and a few paying customers, finding funding can be a much easier prospect. Just remember, once you start building, fundraising, or experimenting you’ll still need to pay careful attention to the learning that occurs along the way. Don’t get so wrapped up in the direction you’re heading that you ignore what investors or customers are telling you. You need to continue to focus on the build, measure, learn cycle and keep prototyping, testing, and iterating as you go. Great Software Is About People, Not Code Thank you for taking the time to read through this guide. We hope that you took something away from it that can help you in your product journey. For us, building digital products is about understanding the people involved, what they need, and what drives them. If you always keep your focus on that, you’ll go far. If you ever need help, feel free to reach out, we would love to hear from you. The post Bootstrapping a Digital Product appeared first on Simple Thread.


Reset passwords in ASP.NET Core using delegated permissions and Microsoft Graph
Reset passwords in ASP.NET Core using delegated pe ...

This article shows how an administrator can reset passwords for local members of an Azure AD tenant using Microsoft Graph and delegated permissions. An ASP.NET Core application is used to implement the Azure AD client and the Graph client services. Code: https://github.com/damienbod/azuerad-reset Setup Azure App registration The Azure App registration is set up to authenticate with a user and an application (delegated flow). An App registration “Web” setup is used. Only delegated permissions are used in this setup. This implements an OpenID Connect code flow with PKCE and a confidential client. A secret or a certificate is required for this flow. The following delegated Graph permissions are used: Directory.AccessAsUser.All, User.ReadWrite.All and UserAuthenticationMethod.ReadWrite.All. ASP.NET Core setup The ASP.NET Core application implements the Azure AD client using the Microsoft.Identity.Web Nuget package and libraries. The following packages are used: Microsoft.Identity.Web, Microsoft.Identity.Web.UI, and Microsoft.Identity.Web.GraphServiceClient (SDK 5) or Microsoft.Identity.Web.MicrosoftGraph (SDK 4). Microsoft Graph is not added directly because Microsoft.Identity.Web.MicrosoftGraph or Microsoft.Identity.Web.GraphServiceClient adds it with a tested and working version. Microsoft.Identity.Web uses the Microsoft.Identity.Web.GraphServiceClient package for Graph SDK 5. Microsoft.Identity.Web.MicrosoftGraph uses Microsoft.Graph 4.x versions. The official Microsoft Graph documentation is already updated to SDK 5. For application permissions, Microsoft Graph SDK 5 can be used with Azure.Identity.
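The service registration itself is not shown in the post; a minimal Program.cs sketch, assuming an "AzureAd" configuration section for the app registration and a "GraphApi" section for the Graph options (both names are illustrative), could look like this:

using Microsoft.Identity.Web;
using Microsoft.Identity.Web.UI;

var builder = WebApplication.CreateBuilder(args);

// Delegated scopes requested when acquiring tokens for Microsoft Graph.
var initialScopes = new[]
{
    "User.ReadWrite.All",
    "Directory.AccessAsUser.All",
    "UserAuthenticationMethod.ReadWrite.All"
};

builder.Services.AddMicrosoftIdentityWebAppAuthentication(builder.Configuration, "AzureAd")
    .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
    .AddMicrosoftGraph(builder.Configuration.GetSection("GraphApi"))
    .AddInMemoryTokenCaches();

builder.Services.AddRazorPages()
    .AddMicrosoftIdentityUI();

var app = builder.Build();

app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseRouting();
app.UseAuthentication();
app.UseAuthorization();
app.MapRazorPages();

app.Run();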
Search for users with Graph SDK 5 and resetting the password

Graph SDK 5 can be used to search for users with a delegated scope and then reset the password using an HTTP PATCH request. The Graph QueryParameters are used to find the user and the PATCH updates the password using a PasswordProfile.

using System.Security.Cryptography;
using Microsoft.Graph;
using Microsoft.Graph.Models;

namespace AzureAdPasswordReset;

public class UserResetPasswordDelegatedGraphSDK5
{
    private readonly GraphServiceClient _graphServiceClient;

    public UserResetPasswordDelegatedGraphSDK5(GraphServiceClient graphServiceClient)
    {
        _graphServiceClient = graphServiceClient;
    }

    /// <summary>
    /// Directory.AccessAsUser.All
    /// User.ReadWrite.All
    /// UserAuthenticationMethod.ReadWrite.All
    /// </summary>
    public async Task<(string? Upn, string? Password)> ResetPassword(string oid)
    {
        // Create a new random password for the user
        var password = GetRandomString();

        var user = await _graphServiceClient
            .Users[oid]
            .GetAsync();

        if (user == null)
        {
            throw new ArgumentNullException(nameof(oid));
        }

        // Update the password and force a change at the next sign-in
        await _graphServiceClient.Users[oid].PatchAsync(new User
        {
            PasswordProfile = new PasswordProfile
            {
                Password = password,
                ForceChangePasswordNextSignIn = true
            }
        });

        return (user.UserPrincipalName, password);
    }

    public async Task<UserCollectionResponse?> FindUsers(string search)
    {
        var result = await _graphServiceClient.Users.GetAsync((requestConfiguration) =>
        {
            requestConfiguration.QueryParameters.Top = 10;
            if (!string.IsNullOrEmpty(search))
            {
                requestConfiguration.QueryParameters.Search = $"\"displayName:{search}\"";
            }
            requestConfiguration.QueryParameters.Orderby = new string[] { "displayName" };
            requestConfiguration.QueryParameters.Count = true;
            requestConfiguration.QueryParameters.Select = new string[]
                { "id", "displayName", "userPrincipalName", "userType" };
            requestConfiguration.QueryParameters.Filter = "userType eq 'Member'"; // onPremisesSyncEnabled eq false
            // Advanced queries ($count, $search) require the eventual consistency level
            requestConfiguration.Headers.Add("ConsistencyLevel", "eventual");
        });

        return result;
    }

    private static string GetRandomString()
    {
        var random = $"{GenerateRandom()}{GenerateRandom()}{GenerateRandom()}{GenerateRandom()}-AC";
        return random;
    }

    private static int GenerateRandom()
    {
        return RandomNumberGenerator.GetInt32(100000000, int.MaxValue);
    }
}

Search for users SDK 4

The application allows the user administrator to search for members of the Azure AD tenant and finds users using a select and a filter definition. The Graph search query parameter would probably give a better user experience.

public async Task<IGraphServiceUsersCollectionPage?> FindUsers(string search)
{
    var users = await _graphServiceClient.Users.Request()
        .Filter($"startswith(displayName,'{search}') AND userType eq 'Member'")
        .Select(u => new
        {
            u.Id,
            u.GivenName,
            u.Surname,
            u.DisplayName,
            u.Mail,
            u.EmployeeId,
            u.EmployeeType,
            u.BusinessPhones,
            u.MobilePhone,
            u.AccountEnabled,
            u.UserPrincipalName
        })
        .GetAsync();

    return users;
}

The ASP.NET Core Razor Page supports autocomplete using the OnGetAutoCompleteSuggest method, which returns the found results from the Graph request.

private readonly UserResetPasswordDelegatedGraphSDK4 _graphUsers;

public string? SearchText { get; set; }

public IndexModel(UserResetPasswordDelegatedGraphSDK4 graphUsers)
{
    _graphUsers = graphUsers;
}

public async Task<ActionResult> OnGetAutoCompleteSuggest(string term)
{
    if (term == "*")
        term = string.Empty;

    var usersCollectionResponse = await _graphUsers.FindUsers(term);
    var users = usersCollectionResponse!.ToList();

    // Return only the data needed by the autocomplete UI
    var usersDisplay = users.Select(user => new
    {
        user.Id,
        user.UserPrincipalName,
        user.DisplayName
    });

    SearchText = term;
    return new JsonResult(usersDisplay);
}

The Razor Page can be implemented using Bootstrap or whatever CSS framework you prefer.

Reset the password for user X using Graph SDK 4

The Graph service supports resetting a password using a delegated permission. The user is requested using the OID, and a new PasswordProfile is created that updates the password and forces a password change at the next sign-in.

/// <summary>
/// Directory.AccessAsUser.All
/// User.ReadWrite.All
/// UserAuthenticationMethod.ReadWrite.All
/// </summary>
public async Task<(string? Upn, string? Password)> ResetPassword(string oid)
{
    var password = GetRandomString();

    var user = await _graphServiceClient.Users[oid]
        .Request().GetAsync();

    if (user == null)
    {
        throw new ArgumentNullException(nameof(oid));
    }

    await _graphServiceClient.Users[oid].Request()
        .UpdateAsync(new User
        {
            PasswordProfile = new PasswordProfile
            {
                Password = password,
                ForceChangePasswordNextSignIn = true
            }
        });

    return (user.UserPrincipalName, password);
}

The Razor Page sends a POST request and resets the password for the selected user.

public async Task<IActionResult> OnPostAsync()
{
    var id = Request.Form
        .FirstOrDefault(u => u.Key == "userId")
        .Value.FirstOrDefault();

    var upn = Request.Form
        .FirstOrDefault(u => u.Key == "userPrincipalName")
        .Value.FirstOrDefault();

    if (!string.IsNullOrEmpty(id))
    {
        var result = await _graphUsers.ResetPassword(id);
        Upn = result.Upn;
        Password = result.Password;
        return Page();
    }

    return Page();
}

Running the application

When the application is started, a user's password can be reset and updated. It is important to block this function for non-authorized users, because otherwise any account could be reset without further protection. You could protect the application with PIM and an Azure AD security group, or something similar.
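As an illustration of that point, one possible way to restrict the reset page with an ASP.NET Core authorization policy is sketched below. The policy name, page path, and group object ID are placeholders, and the sketch assumes the app registration is configured to emit group claims in the ID token; the repository itself may protect the page differently. These calls would extend the Program.cs setup sketched earlier.

// Sketch only: restrict the password reset page to members of a specific
// Azure AD security group (placeholder object ID), assuming group claims
// are included in the token.
builder.Services.AddAuthorization(options =>
{
    options.AddPolicy("PasswordResetAdmins", policy =>
        policy.RequireAuthenticatedUser()
              .RequireClaim("groups", "00000000-0000-0000-0000-000000000000"));
});

builder.Services.AddRazorPages(options =>
{
    // Only users satisfying the policy can open the reset page
    options.Conventions.AuthorizePage("/Index", "PasswordResetAdmins");
});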
Notes

Using Graph SDK 4 is hard, as almost no documentation for it still exists; Graph has moved to version 5. Microsoft Graph SDK 5 has many breaking changes and is supported by Microsoft.Identity.Web through the Microsoft.Identity.Web.GraphServiceClient package. High user permissions are used here, so it is important to protect this functionality and restrict the users that can use the application.

Links

https://aka.ms/mysecurityinfo
https://learn.microsoft.com/en-us/graph/api/overview?view=graph-rest-1.0
https://learn.microsoft.com/en-us/graph/sdks/paging?tabs=csharp
https://github.com/AzureAD/microsoft-identity-web/blob/jmprieur/Graph5/src/Microsoft.Identity.Web.GraphServiceClient/Readme.md
https://learn.microsoft.com/en-us/graph/api/authenticationmethod-resetpassword?view=graph-rest-1.0&tabs=csharp


How to send JSON data in HTTP Request Body to a do ...
Category: Databases

Knowing how to send raw JSON data to an API End-Point is very important for a Junior Software Dev ...


Views: 1409 Likes: 95

