Rachel Wright


Rethink what you think you know about .Net

Published: Thu 31 March 2022
By Rachel Wright

In General.

AWS recently announced support for .Net 6 as a native runtime for Lambda functions (previously, only .Net Core 3.1 was supported). This is exciting because .Net 6 and C# 10 together bring some pretty cool changes that might make you rethink what you know about .Net. Let's look at just a few that I think are particularly intriguing.

Implicit and Global Usings

You're probably familiar with C# or TypeScript files that start with dozens of using lines. Not anymore - C# 10 introduces global usings, and the .Net 6 SDK adds implicit usings. Used together, these two features let you move all of your using statements into a single, global usings file. In fact, with implicit usings enabled, the common System namespaces are imported automatically, so you only need to list the libraries you've added beyond the template. Even your GlobalUsings file can stay pretty tidy.
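Here's a minimal sketch of what that can look like (the file name is a convention, and the non-System namespaces are just illustrative):

```csharp
// GlobalUsings.cs - the file name is a convention, not a requirement; any .cs
// file in the project can hold global usings.
// With <ImplicitUsings>enable</ImplicitUsings> set in the .csproj, the common
// System namespaces are already in scope everywhere, so only the extras you've
// added beyond the template need to be listed here.
global using System.Text.Json;
global using Microsoft.Extensions.Logging;
```

Every file in the project can now use these namespaces without any using lines of its own.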

Top-Level Statements and the Minimal Hosting Model

Previous versions of C# required a surprising amount of boilerplate for even the simplest program. C# 9 introduced support for top-level statements, so something like this:

using System;
class Program
{
    static void Main()
    {
        Console.WriteLine("Hello World!");
    }
}

Can now be written like this:

Console.WriteLine("Hello, World!");

That's right - a one-line Program.cs file. No Main method required! Top-level statements will definitely simplify your code, especially if you also take advantage of .Net 6's new minimal hosting model. In the past, setting up the host builder and configuring services required separate classes and methods. Now, these steps can all be done in a single file. Below is what the web API template generated before:

    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                });
    }    
    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddControllers();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            app.UseHttpsRedirection();

            app.UseRouting();

            app.UseAuthorization();

            app.UseEndpoints(endpoints =>
            {
                endpoints.MapControllers();
            });
        }
    }

Oof. That's a lot of boilerplate code, and I haven't even added any routes or services. With top-level statements and the minimal hosting model, this empty web API project now becomes this:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
var app = builder.Build();

app.UseHttpsRedirection();
app.UseAuthorization();
app.MapControllers();

app.Run();

Now, if we could just get rid of those semicolons...

But Wait, There's More!

Actually, there's a lot more - .Net 6 boasts some significant performance improvements, and combined with C# 10 it brings a whole range of language improvements and better cross-platform support. I'm still digging through the changes and looking forward to reducing the bloat in my .Net projects, and I'll write about other interesting features as I find them. I personally love top-level statements and the minimal hosting model because I like concise, easy-to-read code that doesn't force me to scroll all over the place to figure out what's going on.
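As a small taste, here's a quick sketch of two of those quality-of-life changes: C# 10's constant interpolated strings and .Net 6's new ArgumentNullException.ThrowIfNull guard helper (the names Banner and Square are mine, just for illustration):

```csharp
using System;

// C# 10: an interpolated string is now a valid constant, as long as every
// hole is itself a constant string.
const string Product = ".Net";
const string Version = "6";
const string Banner = $"{Product} {Version}";

Console.WriteLine(Banner);

// .Net 6: a one-line null guard that replaces the old
// "if (x is null) throw new ArgumentNullException(nameof(x));" pattern.
static int Square(int? value)
{
    ArgumentNullException.ThrowIfNull(value);
    return value.Value * value.Value;
}

Console.WriteLine(Square(4)); // 16
```

Small things, but they add up across a codebase.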

References

To learn more about .Net 6 and C# 10, check out these posts:

.Net 6 Release announcement

C# 10 Release announcement

The 5 Rs of "as Code"

Published: Mon 03 June 2019
By Rachel Wright

In General.

Everything old is new again

A buzzword you may have heard is "as code" -- things like Infrastructure-as-Code, Configuration-as-Code, Network-as-Code. What does it mean, and why should you care? The "as code" movement is a shift away from executing manual setup steps and toward writing code that performs all the steps automatically. In a way, it's a return to the days before wizards and graphical user interfaces (GUIs), when doing anything meant issuing text-based commands. Wizards and GUIs simplified individual setup processes, but the steps remained essentially manual.

That was fine when an organization bought new hardware or software relatively infrequently, but one of the many benefits of virtualization (whether in the cloud or on premises) is that dropping and recreating resources is easy, so hardware and software setup now happens much more frequently. In this new paradigm, there are five benefits to converting these processes to code.

  • Reliable: When a process is written, stored, and executed as code, there is no risk of an operator accidentally skipping a step, or mistyping a command. If an error occurs, the code can be corrected once, and that error won't recur.
  • Repeatable: No matter how often the process runs, it will always do the same thing.
  • Regularized: A coded process can be designed to comply with internal or external standards and can ensure consistency of outcomes.
  • Recorded: The coded process is self-documenting, because it is documented before it happens. The documentation doesn't depend on an operator transcribing their actions.
  • Ready to Go: In what is probably the biggest benefit to practitioners, coded processes are ready to go. An operator needs simply to get the code (ideally, from a version control repository), and execute.

Though the rise of DevOps may have made it more visible, "as code" isn't really new. There is architecture as code, in the form of template projects and scaffolding tools that set up baseline application development environments. Major cloud service providers (CSPs) offer infrastructure-as-code templates that can be used as-is or customized. Numerous software tools store configuration as code, making it much simpler to provide a consistent experience across an organization.
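To make that concrete, here's a hypothetical infrastructure-as-code sketch using the AWS CDK's C# bindings (the stack and bucket names are invented for illustration; the types come from the Amazon.CDK NuGet packages):

```csharp
using Amazon.CDK;
using Amazon.CDK.AWS.S3;

// Declaring the resource in code makes the setup reliable, repeatable,
// regularized, recorded, and ready to go - the five Rs above.
var app = new App();
var stack = new Stack(app, "ExampleStack");

// A versioned, encrypted S3 bucket, defined once and provisioned identically
// every time the stack is deployed.
new Bucket(stack, "ExampleBucket", new BucketProps
{
    Versioned = true,
    Encryption = BucketEncryption.S3_MANAGED
});

app.Synth();
```

Check this into version control, and anyone on the team can stand up the exact same infrastructure with a single deploy command.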

Technical Debt

Published: Mon 05 November 2018
By Rachel Wright

In General.

In software development, the term “technical debt” (as defined by the Software Engineering Institute at Carnegie Mellon University) “conceptualizes the tradeoff between the short-term benefit of rapid delivery and long-term value.” Technical debt is a consequence of balancing rapid software delivery against code that conforms to all best practices. The phrase was coined in the 1990s and long remained a developer-centric concept, but technical debt and its impacts have become important for even non-developers to understand.

Although it may have negative connotations, not all debt is bad debt. Technical debt can be incurred deliberately or inadvertently, and if it accumulates, it can negatively affect your ability to operate and maintain your code. And just like any other kind of debt, you can't make it go away by ignoring it—you need a plan to handle it.

In the Agile software delivery model, which emphasizes delivering the minimum viable product (MVP) as quickly as possible, development teams often deliberately incur technical debt. A simple example is a drop-down list with hard-coded values in the MVP release; the technical debt incurred is the work it will take to go back in later iterations and make that list editable. In this example, the trade-off is a relatively simple decision, as it is unlikely that architectural changes will be required to support the future change. The decisions will not always be so obvious, which is why having a plan matters. The best way to manage technical debt is to track it the same way you track all other development work: when a trade-off decision is made, add the related technical debt to your Agile work tracking system (such as Visual Studio Team Services or Jira) and then manage, prioritize, estimate, and plan it just like any other work item or feature.
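The drop-down example might look something like this in practice (a minimal sketch; the list name and values are invented):

```csharp
using System;
using System.Collections.Generic;

// MVP version: the drop-down values ship hard-coded.
// TODO (tracked as a backlog work item): load these values from a data store
// so the list becomes editable in a later iteration - that TODO is the
// deliberately incurred technical debt.
var statusOptions = new List<string> { "New", "Active", "Closed" };

foreach (var option in statusOptions)
    Console.WriteLine(option);
```

The point is not the code itself but the tracked work item behind the TODO: the debt is visible, estimated, and scheduled rather than forgotten.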

Unintentional technical debt is incurred when the organization attempts to build an MVP on a weak architecture, usually as the result of inadequate or incomplete design. In this situation, the MVP may still be successful, but the product will reach a point where adding new features requires a significant investment in refactoring the architecture. This is the “Goldilocks” of software design—you want an architecture that is sufficient to support future growth, but not overdesigned and difficult to work with. The best way to achieve this is to thoroughly understand the business domain before you start designing: the better you understand the end user's needs, the less likely you are to be surprised by “unexpected” requirements.

Occasionally, you may hear “technical debt” used to describe vast quantities of legacy, ill-conceived, or poorly written code. This is a misnomer. Technical debt is not an excuse for sloppy code; it refers to code and design that are sound but may require some level of refactoring to meet the ongoing needs of the project. Code that is buggy, poorly structured, or poorly written is not technical debt—there is an appropriately different term for that: “bad code.”

Data Visualization

Published: Sat 27 October 2018
By Rachel Wright

In General.

Data is exploding at an incredible rate. According to Forbes, 2.5 quintillion bytes of data are created each day at our current pace, and that pace is only accelerating. To put that into perspective, if you laid 2.5 quintillion pennies out flat, they would cover the Earth five times. This creates the significant challenge of making sense of such large datasets. We’ve all heard of data scientists, who use mathematical, statistical, and computational methods to process and analyze this massive amount of data. However, an equally important “last mile” skill is creating visualizations that communicate the insights data scientists help unearth. Producing good data visualizations does not require sophisticated mathematical skills, but rather a solid understanding of your data.

What makes a good data visualization?

Like good design, you know it when you see it. And like good design, good data visualization doesn’t just happen – it requires thought and planning, and knowledge of your data and the information it represents can’t be replaced by purely mathematical skill. Understanding the underlying subject matter is key to creating meaningful visualizations – visualizations that tell a story. Fortunately, modern tools democratize the process of creating reports: building visualizations in Excel, Qlik, or Tableau is not technically difficult, so subject matter experts are less beholden to report developers for new visualizations. So have fun and try some visualizations for your next report or presentation! Here are some things to keep in mind to take your visualizations to the next level:

  • What do you want to communicate? Before creating a visualization, have a specific message or idea that you want to get across. Then, once you create a graphic, go back and ask, “Is that clearly communicated?”
  • Think about the medium and the audience. Is this visualization for a presentation or for self-paced consumption? How quickly does the visualization need to communicate its message? If viewers need to absorb a message quickly, it may be necessary to use multiple graphics. What is the audience’s level of familiarity with the information represented by the data? Just because a graphic may not be meaningful to a layman doesn’t mean it needs to be redesigned, as long as the audience will understand it.
  • Finally, iterate. If your first visualization doesn’t clearly communicate your message, make changes – remove data not needed for the message, simplify busy graphics, or try different formats. The flexibility of the tools and the ease of generation gives you the opportunity to refine your visualizations quickly and easily.

For further reading, check out some of these articles:

https://www.forbes.com/sites/brentdykes/2016/03/31/data-storytelling-the-essential-data-science-skill-everyone-needs/#3ebd219c52ad

https://news.nationalgeographic.com/2015/09/150922-data-points-visualization-eye-candy-efficiency/

https://hbr.org/2016/06/visualizations-that-really-work

Proudly powered by Pelican, which takes great advantage of Python. Hosted by AWS Amplify