As you take software development more seriously, Application Lifecycle Management (ALM) becomes more important.
Application Lifecycle Management
Application Lifecycle Management isn’t something that stands on its own. If you have been developing apps for years, you can’t implement ALM overnight; you need to put some processes in place first.
In this post I’m looking at a number of things that you should consider.
- Get started with environments and solutions
- Deployment automation
- Look after your low code in Azure DevOps
The Power Platform has come a long way. It started as a pure citizen development tool, but pro development has increasingly influenced the way we use it. A while back, using multiple environments to separate development from production became standard practice.
As multiple environments become more common, Application Lifecycle Management becomes more important too.
In short: how do we get our development efforts into production without breaking anything, or at least while reducing the risk of doing so?
Get Started with solutions
I’m going to assume for now that you have already created a development, a test and a production environment. Within the development environment we need to create a solution in Power Apps or Power Automate and then add some components to it. In my case I added a flow and a canvas app to my solution.
We can now save and publish these changes in development, and do some basic testing to ensure that all is working. But now comes the big test: will this work in the test environment, and, even more importantly, will it work in production without breaking previous versions of the solution?
And what if all your apps and flows were created by many people within your organisation in the default environment? Even worse, you might not even know what you have, or who is using it all. Wouldn’t it be helpful if all development was done within solutions? Wouldn’t it be good if all your citizen developers behaved like pro developers?
There is a fine balance between letting citizen developers create exactly the solutions they need and managing those solutions properly. Many citizen developers will complain that ALM is over-engineering.
Quite often I leave apps that are used by small groups of people where they are. Apps that have a greater impact on the business, however, often need to be managed better.
Get Started with DevOps
For all of my projects I create a new project within DevOps. Projects within DevOps let you manage tasks, code and releases of your products.
Within this post I will focus on the code management and the release management. Specifically, I’m going to have a look at how this all works with the Power Platform’s solutions.
Manage Repos in Azure DevOps
Once we have created a project, we need to initialize the repository that will hold the code. We can click on the Initialize button near the bottom of the screenshot below. This adds a README file to the repository.
So what is this code going to be? And why do we want to put it in a repository?
Code in the Power Platform
The solutions mentioned earlier in this post bundle all the elements of your project. An element can be a canvas app, a flow or the definition of a Dataverse table, but there is a lot more. All of these elements are described in small text files, which are bundled together in a single zip file, also called a solution file. When you unzip a solution file you will find a large collection of files in various formats.
These are the files we want to store in our Azure DevOps repository. One of the major benefits of doing this is that we can see exactly what has changed during any given period of time.
Taking this one step further: how about exporting your solutions, apps and flows on a regular basis? That way any changes are tracked automatically, and the exports can even serve as backups!
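If you build such an export pipeline as a YAML pipeline, a regular export can be sketched with a scheduled trigger. The cron time and branch name below are assumptions; adjust them to your setup.

```yaml
# Run the export pipeline every night at 02:00 UTC,
# even when nothing changed (always: true), so exports double as backups.
schedules:
- cron: "0 2 * * *"
  displayName: Nightly solution export
  branches:
    include:
    - main
  always: true
```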
Within the Azure DevOps project there is an option to create pipelines. A pipeline is a set of steps that are executed in order. (Yes, it is a bit like a flow in Power Automate, but different.)
To create a Pipeline select pipelines in the left hand navigation, then click on the Create Pipeline button.
We can now select where we want to store our code. I am using an Azure Repos repository, but you could of course use GitHub, Bitbucket or Subversion instead. I’m going to use the classic editor, via the link at the bottom.
Now the following dialog will appear, where the repository related to the pipeline can be selected. In my project I only have one repository but you could potentially have more than one.
Using all the default options, I just press the Continue button and my pipeline is created:
Then click on Agent job 1 and add a few steps. I need 4 steps in total; the first 3 are all related to the Power Platform. Typing “Power” in the search box helps a bit:
Within no time I’m adding the following 4 steps:
- Power Platform Tool Installer
- Power Platform Export Solution
- Power Platform Unpack Solution
- Command Line Script
It’s worth explaining a bit about what is going on here.
Our pipeline will first install the tools that look after the exporting and unpacking of solutions. Then it will export the solution from our development environment. Next, the exported solution will be unpacked (unzipping the solution file so that we end up with mostly text files). The last step will then upload the files to our repository.
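Although this post uses the classic editor, the same four steps can also be sketched in YAML. The task names come from Microsoft’s Power Platform Build Tools; the service connection name, solution name and paths here are placeholders for your own values.

```yaml
steps:
# 1. Install the Power Platform build tools on the agent.
- task: PowerPlatformToolInstaller@2

# 2. Export the unmanaged solution from the development environment.
- task: PowerPlatformExportSolution@2
  inputs:
    authenticationType: PowerPlatformSPN
    PowerPlatformSPN: 'My Dev Service Connection'   # placeholder
    SolutionName: 'PVTestSolution'                  # placeholder
    SolutionOutputFile: '$(Build.SourcesDirectory)\PV Test solution.zip'

# 3. Unpack the zip into plain text files.
- task: PowerPlatformUnpackSolution@2
  inputs:
    SolutionInputFile: '$(Build.SourcesDirectory)\PV Test solution.zip'
    SolutionTargetFolder: '$(Build.SourcesDirectory)\solutions'

# 4. Commit and push the unpacked files (the script is shown later in this post).
- script: echo commit all changes
```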
Is it really that simple? No!
Now, we will need to configure the 4 steps to get the first part of our Application Lifecycle Management solution in place.
Power Platform Tool Installer
This is the easy step. There is nothing to configure here.
Power Platform Export Solution
The Export Solution step is slightly more complicated, especially the authentication part. You could supply a username and password here, but really, the Service Principal/client secret option is the better choice.
In the service connection you supply the Tenant ID, the Application ID and a client secret, matching what you configured in an app registration within Azure.
Within your app registration in Azure, make sure that you have the right API permissions configured. The permissions should look like this:
Once all of the above is done, the pipeline can create the zip file for our managed or unmanaged solution. In our case we want it to be unmanaged. I’ve given the solution file the name PV Test solution.zip, but you can call it whatever suits your project.
Power Platform Unpack Solution
Now that we have a zip file, we want to unpack it. Having zip files in a repository might give us a backup, but it doesn’t give us the option to compare code.
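A quick sketch of why this matters (the repository name and file contents below are made up): with plain text files in a git repository, a change shows up as a readable diff, which a binary zip file cannot give you.

```shell
# Create a throwaway repository with one text file.
mkdir -p diff-demo
git -C diff-demo init -q
git -C diff-demo config user.email "demo@example.com"
git -C diff-demo config user.name "Demo User"

printf 'DisplayName: My App v1\n' > diff-demo/app.txt
git -C diff-demo add app.txt
git -C diff-demo commit -qm "first version"

# Change the file: git now shows exactly what changed.
printf 'DisplayName: My App v2\n' > diff-demo/app.txt
git -C diff-demo diff -- app.txt
```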
The Unpack Solution step is easier to configure. The only thing to look out for is that you have to type the solution input file name yourself: while we are developing the pipeline the solution file doesn’t exist yet, so it can’t be selected.
Command Line Script
And now the final step: uploading all our code to our repository.
So far, the exporting and unpacking have created a set of files within the pipeline’s working storage. This storage is cleaned at the start of each pipeline run, so creating these files alone does nothing to the repository.
To make the last step work, we will need to run a script.
Using the following line of code, the command line just prints “commit all changes”. This isn’t too exciting, but it can be very helpful for seeing where in your script things go wrong.
echo commit all changes
Then the next two lines tell the script which account to use to commit the changes:
git config user.email "email@example.com"
git config user.name "Pieter Veenstra"
Then I’m adding a few commands that show me which files I have. These aren’t strictly needed, but when changes don’t come through in the repository it is useful to know whether the unpack step actually created the files.
dir /w
dir .\solutions\*
Now for the real part of the code update. First the main branch is checked out. Then all our changes are staged, the changes are committed, and finally there is a complicated-looking push command.
git checkout main
git add --all
git commit -m "solution"
echo push code to repo
git push --all https://firstname.lastname@example.org/SharePains/PV%%20Test%%20Solution%%20Project/_git/PV%%20Test%%20Solution%%20Project
That push command needs a bit more explanation. First of all, there are the %%20 parts in the path. Because I have spaces in my project name, you would expect %20 to encode the spaces. But that doesn’t work here: in a Command Line Script step the % sign itself needs to be escaped, and that is done by doubling it to %%.
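The two layers of encoding can be sketched as follows (this is just an illustration of the string transformations; the doubling is only needed because the script runs inside a cmd-based Command Line Script step, where % is a special character):

```shell
# Step 1: percent-encode the spaces in the project name.
encoded=$(printf '%s' 'PV Test Solution Project' | sed 's/ /%20/g')
echo "$encoded"

# Step 2: in a cmd-based pipeline step, each % must then be doubled.
cmd_escaped=$(printf '%s' "$encoded" | sed 's/%/%%/g')
echo "$cmd_escaped"
```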
Then there is the credential at the front of the URL (shown here as a placeholder). This is the PAT, or Personal Access Token.
You can generate a PAT in Azure DevOps and use it in the script.
The total result is the following script (make sure that you update the email address, name and PAT!):
echo commit all changes
git config user.email "email@example.com"
git config user.name "Pieter Veenstra"
dir /w
dir .\solutions\*
git checkout main
git add --all
git commit -m "solution"
echo push code to repo
git push --all https://firstname.lastname@example.org/SharePains/PV%%20Test%%20Solution%%20Project/_git/PV%%20Test%%20Solution%%20Project
Running the Pipeline
Now we can finally run the pipeline by selecting the Save & queue option.
If all is done correctly, this results in a successful run:
All that is left to do is double-check the repository, where we will find our files.
The next steps
A follow-up post about Application Lifecycle Management with the Power Platform will cover:
Importing the solution into test/production