Dropping Database Objects During DACPAC Deployments Can Get Weird
Background
Consider the following scenario:
- You’re a software developer on a team with a SQL Server database needing incremental updates.
- Your team is both mature and with the times, so instead of giving a few people db_owner and trusting them with the arduous, error-prone task of continuous updates via ad hoc scripts, you instead manage changes programmatically via a SQL Server Database Project.
- To deploy the DACPAC generated by your SQL Server Database Project, you utilize SQLPackage.
- You encounter a scenario where you need to drop an object from your database. You remove the object from your data project and kick off your deployment process that utilizes SQLPackage.
- The deployment fails with the error message: “Rows were detected. The schema update is terminating because data loss might occur.”
But you’re not worried. You know others must have run into this issue before. You copy that error message and plug it into Google, or maybe ChatGPT, and find the solution. All you have to do is set the SQLPackage parameters BlockOnPossibleDataLoss to false and DropObjectsNotInSource to true, and that should handle things. You’ve now told SQLPackage to drop objects (tables, views, stored procedures, columns, indexes, etc.) that don’t exist in your DACPAC (which was compiled from the source code of your data project) and not to worry about data loss, since you definitely know what you’re doing and don’t need any pesky guardrails.
Filled with optimism, you kick off another deployment that, much to your chagrin, fails again. What makes matters worse is that you’re getting the same error. How can that be? You’ve got BlockOnPossibleDataLoss=False and DropObjectsNotInSource=True. Why, then, does SQLPackage keep blocking the deployment? Why does it refuse to drop objects from the database that you’ve deleted from your data project?
You continue your research and discover database project pre-deployment scripts. Even though the parameters aren’t working, surely manually scripting the drop operations via a pre-deployment script would do the trick, right? Wrong! Even after adding a pre-deployment script that directly contains your DROP [Object Type] commands, the deployment still fails. At this point, you’re lost. How can this be? Where are reason, logic, law, and order amidst this chaos? How can it be so hard to drop objects from your database using SQLPackage? Why won’t it let the past die?
I’ve been the developer in this scenario and found myself scouring the Internet in search of answers. Microsoft, the creators of SQLPackage, DACPACs, and SQL Server data projects, don’t have much to say about this issue. But thankfully, the global developer community does. Members of the worldwide, beloved hodgepodge of geeks, nerds, programmers, developers, hackers, engineers, data scientists, data analysts, data wizards, and all-around tech enthusiasts came through for me. Thanks to them, I was able to get past this issue. Here’s what I learned.
How DACPAC Deployments Work
The reason a DACPAC deployment might refuse to drop objects, despite having BlockOnPossibleDataLoss=False, DropObjectsNotInSource=True, and a pre-deployment script, comes down to how its internal stages are organized. The order of operations is key. I learned this from an unsung Stack Overflow hero who answered this question with details that should seriously exist somewhere on Microsoft’s official documentation site. Here’s how DACPAC deployments work.
1. An empty database is built from the DACPAC’s schema for use in comparisons. This is the “source” during the deployment. It contains all changes from the source data project, including any objects that have been deleted. The term database is used loosely here; it’s more of a schematic template, devoid of any real data. We’ll call this the sourceDatabase.
2. A comparison between the sourceDatabase and the “live” database, which we’ll call the targetDatabase, takes place. From this comparison, an “apply changes” script is generated. Let’s call it deploy.sql. This script contains all the operations necessary to change the targetDatabase so that it matches the sourceDatabase schematically.
3. Immediately after deploy.sql is generated, validations are run to ensure the operations within it are sufficiently “safe”. The script itself is not executed. Instead, the system checks that if it were to execute this script, it wouldn’t “mess up” the targetDatabase in a way that causes data loss or dependency errors. For example, if deploy.sql has a command to drop a table that’s still referenced by a view, that’s flagged as a problem and the deployment is halted (a small sketch of this follows the list). Or, more commonly, if deploy.sql has commands that would drop objects containing data, that’s flagged as a problem (“data loss may occur”) and the deployment is halted. It’s important to note that deploy.sql is being evaluated here, not executed. It’s almost like reviewing a pull request: the code hasn’t merged yet, but reviewers are making sure that if it did, nothing bad would happen. It’s also important to note that the parameters passed to SQLPackage are irrelevant to this evaluation. BlockOnPossibleDataLoss, DropObjectsNotInSource, and any other parameters do nothing here. To me, that doesn’t make sense, but that’s how it is.
4. If one exists, the pre-deployment script is executed.
5. The deploy.sql script is executed. This is where the parameters passed to SQLPackage are evaluated and respected. Consider this the actual deployment step. This is where the changes are applied and the targetDatabase is updated to match the sourceDatabase schematically.
6. If one exists, the post-deployment script is executed.
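To make the dependency example in Step 3 concrete, here’s a small sketch using hypothetical object names (none of these come from my actual project). A schema-bound view keeps a hard dependency on its underlying table, which is exactly the kind of relationship the validation pass looks for.

-- Hypothetical objects, purely for illustration.
CREATE TABLE dbo.Houses
(
    HouseId INT NOT NULL PRIMARY KEY,
    Name    NVARCHAR(100) NOT NULL
);
GO
CREATE VIEW dbo.vwHouses
WITH SCHEMABINDING
AS
    SELECT HouseId, Name
    FROM dbo.Houses;
GO
-- If a generated deploy.sql tried to drop dbo.Houses while dbo.vwHouses
-- still exists, the validation pass would flag the dependency and halt
-- the deployment before anything is executed.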
Looking at the steps above, it becomes clear why my deployment was failing no matter what I tried. The evaluation in Step 3 is the problem. It doesn’t respect parameters like BlockOnPossibleDataLoss and DropObjectsNotInSource, and it runs before the pre-deployment script where I had tried manually handling object drops. In my case, I was trying to drop a column from an existing table. The column had data, but I knew it was okay to lose it. The column and the data within it had been deprecated, and I was under a mandate to do away with both. However, this evaluation step wouldn’t let me. It insisted on protecting me from what it deemed a dangerous operation, which I fully understand. I’d be more concerned if it didn’t stop and alert users to the potential for data loss. What’s frustrating, though, is that there’s no way to get this evaluation to “chill out” in certain scenarios. It isn’t configurable at all and, as such, must be worked around. I learned as much while searching online for a solution to this problem.
Solving the Issue
As discussed above, the pre-deployment script built into SQL Server data projects doesn’t run before the potentially blocking schema comparison and evaluation step. Furthermore, parameters like BlockOnPossibleDataLoss and DropObjectsNotInSource don’t affect that step, so they’re useless for overcoming a blocking “data loss may occur” problem.
To solve this, what’s needed is a SQL script that runs before the DACPAC deployment. Database object drops should be handled in this script, with the knowledge that there are no guardrails: any commands issued within it will be executed regardless of possible data loss. If you go this route, proceed with caution, and test the script extensively before you run it against a production database.
Gert Drapers, a former partner-level software engineer at Microsoft, dubbed this solution a “pre-pre-deployment script”.
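As a rough sketch, a pre-pre-deployment script for the deprecated-column case above might look like the following. The object names are the same hypothetical placeholders used earlier, and the IF EXISTS guard simply makes the script safe to rerun on environments where the column is already gone.

-- Pre-Pre-Deployment.sql (illustrative sketch; object names are placeholders)
-- Runs before SQLPackage, so by the time the schema comparison happens the
-- deprecated column no longer exists and there is nothing left to block.
IF EXISTS (
    SELECT 1
    FROM sys.columns
    WHERE object_id = OBJECT_ID(N'dbo.Orders')
      AND name = N'LegacyDiscountCode'
)
BEGIN
    -- Any constraints or indexes on the column would need to be dropped first.
    ALTER TABLE dbo.Orders DROP COLUMN LegacyDiscountCode;
END;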
You can read more about the solution on this Stack Overflow question thread. There are also relevant GitHub threads here and here where people are running into this issue of SQLPackage not handling drops correctly.
That being said, how you get your pre-pre-deployment script to run before your DACPAC deployment depends on your CI/CD process. One easy solution, if you’re using a SQL Server database project, is to add the script to its build output by including the following snippet in your *.csproj.
<ItemGroup>
  <None Include="DeploymentScripts\Pre-Pre-Deployment.sql">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
</ItemGroup>
Then, when your project artifacts are uploaded to wherever you place them for deployments, the script will be there, ready for use. That’s the solution I’m using for an Azure Pipelines deployment process with Azure Bicep for Azure Resource Manager (ARM) templates. The pre-pre-deployment script runs using a SqlAzureDacpacDeployment@1 task, which can run plain SQL scripts in addition to full-on DACPAC deployments. Here’s an example of what that could look like.
- task: SqlAzureDacpacDeployment@1
  displayName: "Pre-Pre-Deployment Script"
  inputs:
    azureSubscription: "Game-of-Thrones"
    serverName: "Westeros"
    databaseName: "Winterfell"
    sqlUsername: "aryastark"
    sqlPassword: "theNorthRemembers!@#9392"
    deployType: sqlTask
    sqlFile: "/Path/To/Pre-Pre-Deployment.sql"
Wrapping Things Up
And so, there you have it. Writing this article was a cathartic experience for me. The amount of time I spent trying to figure out what was wrong, only to learn there’s no native way to resolve it, was more than I’d like to admit. The ideal solution to this problem would be for the schema comparison step to be configurable. The fact that parameters passed to SQLPackage are ignored during that step isn’t cool at all. That being said, if you’re reading this right now, I pray that all your DACPAC deployments finish without a hitch. Godspeed.