Registration is open for PASS Data Community Summit!

After two long years, the data platform community will meet again in person in Seattle. Can’t make it to Seattle? No problem! You can also attend virtually.

[Image: PASS Data Community Summit 2022, November 15–18. Join the global data community at the in-person and online Summit conference. Register now.]
Posted in Life in IT

Intelligent Query Processing Session Follow Up

Last week, I presented a session on Intelligent Query Processing for the first-ever Virtual PASS Summit. This summit had a mix of live and pre-recorded sessions. During the pre-recorded sessions, the speaker could hang out with the attendees in the discussion room and join a virtual chat room at the end. My session was live, so I answered a few questions during the session. There were a couple of questions that I couldn’t answer fully during the session, but all the questions were interesting, so I’ll go through them here.

Q: Is the memory grant feedback feature available in 2019?

A: This one is available starting in SQL Server 2017 for queries involving at least one columnstore index. In SQL Server 2019, it’s also available for rowstore queries; in other words, no columnstore index is required.
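As a quick sketch: in 2019, row mode memory grant feedback is controlled per database, and can also be disabled per query with a hint if it misbehaves.

```sql
-- Row mode memory grant feedback is on by default under
-- compatibility level 150; it can be turned off per database:
ALTER DATABASE SCOPED CONFIGURATION
    SET ROW_MODE_MEMORY_GRANT_FEEDBACK = OFF;

-- ...or disabled for a single query with a USE HINT:
-- SELECT ... OPTION (USE HINT ('DISABLE_ROW_MODE_MEMORY_GRANT_FEEDBACK'));
```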

Q: Is the approx distinct count function only in SQL 2019?

A: Yes, APPROX_COUNT_DISTINCT is a SQL Server 2019 feature, but you don’t need the database in 2019 compatibility mode to use it. It’s also the only IQP feature that requires a code change.
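Here’s what that code change looks like (the Sales.Orders table and CustomerID column are made up for illustration):

```sql
-- Approximate distinct count: much less memory, small error margin
SELECT APPROX_COUNT_DISTINCT(CustomerID) AS ApproxCustomers
FROM Sales.Orders;

-- Compare to the exact (and more expensive) version:
SELECT COUNT(DISTINCT CustomerID) AS ExactCustomers
FROM Sales.Orders;
```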

Q: “Even though there’s only one statement, it’s still a multi-statement table valued function.” How?

I showed an example of an inline table valued function and how easy it is to turn it into a multi-statement function. The inline table valued function must have this format, which you can get by right-clicking Table-valued Functions in SSMS and selecting “New Inline Table-valued Function…”

     CREATE FUNCTION dbo.FunctionName
     (   -- Add the parameters for the function here
         <@param1, sysname, @p1>,
         <@param2, sysname, @p2>
     )
     RETURNS TABLE
     AS RETURN
     (   -- Add the SELECT statement with parameter references here
         SELECT 0
     )

Notice that the RETURNS clause must only say TABLE, and there can be just a SELECT statement in the body. In my example, I added the table definition to the RETURNS clause and inserted the rows into it in the body. Even though my function had just one statement, it was a multi-statement function since it didn’t follow the rules for an inline function.
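A minimal sketch of what I mean (the function name is hypothetical): the table definition in RETURNS and the BEGIN…END body make this multi-statement, even though the body contains a single statement.

```sql
CREATE FUNCTION dbo.GetValue (@i int)
RETURNS @t TABLE (Val int)  -- table definition here = multi-statement
AS
BEGIN
    INSERT INTO @t (Val) VALUES (@i);
    RETURN;
END;
```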

Q. Is it always better to go for expressions rather than the scalar UDF in SELECT?

A. I’m going to say that it is better most of the time to type out the expression instead of using a UDF. In 2019, Microsoft added scalar UDF inlining, which takes care of the performance hit some of the time. There is a long list of exceptions in the documentation, so be careful!
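A minimal sketch of the trade-off, with hypothetical names:

```sql
-- A scalar UDF like this historically ran row by row when called
-- in a SELECT, hiding its cost from the optimizer:
CREATE FUNCTION dbo.LineTotal (@Qty int, @Price money)
RETURNS money
AS
BEGIN
    RETURN @Qty * @Price;
END;
GO
-- SELECT dbo.LineTotal(Qty, Price) FROM dbo.OrderLines;

-- Typing out the expression avoids the overhead on any version:
-- SELECT Qty * Price AS LineTotal FROM dbo.OrderLines;
```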

Q. On table variable deferred compilation, if I use OPTION (RECOMPILE) during the second or third run, will it fix the estimation?

A. This is something I hadn’t tried, so I didn’t answer during the session. I think that if you are going to be adding recompiles, you may as well just use a temp table.

By adding OPTION(RECOMPILE) to the SELECT query using a table variable, you can get an accurate cardinality estimate. This could help in some situations, but I think that the new feature is better since you don’t get the recompiles.
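Here’s a sketch of the workaround (dbo.SomeLargeTable is a made-up name). Without the hint, versions before 2019 estimate one row for the table variable; the hint recompiles the statement with the actual row count.

```sql
DECLARE @t TABLE (ID int);

INSERT INTO @t (ID)
SELECT ID FROM dbo.SomeLargeTable;

SELECT *
FROM @t
OPTION (RECOMPILE); -- recompile with the actual row count in @t
```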

Q. Does this functionality change in SQL 2019, where table valued functions default to the assumption of 100 rows and no stats are generated?

A. This is the interleaved execution feature added in 2017, and the functionality is the same in 2019. With interleaved execution, the estimate no longer defaults to 100, but stats are still not generated. It applies when static data is passed in, not columns.

I ended up drawing a blank during one of the demos in this section. This was the example using a CROSS APPLY with a column from the outer query. What threw me was the information on the operator. If I had just looked at the property tooltip, I would have seen that the estimate was just 100 as expected.
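A sketch of that demo pattern (the table and function names are hypothetical): because a column from the outer query is passed in, interleaved execution does not apply, and the function keeps the fixed 100-row guess.

```sql
-- Column argument from the outer query: no interleaved execution,
-- so the estimate for the function stays at the 100-row default.
SELECT o.OrderID, f.Val
FROM dbo.Orders AS o
CROSS APPLY dbo.SomeMstvf(o.OrderID) AS f;
```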

Q. On memory grant feedback, is it possible to fix a minimum memory grant using Query Store or another tool? I am imagining a case where the cardinality changes too much, and it is better not to adjust too much.

A. This is another question that I deferred. During the keynote presentation on Wednesday, Bob Ward and Conor Cunningham announced the ability to add hints to Query Store. Otherwise, you could just add a hint to a query if you wish. Here’s a post from Erin Stellato on the topic.
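If you go the query hint route, the existing memory grant hints are one way to set a floor (the query and table are hypothetical):

```sql
-- MIN_GRANT_PERCENT sets a floor on the memory grant as a
-- percentage of the configured limit; MAX_GRANT_PERCENT caps it.
SELECT *
FROM dbo.BigTable
ORDER BY SomeColumn
OPTION (MIN_GRANT_PERCENT = 10);
```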

Q. Is interleaved execution also only available in Enterprise Edition? Is IQP as a whole only in Enterprise Edition?

A. Interleaved execution is also available in lower editions. Some of the features are Enterprise only, and I’ve found that those typically involve the word “batch” in the name or description.

Thanks to all who attended the session!

Posted in T-SQL

Move a VM with Azure Resource Mover

I use Azure VMs quite often for trying things out, especially when tech reviewing articles for my day job as editor of Simple Talk. Frequently, a special purpose VM will live for just a couple of days or even a few hours because it is easy enough to throw away when I’m done. I appreciate the ability to create a VM in a Resource Group (RG) so that all the resources associated with the VM, like storage and the IP address, can be destroyed with one command.

It’s also easy to move a VM to another RG, or to switch an RG, including the VM, to another subscription.

Another task I may want to perform is to move a VM to another region. I found a set of steps that involves using Azure Recovery Services Vault, which seems a bit complex. Fortunately, I recently heard about a new, much easier way to move VMs and other resources called Azure Resource Mover (in preview), which was announced today.

To try this out, I created a new VM called vmtestmove in the Central US Region.  I then searched for Azure Resource Mover.

The first step was to specify the source and target regions for the move.

In the next step, I specified that I wanted to move my VM.

The next screen explained the process.

It took a couple of minutes to add the resource. Note that the VM must be running to do the move. Once it was done, I clicked the notification and landed on the Across Regions screen. The next step is to validate dependencies. Obviously, I’m missing several resources that the VM needs.

Some dependencies were found, so I clicked Add Dependencies.

It found a network interface and the Resource Group. I selected all and clicked Add dependencies.

I then realized it would have been easier if I had just selected all the resources in the RG instead of choosing only the VM. Eventually, I refreshed the screen and saw that all the resources were in place. Notice that a new RG will be created in the target region.

At this point, I clicked the Destination Configuration of the VM and made some changes. I gave the VM a new name and upgraded to a bigger VM.

The new RG must be prepared, initiated, and moved before the other resources. I just performed the steps as I was prompted in the status and refreshed after each one.

Once I refreshed and saw that the RG was in Delete source pending status, I was ready to move the VM.  I selected everything else and clicked Prepare.

It takes a few minutes for each step, and I had to click Refresh after each one to make sure that the status changed for each item. In some cases, I was notified that a step had completed before it actually finished for the VM. Once all the steps were complete, I saw the Delete source pending status on each item.

At this point, I could see that the new VM was running in the East US region.

Now, I had to decide if I wanted to keep both VMs or to delete the one running in the Central region. I decided to delete it, and just manually deleted the RG to remove it and the other resources the way I would normally delete a VM. There are a couple of other items created to facilitate the move that must be deleted. Be sure to check the documentation for more information.

You also must remove the resources from the Azure Resource Mover page. This is especially important if you want to move any additional VMs or other objects in the future. Just select all the resources and click Remove.

I also decided to try moving an Azure SQL Database using this new service. Check the documentation if you need more information. The main thing that I found is that you must have a SQL server resource in place in the target region before proceeding. Otherwise, it was as straightforward as moving the VM.

The process for moving the VM to another region was simple, and I was glad that it found all the required resources for me. This functionality will be quite useful for many organizations that rely on Azure VMs for their production and downstream environments.


Posted in Cloud computing