Snowflake is one of the newest platforms available for data warehouse development.
Cursors are not the best solution where performance is concerned, but if you run into issues with another coding technique and need to get something done quickly, a cursor can be a viable alternative. It may take longer to process the data, but the coding time can be much shorter. For a one-time process or nightly batch, this can do the trick.
This article contains a good description of cursors, an example, and some alternatives.
https://www.mssqltips.com/sqlservertip/1599/sql-server-cursor-example/
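As a quick illustration of the pattern the article describes, here is a minimal cursor sketch. The table `dbo.Orders` and the procedure `dbo.ProcessOrder` are hypothetical placeholders, not objects from the article:

```sql
-- Minimal cursor sketch: iterate rows one at a time.
-- dbo.Orders and dbo.ProcessOrder are hypothetical examples.
DECLARE @OrderID int;

DECLARE order_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT OrderID FROM dbo.Orders WHERE Processed = 0;

OPEN order_cursor;
FETCH NEXT FROM order_cursor INTO @OrderID;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Per-row work goes here, e.g. a stored procedure call.
    EXEC dbo.ProcessOrder @OrderID;
    FETCH NEXT FROM order_cursor INTO @OrderID;
END;

CLOSE order_cursor;
DEALLOCATE order_cursor;
```

`LOCAL FAST_FORWARD` is the cheapest cursor option when you only need to read forward through the rows once.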
Using the Execution Plan in SSMS can answer many questions about why the query you just wrote runs so slowly. A well-constructed T-SQL script should execute reasonably quickly if the database is indexed properly. Since most users cannot create or administer indexes on the databases, the best way to improve performance is to improve the T-SQL script itself.
Here is a link that tests your ability to use the Execution Plan in SSMS.
It also includes a link to the book that I recommended last year.
https://www.red-gate.com/library/sql-server-execution-plans-2nd-edition
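Alongside the graphical plan, SSMS can print runtime statistics to the Messages tab, which makes before/after comparisons easy. A small sketch, using a hypothetical `dbo.Orders` table:

```sql
-- Runtime statistics pair well with the Actual Execution Plan (Ctrl+M).
-- dbo.Orders is a placeholder table for illustration.
SET STATISTICS IO ON;    -- logical/physical reads per table
SET STATISTICS TIME ON;  -- CPU and elapsed time per statement

SELECT CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders
GROUP BY CustomerID;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

Comparing logical reads between two versions of a query is often a quicker sanity check than eyeballing the plan shapes.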
We need to do this all the time. This is just one example of a way to load multiple Excel data files into a table.
http://www.sqlservercentral.com/articles/OPENROWSET/163645/
The same concept could be used to load flat files.
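For reference, the core of the technique is an `OPENROWSET` call against the Excel OLE DB provider. This sketch assumes the Microsoft ACE OLE DB provider is installed and 'Ad Hoc Distributed Queries' is enabled on the server; the file path, sheet name, and staging table are placeholders:

```sql
-- Assumes the ACE OLE DB provider is installed and ad hoc
-- distributed queries are enabled; path/sheet/table are placeholders.
INSERT INTO dbo.StagingTable (Col1, Col2)
SELECT Col1, Col2
FROM OPENROWSET(
    'Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0;Database=C:\Imports\File1.xlsx;HDR=YES',
    'SELECT * FROM [Sheet1$]');
```

Wrapping this in a loop over a list of file names (as the article does) lets you load many workbooks into one table.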
Scalar functions are very useful in SQL Server, but they need to be used correctly. You may find that a simple query takes far too long to run, and the only problem is that the function has to be evaluated for every row returned. There are many ways to write queries, and this article shows why scalar functions can cause you headaches.
http://www.sqlservercentral.com/articles/T-SQL/135321/?utm_source=SSC&utm_medium=pubemail
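A small sketch of the problem, using a hypothetical `dbo.Person` table: the optimizer treats a scalar UDF as a black box and calls it once per row, whereas the equivalent inline expression can be optimized normally.

```sql
-- Hypothetical scalar UDF: evaluated once for every returned row.
CREATE FUNCTION dbo.fn_FullName (@First nvarchar(50), @Last nvarchar(50))
RETURNS nvarchar(101)
AS
BEGIN
    RETURN @First + N' ' + @Last;
END;
GO

-- Slower: the UDF is a black box invoked per row.
SELECT dbo.fn_FullName(FirstName, LastName) AS FullName
FROM dbo.Person;

-- Usually faster: inline the expression (or use an inline
-- table-valued function for reusable logic).
SELECT FirstName + N' ' + LastName AS FullName
FROM dbo.Person;
```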
The OUTPUT clause in T-SQL allows you to see what was actually inserted or deleted. This can be used for auditing or archiving purposes.
http://www.sqlservercentral.com/articles/T-SQL/155154/?utm_source=SSC&utm_medium=pubemail
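For example, a delete can archive the removed rows in the same statement, with no separate SELECT and no window for the data to change in between. The tables here are hypothetical:

```sql
-- Archive rows as they are deleted; dbo.Orders and
-- dbo.OrdersArchive are placeholder tables.
DELETE FROM dbo.Orders
OUTPUT DELETED.OrderID, DELETED.CustomerID, SYSUTCDATETIME()
INTO dbo.OrdersArchive (OrderID, CustomerID, DeletedAtUtc)
WHERE OrderDate < '20150101';
```

`OUTPUT INSERTED.*` works the same way for INSERT and UPDATE statements.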
While DISTINCT and GROUP BY are identical in a lot of scenarios, here is one case where the GROUP BY approach definitely leads to better performance.
The DISTINCT variation took 4X as long, used 4X the CPU, and almost 6X the reads when compared to the GROUP BY variation.
https://sqlperformance.com/2017/01/t-sql-queries/surprises-assumptions-group-by-distinct
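The shape of the case in the article looks roughly like this sketch (hypothetical `dbo.Customers` and `dbo.Orders` tables): when an expensive per-row expression such as a correlated FOR XML subquery is involved, DISTINCT may compute it for every row before de-duplicating, while GROUP BY can collapse the duplicates first.

```sql
-- DISTINCT variation: the correlated subquery may run for every
-- Customers row before duplicates are removed.
SELECT DISTINCT c.CustomerID,
       OrderList = (SELECT CONVERT(varchar(12), o.OrderID) + ';'
                    FROM dbo.Orders AS o
                    WHERE o.CustomerID = c.CustomerID
                    FOR XML PATH(''))
FROM dbo.Customers AS c;

-- GROUP BY variation: same result, but duplicates are collapsed
-- first, so the subquery can run once per customer.
SELECT c.CustomerID,
       OrderList = (SELECT CONVERT(varchar(12), o.OrderID) + ';'
                    FROM dbo.Orders AS o
                    WHERE o.CustomerID = c.CustomerID
                    FOR XML PATH(''))
FROM dbo.Customers AS c
GROUP BY c.CustomerID;
```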
By joining two data sets on their common keys, we force the EXCEPT statement to compare only the remaining columns. This approach to checking row differences can reduce errors by removing the tedious hand-coding of column-by-column comparisons and the null-handling logic they require.
http://www.sqlservercentral.com/articles/EXCEPT/107520/?utm_source=SSC&utm_medium=pubemail
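A compact sketch of the pattern, with hypothetical `dbo.Source` and `dbo.Target` tables keyed by `ID`: the join restricts both sides to common keys, then EXCEPT returns the rows whose non-key columns differ.

```sql
-- Rows present in both tables (by ID) whose other columns differ.
-- EXCEPT is NULL-safe: two NULLs are treated as not distinct,
-- so no ISNULL/COALESCE gymnastics are needed.
SELECT s.ID, s.Col1, s.Col2
FROM dbo.Source AS s
JOIN dbo.Target AS t ON t.ID = s.ID
EXCEPT
SELECT t.ID, t.Col1, t.Col2
FROM dbo.Source AS s
JOIN dbo.Target AS t ON t.ID = s.ID;
```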
The use of wildcard character searches in SQL Server scripts is common for all of us. This article makes some good points about how to do it correctly and what to avoid.
Please read it; it may help you better understand your options when it comes to string searches.
http://www.sqlservercentral.com/articles/T-SQL/130558/?utm_source=SSC&utm_medium=pubemail
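The biggest performance point is where the wildcard sits. A quick sketch against a hypothetical `dbo.Person` table:

```sql
-- A trailing wildcard can use an index seek on LastName...
SELECT LastName FROM dbo.Person WHERE LastName LIKE N'Smi%';

-- ...while a leading wildcard forces a scan of every row.
SELECT LastName FROM dbo.Person WHERE LastName LIKE N'%son';

-- Square brackets match a set or range of characters.
SELECT LastName FROM dbo.Person WHERE LastName LIKE N'[A-C]%';
```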
This is an issue we had to work on recently, and I thought it might be useful since we frequently work with FTP sites.
Using a C# script task and the free WinSCP library makes this a fairly easy job in Integration Services.
https://www.timmitchell.net/post/2017/01/20/retrieve-a-list-of-files-from-ftp-using-ssis/