@@ -2,7 +2,7 @@
uid: script-add-databricks-metadata-descriptions
title: Add Databricks Metadata Descriptions
author: Johnny Winter
updated: 2025-09-04
updated: 2026-04-08
applies_to:
products:
- product: Tabular Editor 2
@@ -16,7 +16,8 @@ applies_to:
This script was created as part of the Tabular Editor x Databricks series. In Unity Catalog it is possible to provide descriptive comments for tables and columns. This script can re-use this information to automatically populate table and column descriptions in your semantic model.
<br></br>
> [!NOTE]
> This script requires the Simba Spark ODBC Driver to be installed (download from https://www.databricks.com/spark/odbc-drivers-download)
> This script requires a Databricks ODBC driver. We recommend the new [Databricks ODBC Driver](https://www.databricks.com/spark/odbc-drivers-download), which replaces the legacy Simba Spark ODBC Driver. The script auto-detects which driver is installed and uses it accordingly.

Each run of the script will prompt the user for a Databricks Personal Access Token. This is required to authenticate to Databricks.
The script utilises the information_schema tables in Unity Catalog to retrieve relationship information, so you may need to double check with your Databricks administrator to make sure you have permission to query these tables.
<br></br>
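As a rough illustration of the kind of lookup involved, the sketch below builds a table-comment query against Unity Catalog's `information_schema`. This is a minimal sketch only: the catalog, schema, and table names passed in are placeholders, and the exact SQL the script sends is the authoritative version.

```csharp
using System;

public class InfoSchemaQuerySketch
{
    // Builds an illustrative table-comment lookup against Unity Catalog's
    // information_schema. The identifiers supplied by the caller are
    // placeholders; the SQL embedded in the script itself is authoritative.
    public static string BuildTableCommentQuery(string catalog, string schema, string table)
    {
        return "SELECT comment FROM " + catalog + ".information_schema.tables" +
               " WHERE table_schema = '" + schema + "'" +
               " AND table_name = '" + table + "'";
    }
}
```

In the script, a query of this shape is executed over the ODBC connection and the returned comment is written to the table's Description property.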
@@ -37,7 +38,8 @@ The script utilises the information_schema tables in Unity Catalog to retrieve r
* For each table processed, a message box will display the number of descriptions updated.
* Click OK to continue to the next table.
* Notes:
* - This script requires the Simba Spark ODBC Driver to be installed (download from https://www.databricks.com/spark/odbc-drivers-download)
* - This script requires the Databricks ODBC Driver (recommended) or legacy Simba Spark ODBC Driver to be installed (download from https://www.databricks.com/spark/odbc-drivers-download)
* - The script auto-detects which driver is installed
* - Each run of the script will prompt the user for a Databricks Personal Access Token
*/
#r "Microsoft.VisualBasic"
@@ -376,6 +378,37 @@ do
// toggle the 'Running Macro' spinbox
ScriptHelper.WaitFormVisible = true;

// auto-detect Databricks ODBC driver
string driverPath;
string newDriverPath = @"C:\Program Files\Databricks ODBC Driver";
string legacyDriverPath = @"C:\Program Files\Simba Spark ODBC Driver";

if (System.IO.Directory.Exists(newDriverPath))
{
driverPath = newDriverPath;
}
else if (System.IO.Directory.Exists(legacyDriverPath))
{
driverPath = legacyDriverPath;
}
else
{
ScriptHelper.WaitFormVisible = false;
Interaction.MsgBox(
@"No Databricks ODBC driver found.

Please install the Databricks ODBC Driver from:
https://www.databricks.com/spark/odbc-drivers-download

Expected installation paths:
" + newDriverPath + @"
" + legacyDriverPath,
MsgBoxStyle.Critical,
"ODBC Driver Not Found"
);
return;
}

//for each selected table, get the Databricks connection info from the partition info
foreach (var t in Selected.Tables)
{
@@ -391,11 +424,11 @@ foreach (var t in Selected.Tables)
string tableName = connectionInfo.TableName;
//set DBX connection string
var odbcConnStr =
@"DSN=Simba Spark;driver=C:\Program Files\Simba Spark ODBC Driver;host="
@"Driver=" + driverPath + ";Host="
+ serverHostname
+ ";port=443;httppath="
+ ";Port=443;HTTPPath="
+ httpPath
+ ";thrifttransport=2;ssl=1;authmech=3;uid=token;pwd="
+ ";SSL=1;ThriftTransport=2;AuthMech=3;UID=token;PWD="
+ dbxPAT;

//test connection
@@ -409,15 +442,13 @@ foreach (var t in Selected.Tables)
// toggle the 'Running Macro' spinbox
ScriptHelper.WaitFormVisible = false;
Interaction.MsgBox(
@"Connection failed
@"Connection failed (using driver: " + driverPath + @")

Please check the following prequisites:
Please check the following prerequisites:

- you must have the Simba Spark ODBC Driver installed
- you must have the Databricks ODBC Driver installed
(download from https://www.databricks.com/spark/odbc-drivers-download)

- the ODBC driver must be installed in the path C:\Program Files\Simba Spark ODBC Driver

- check that the Databricks server name "
+ serverHostname
+ @" is correct
@@ -557,7 +588,7 @@ Either:
}
```
### Explanation
The script uses WinForms to prompt for a Databricks personal access token, used to authenticate to Databricks. For each selected table, the script retrieves the Databricks connection string information and schema and table name from the M query in the selected table's partition. Using the Spark ODBC driver it then sends a SQL query to Databricks that queries the information_schema tables to return the table description that is defined in Unity Catalog. This is then updated on the table description in the semantic model. A second SQL Query using the DESCRIBE command is also sent to the selected table to get column descriptions. The results of this are looped through, with descriptions added in the model. Once the script has run on each selected table, a dialogue box is displayed to show the number of descriptions updated.
The script uses WinForms to prompt for a Databricks personal access token, used to authenticate to Databricks. It auto-detects whether the new Databricks ODBC Driver or the legacy Simba Spark ODBC Driver is installed. For each selected table, the script retrieves the Databricks connection string information and schema and table name from the M query in the selected table's partition. Using the detected ODBC driver it then sends a SQL query to Databricks that queries the information_schema tables to return the table description that is defined in Unity Catalog. This is then updated on the table description in the semantic model. A second SQL Query using the DESCRIBE command is also sent to the selected table to get column descriptions. The results of this are looped through, with descriptions added in the model. Once the script has run on each selected table, a dialogue box is displayed to show the number of descriptions updated.
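The DESCRIBE handling described above can be sketched as a pure function. This is an illustration only, assuming the familiar `(col_name, data_type, comment)` row shape that DESCRIBE returns; in the script the rows come from an ODBC data reader and the comments are written to each column's Description property.

```csharp
using System;
using System.Collections.Generic;

public class DescribeResultSketch
{
    // Maps DESCRIBE output rows (col_name, data_type, comment) to a
    // column-name -> description dictionary. DESCRIBE appends metadata
    // sections after the column list, separated by blank rows and rows
    // whose first field starts with '#', so we stop at the first of those.
    public static Dictionary<string, string> ToDescriptions(IEnumerable<string[]> rows)
    {
        var result = new Dictionary<string, string>();
        foreach (var row in rows)
        {
            var name = row[0];
            if (string.IsNullOrWhiteSpace(name) || name.StartsWith("#")) break;
            var comment = row.Length > 2 ? row[2] : null;
            if (!string.IsNullOrEmpty(comment)) result[name] = comment;
        }
        return result;
    }
}
```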

## Example Output

@@ -2,7 +2,7 @@
uid: script-create-databricks-relationships
title: Create Databricks Relationships
author: Johnny Winter
updated: 2025-09-04
updated: 2026-04-08
applies_to:
products:
- product: Tabular Editor 2
@@ -16,7 +16,8 @@ applies_to:
This script was created as part of the Tabular Editor x Databricks series. In Unity Catalog it is possible to define primary and foreign key relationships between tables. This script can re-use this information to automatically detect and create relationships in Tabular Editor. Whilst importing the relationships, the script will also hide primary and foreign keys and set IsAvailableInMDX to false (with the exception of DateTime type primary keys). Primary keys are also marked as IsKey = TRUE in the semantic model.
<br></br>
> [!NOTE]
> This script requires the Simba Spark ODBC Driver to be installed (download from https://www.databricks.com/spark/odbc-drivers-download)
> This script requires a Databricks ODBC driver. We recommend the new [Databricks ODBC Driver](https://www.databricks.com/spark/odbc-drivers-download), which replaces the legacy Simba Spark ODBC Driver. The script auto-detects which driver is installed and uses it accordingly.

Each run of the script will prompt the user for a Databricks Personal Access Token. This is required to authenticate to Databricks.
The script utilises the information_schema tables in Unity Catalog to retrieve relationship information, so you may need to double check with your Databricks administrator to make sure you have permission to query these tables.
<br></br>
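As an illustration of the kind of information_schema join involved, the sketch below builds a foreign-key lookup. The view and column names here are assumptions based on the standard information_schema layout, and a real foreign-key resolution may need the `referential_constraints` view as well; the SQL embedded in the script is the authoritative version.

```csharp
using System;

public class FkQuerySketch
{
    // Sketches a join over Unity Catalog's information_schema that returns
    // foreign-key column pairs for one table. View and column names are
    // assumptions from the standard information_schema layout.
    public static string BuildFkQuery(string catalog, string schema, string table)
    {
        return
            "SELECT kcu.column_name, ccu.table_name AS pk_table, ccu.column_name AS pk_column " +
            "FROM " + catalog + ".information_schema.table_constraints tc " +
            "JOIN " + catalog + ".information_schema.key_column_usage kcu " +
            "  ON tc.constraint_name = kcu.constraint_name " +
            "JOIN " + catalog + ".information_schema.constraint_column_usage ccu " +
            "  ON tc.constraint_name = ccu.constraint_name " +
            "WHERE tc.constraint_type = 'FOREIGN KEY' " +
            "  AND tc.table_schema = '" + schema + "'" +
            "  AND tc.table_name = '" + table + "'";
    }
}
```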
@@ -41,7 +42,8 @@ The script utilises the information_schema tables in Unity Catalog to retrieve r
For each table processed, a message box will display the number of relationships created.
* Click OK to continue to the next table.
* Notes:
* - This script requires the Simba Spark ODBC Driver to be installed (download from https://www.databricks.com/spark/odbc-drivers-download)
* - This script requires the Databricks ODBC Driver (recommended) or legacy Simba Spark ODBC Driver to be installed (download from https://www.databricks.com/spark/odbc-drivers-download)
* - The script auto-detects which driver is installed
* - Each run of the script will prompt the user for a Databricks Personal Access Token
*/
#r "Microsoft.VisualBasic"
@@ -380,6 +382,37 @@ do
// toggle the 'Running Macro' spinbox
ScriptHelper.WaitFormVisible = true;

// auto-detect Databricks ODBC driver
string driverPath;
string newDriverPath = @"C:\Program Files\Databricks ODBC Driver";
string legacyDriverPath = @"C:\Program Files\Simba Spark ODBC Driver";

if (System.IO.Directory.Exists(newDriverPath))
{
driverPath = newDriverPath;
}
else if (System.IO.Directory.Exists(legacyDriverPath))
{
driverPath = legacyDriverPath;
}
else
{
ScriptHelper.WaitFormVisible = false;
Interaction.MsgBox(
@"No Databricks ODBC driver found.

Please install the Databricks ODBC Driver from:
https://www.databricks.com/spark/odbc-drivers-download

Expected installation paths:
" + newDriverPath + @"
" + legacyDriverPath,
MsgBoxStyle.Critical,
"ODBC Driver Not Found"
);
return;
}

//for each selected table, get the Databricks connection info from the partition info
foreach (var t in Selected.Tables)
{
@@ -433,11 +466,11 @@ foreach (var t in Selected.Tables)

//set DBX connection string
var odbcConnStr =
@"DSN=Simba Spark;driver=C:\Program Files\Simba Spark ODBC Driver;host="
@"Driver=" + driverPath + ";Host="
+ serverHostname
+ ";port=443;httppath="
+ ";Port=443;HTTPPath="
+ httpPath
+ ";thrifttransport=2;ssl=1;authmech=3;uid=token;pwd="
+ ";SSL=1;ThriftTransport=2;AuthMech=3;UID=token;PWD="
+ dbxPAT;

//test connection
@@ -451,15 +484,13 @@ foreach (var t in Selected.Tables)
// toggle the 'Running Macro' spinbox
ScriptHelper.WaitFormVisible = false;
Interaction.MsgBox(
@"Connection failed
@"Connection failed (using driver: " + driverPath + @")

Please check the following prequisites:
Please check the following prerequisites:

- you must have the Simba Spark ODBC Driver installed
- you must have the Databricks ODBC Driver installed
(download from https://www.databricks.com/spark/odbc-drivers-download)

- the ODBC driver must be installed in the path C:\Program Files\Simba Spark ODBC Driver

- check that the Databricks server name "
+ serverHostname
+ @" is correct
@@ -585,7 +616,7 @@ Please check the following prequisites:
}
```
### Explanation
The script uses WinForms to prompt for a Databricks personal access token, used to authenticate to Databricks. For each selected table, the script retrieves the Databricks connection string information and schema and table name from the M query in the selected table's partition. Using the Spark ODBC driver it then sends a SQL query to Databricks that queries the information_schema tables to find any foreign key relationships for the table that are defined in Unity Catalog. For each row returned in the SQL query, the script looks for matching table and column names in the model and where a relationship does not already exist, a new one is created. For role playing dimensions, where the same table might have multiple foreign keys relating to a single table, the first relationship detected will be the active one, and all other subsequent relationships are created as inactive. The script will also hide primary and foreign keys and set IsAvailableInMDX to false (with the exception of DateTime type primary keys). Primary keys are also marked as IsKey = TRUE in the semantic model. After the script has run for each selected table, a dialogue box will appear showing how many new relationships were created.
The script uses WinForms to prompt for a Databricks personal access token, used to authenticate to Databricks. It auto-detects whether the new Databricks ODBC Driver or the legacy Simba Spark ODBC Driver is installed. For each selected table, the script retrieves the Databricks connection string information and schema and table name from the M query in the selected table's partition. Using the detected ODBC driver it then sends a SQL query to Databricks that queries the information_schema tables to find any foreign key relationships for the table that are defined in Unity Catalog. For each row returned in the SQL query, the script looks for matching table and column names in the model and where a relationship does not already exist, a new one is created. For role playing dimensions, where the same table might have multiple foreign keys relating to a single table, the first relationship detected will be the active one, and all other subsequent relationships are created as inactive. The script will also hide primary and foreign keys and set IsAvailableInMDX to false (with the exception of DateTime type primary keys). Primary keys are also marked as IsKey = TRUE in the semantic model. After the script has run for each selected table, a dialogue box will appear showing how many new relationships were created.
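The first-active/rest-inactive rule for role-playing dimensions described above can be sketched as pure logic. This is a minimal illustration only, not the script's actual TOM calls: the first relationship pointing at a given target table is created active, and any later relationship to the same target is created inactive.

```csharp
using System;
using System.Collections.Generic;

public class RolePlayingSketch
{
    // Given the target table of each detected foreign key, in detection
    // order, returns whether the corresponding relationship should be
    // active: true the first time a target table appears, false afterwards.
    public static List<bool> AssignActiveFlags(IEnumerable<string> targetTables)
    {
        var seen = new HashSet<string>();
        var flags = new List<bool>();
        foreach (var target in targetTables)
            flags.Add(seen.Add(target)); // HashSet.Add is true only on first insert
        return flags;
    }
}
```

For example, two foreign keys to a Date dimension (OrderDate, ShipDate) plus one to Customer yield one active Date relationship, one inactive Date relationship, and one active Customer relationship.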

## Example Output

17 changes: 8 additions & 9 deletions content/features/Semantic-Model/direct-lake-sql-model.md
@@ -2,7 +2,7 @@
uid: direct-lake-sql-model
title: Direct Lake on SQL Semantic Models
author: Morten Lønskov
updated: 2024-08-22
updated: 2026-03-27
applies_to:
products:
- product: Tabular Editor 2
@@ -24,14 +24,13 @@ Direct Lake on SQL semantic models connect directly to data sources stored in [O
> As of [Tabular Editor 3.22.0](../../references/release-notes/3_22_0.md), Tabular Editor 3 supports Direct Lake on OneLake, which is recommended in most scenarios. See our [Direct Lake guidance](xref:direct-lake-guidance) article for more information.

Tabular Editor 3 can create and connect to this type of model. For a tutorial on this please refer to our blog article: [Direct Lake semantic models: How to use them with Tabular Editor](https://blog.tabulareditor.com/2023/09/26/fabric-direct-lake-with-tabular-editor-part-2-creation/).
Tabular Editor 3 can create direct lake semantic models with both the Lakehouse and Datawarehouse SQL Endpoint.
Tabular Editor 3 can create Direct Lake semantic models with both the Lakehouse and Datawarehouse SQL Endpoint.

Tabular Editor 2 can connect to Direct Lake semantic models, but does not have any built in functionality to create new tables or direct lake semantic models. This needs to be done manually or with a C# script.
Tabular Editor 2 can connect to Direct Lake semantic models, but does not have any built-in functionality to create new tables or Direct Lake semantic models. This needs to be done manually or with a C# script.

<div class="NOTE">
<h5>Direct Lake limitations</h5>
There are several limitations to the changes that can be made to a Direct Lake model: <a href="https://learn.microsoft.com/en-us/power-bi/enterprise/directlake-overview#known-issues-and-limitations">Direct Lake Known Issues and Limitations</a> We recommend <a "https://www.sqlbi.com/blog/marco/2024/04/06/direct-lake-vs-import-mode-in-power-bi/"> this article by SQLBI</a> for a initial overview of choosing between Direct Lake and Import mode.
</div>
> [!NOTE]
> **Direct Lake limitations**
> There are several limitations to the changes that can be made to a Direct Lake model. See [Direct Lake Considerations and Limitations](https://learn.microsoft.com/en-us/fabric/fundamentals/direct-lake-overview#considerations-and-limitations) for the full list. See also [this article by SQLBI](https://www.sqlbi.com/blog/marco/2024/04/06/direct-lake-vs-import-mode-in-power-bi/) for an overview of choosing between Direct Lake and Import mode.

## Creating a Direct Lake on SQL model in Tabular Editor 3

@@ -43,7 +42,7 @@ Using the checkbox ensures that Direct Lake specific properties and annotations

> [!NOTE]
> Direct Lake on SQL models currently use a collation that is different from regular Power BI import semantic models. This may lead to different results when querying the model, or when referencing object names in DAX code.
For more information please see this blog post by Kurt Buhler: [Case-sensitive models in Power BI: consequences & considerations](https://data-goblins.com/power-bi/case-specific)
> For more information, see this blog post by Kurt Buhler: [Case-sensitive models in Power BI: consequences & considerations](https://data-goblins.com/power-bi/case-specific).

> [!IMPORTANT]
> As of [Tabular Editor 3.22.0](../../references/release-notes/3_22_0.md), the Direct Lake checkbox has been removed from the New Model dialog. You must [manually set the collation on your model to match that of your Fabric Warehouse](xref:direct-lake-guidance#collation) if using Direct Lake on SQL.
@@ -62,7 +61,7 @@ The top title bar of Tabular Editor shows which type of model is open in that in

## Converting a Direct Lake model to Import Mode

The below C# script converts and existing model into 'Import Mode'. This can be useful if the data latency requirements of your model does not require Direct Lake or you want to avoid the limitations of a Direct Lake model but have already started building one inside Fabric.
The below C# script converts an existing model into Import mode. This can be useful if the data latency requirements of your model do not require Direct Lake, or if you want to avoid the limitations of a Direct Lake model but have already started building one inside Fabric.

Running the script is possible when Tabular Editor is connected to a semantic model through the XMLA endpoint. However, saving changes directly back to the Power BI/Fabric workspace is not supported by Microsoft. To circumvent this, the recommended approach is to use the "Model > Deploy..." option. This allows for the deployment of the newly converted model as a new entity in a workspace.
