FABRIC SDK Notebook Analytics Plugin
This package is a plugin for the FABRIC SDK, designed for use within Microsoft Fabric's online Spark/Python notebook environments and Spark Job Definitions (SJDs). It appears to be an internal component that provides analytics capabilities to the broader Fabric platform, rather than a library intended for direct, user-level imports. Microsoft Fabric is an end-to-end data analytics platform covering data engineering, data science, data warehousing, and real-time analytics. The current version is 0.0.3.post4.
Warnings
- gotcha The `fabric-analytics-notebook-plugin` package appears to be an internal Microsoft Fabric component. Public documentation provides no user-level import path or quickstart guide for it; its likely role is to supply underlying analytics functionality within the Fabric notebook environment.
- breaking The `mssparkutils` package, a common utility in Fabric and Synapse notebooks, has been officially renamed to `notebookutils`. While existing `mssparkutils` code remains backward compatible, the `mssparkutils` namespace will be retired in the future. All new features are exclusively supported under `notebookutils`.
- gotcha Microsoft Fabric notebook sessions can time out due to inactivity or capacity limits, leading to lost work if not saved or if long-running operations are interrupted.
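The `mssparkutils` to `notebookutils` rename noted above can be handled defensively. A minimal sketch of a namespace-agnostic lookup (an assumption-laden illustration: both modules exist only inside a Fabric/Synapse notebook session, so outside a notebook this returns `None`):

```python
def get_notebook_utils():
    """Return the notebook utilities module, preferring the new namespace.

    Sketch only: 'notebookutils' and 'mssparkutils' are available solely
    inside Fabric/Synapse notebook sessions; elsewhere this returns None.
    """
    try:
        import notebookutils  # new namespace; all new features land here
        return notebookutils
    except ImportError:
        pass
    try:
        import mssparkutils  # legacy namespace, backward compatible for now
        return mssparkutils
    except ImportError:
        return None  # not running inside a notebook session

utils = get_notebook_utils()
if utils is None:
    print("Notebook utilities are only available inside a Fabric notebook session.")
```

Writing lookups this way lets the same cell run unchanged during the deprecation window, while new code paths stay pinned to `notebookutils`.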
Install
- pip install fabric-analytics-notebook-plugin
- %pip install fabric-analytics-notebook-plugin
Quickstart
# This package is an internal plugin for the Fabric notebook environment;
# users do not normally import 'fabric_analytics_notebook_plugin' directly.
# Instead, they interact with the Fabric environment and built-in utilities
# such as notebookutils.

# Typical analytics operations in a Fabric Python notebook.
# The 'spark' session is pre-initialized in Fabric notebooks; attach a
# Lakehouse to the notebook first, then replace 'YourTable' with a real name.
try:
    # Read a managed Delta table from the default Lakehouse (managed tables
    # live under 'Tables/'; raw files live under 'Files/').
    df = spark.read.format("delta").load("Tables/YourTable")
    print(f"DataFrame schema: {df.schema.simpleString()}")
    df.show(5)

    # Built-in Fabric notebook utilities (formerly mssparkutils)
    # from notebookutils import mssparkutils  # older syntax, still compatible for now
    from notebookutils import fs

    # List files in the default Lakehouse's 'Files' section
    print("\nListing files in 'Files/' directory:")
    for file_info in fs.ls("Files/"):
        print(f" - {file_info.name} (size: {file_info.size} bytes)")
except Exception as e:
    print(f"Error during quickstart execution: {e}")
    print("Make sure this runs in a Microsoft Fabric notebook with a Lakehouse")
    print("attached that contains a 'YourTable' Delta table and some files.")
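When a Lakehouse is not the notebook's default, files must be addressed by their full OneLake ABFSS URI rather than a relative `Files/` path. A small helper for composing such paths (a sketch: the workspace and lakehouse names are placeholders, and the URI follows OneLake's documented `abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<item>.Lakehouse/Files/...` layout):

```python
def onelake_files_path(workspace: str, lakehouse: str, relative_path: str = "") -> str:
    """Compose an ABFSS URI for a file under a Lakehouse's Files section.

    Placeholder names only; pass your real workspace and lakehouse names.
    """
    base = (
        f"abfss://{workspace}@onelake.dfs.fabric.microsoft.com/"
        f"{lakehouse}.Lakehouse/Files"
    )
    return f"{base}/{relative_path}".rstrip("/")

# Example with placeholder names:
print(onelake_files_path("MyWorkspace", "MyLakehouse", "raw/data.csv"))
```

The resulting URI can be passed directly to `spark.read` or `notebookutils` file operations in place of a relative path.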