FABRIC SDK Notebook Analytics Plugin

0.0.3.post4 · active · verified Sat Apr 11

This package is identified as a plugin for the FABRIC SDK, designed for use within Microsoft Fabric's online Spark/Python Notebook environments and Spark Job Definitions (SJDs). It appears to be an internal component that wires analytics capabilities into the broader Fabric platform, rather than a library intended for direct user-level import. Microsoft Fabric is an end-to-end data analytics platform offering data engineering, data science, data warehousing, and real-time analytics. The current version is 0.0.3.post4.
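
Because the plugin is provisioned by the Fabric runtime rather than imported directly, about the most a user-level script can usefully do is check whether the distribution is present. A minimal sketch using only the standard library (the distribution name is assumed from the quickstart below; `plugin_installed` is a hypothetical helper, not part of the package):

```python
from importlib import metadata

# Distribution name as shown on this page; as an internal component it is
# normally present only inside Fabric notebook/SJD runtimes.
PLUGIN_NAME = "fabric-analytics-notebook-plugin"

def plugin_installed(name: str) -> bool:
    """Return True if the named distribution is installed in this environment."""
    try:
        metadata.version(name)
        return True
    except metadata.PackageNotFoundError:
        return False

# Outside a Fabric runtime this will usually print False.
print(plugin_installed(PLUGIN_NAME))
```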

Warnings

Install
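
The plugin ships preinstalled in Fabric runtimes, so manual installation is normally unnecessary. If you do need it in a custom environment, the conventional pip command (assuming the distribution name used in the quickstart, pinned to the version shown above) would be:

```shell
pip install fabric-analytics-notebook-plugin==0.0.3.post4
```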

Quickstart

This quickstart demonstrates common patterns for interacting with data and built-in utilities within a Microsoft Fabric Python Notebook environment, as the `fabric-analytics-notebook-plugin` itself is not designed for direct user-level imports. It shows how to read data using PySpark and how to use the `notebookutils` (formerly `mssparkutils`) package to interact with the file system, which are core operations in Fabric analytics workflows.

# This package is an internal plugin for the Fabric Notebook environment.
# Direct import of 'fabric_analytics_notebook_plugin' is not typically done by users.
# Instead, users interact with the Fabric environment and built-in utilities like notebookutils.

# Example of a typical analytics operation in a Fabric Python Notebook
# Read data from a Lakehouse table (replace 'YourTable' with an actual table name)
# Assume the 'spark' session object is pre-initialized in Fabric notebooks
# You might need to attach a Lakehouse to your notebook first.

# Example: Reading a Delta table from the default Lakehouse
# Managed Delta tables live under 'Tables/'; 'Files/' holds unstructured files
try:
    df = spark.read.format("delta").load("Tables/YourTable")
    print(f"DataFrame schema: {df.schema.simpleString()}")
    df.show(5)
    
    # Example using built-in Fabric notebook utilities (formerly mssparkutils)
    # from notebookutils import mssparkutils  # older syntax, still compatible for now
    import notebookutils

    # List files in the default Lakehouse's 'Files' section
    print("\nListing files in 'Files/' directory:")
    files = notebookutils.fs.ls("Files/")
    for file_info in files:
        print(f"  - {file_info.name} (size: {file_info.size} bytes)")

except Exception as e:
    print(f"Error during quickstart execution: {e}")
    print("Please ensure you are running this in a Microsoft Fabric Notebook")
    print("and have a Lakehouse attached with a 'YourTable' Delta table and some files.")
