BKO AI’s Common Model combines market and trading data with plant maintenance, inventory and operational data.

By Shaun Wright.

Brief description:

It’s important for every application used by an energy supplier to work from common asset and configuration definitions. This is critical where enterprise-wide simulations draw on the massive volumes of data stored in Databricks for enterprise-wide logistics and operational optimization.

BKO AI’s Common Model combines market and trading data with plant maintenance, inventory and operational data to produce a level of optimization far beyond simple plant maintenance and inventory management. The vast amounts of operational and financial data stored in Databricks and managed by Unity Catalog and BKO AI’s Common Model enable the synchronized exchange of the situational information required for enterprise-wide optimization, supporting better business and operational decision making.

Today, BKO AI is excited to be a launch partner for the Databricks Data Intelligence Platform for Energy:

This is Databricks’ first industry-specific data intelligence platform built on the lakehouse architecture for energy customers.

With Databricks’ Data Intelligence Platform for Energy, energy leaders gain a centralized data and AI platform that democratizes data access across the entire enterprise, delivering the full value of asset, operations, environmental, and customer data for a safer, more reliable, and smarter energy system.

BKO AI is developing its Common Model using the Databricks Data Intelligence Platform for Energy to deliver an open, flexible data platform infused with AI. This solution aims to provide one model, a single source of truth, by integrating real-time and financial applications across the enterprise. Integrating vast amounts of real-time data with financial systems to inform business decisions is a common challenge, particularly in continuous manufacturing industries. The Common Model’s ability to synchronize and maintain contextual information between applications leverages the value of your data to help you predict, plan and schedule for optimized productivity throughout the supply chain.

Building and maintaining the same reference configurations for all applications:

When applications produce analyses and predictions from different asset configurations, planning and scheduling become inaccurate, and confusion reigns until the resulting problems are resolved. In the worst case, wrong decisions based on faulty data can seriously de-optimize a business or operational plan.

The Common Model is just that: it provides and maintains a common set of asset attributes and connectivity across applications, and offers a means of editing this information in one place and having the change reflected to all applications immediately.

Routing all configuration data through a single node ensures that each application receives exactly the information it requires, and also provides a single point of access control.
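To make the single-node idea concrete, here is a minimal sketch of a configuration hub in Python. All names (`ConfigHub`, `subscribe`, `view_for`) are illustrative assumptions, not BKO AI's actual API: each application registers the attribute fields it needs, edits happen in one place, and the hub acts as the single point of access control.

```python
# Hypothetical sketch of a single configuration node: asset attributes are
# edited in one place, and each subscribing application sees only the
# fields it has registered for.
class ConfigHub:
    def __init__(self):
        self._assets = {}          # asset_id -> attribute dict
        self._subscriptions = {}   # app_name -> set of attribute names

    def subscribe(self, app, fields):
        self._subscriptions[app] = set(fields)

    def update_asset(self, asset_id, **attrs):
        # one edit point; every app's view reflects the change immediately
        self._assets.setdefault(asset_id, {}).update(attrs)

    def view_for(self, app, asset_id):
        # single point of access control: filter by the app's subscription
        allowed = self._subscriptions.get(app, set())
        asset = self._assets.get(asset_id, {})
        return {k: v for k, v in asset.items() if k in allowed}

hub = ConfigHub()
hub.subscribe("scheduler", ["capacity", "status"])
hub.subscribe("trading", ["capacity"])
hub.update_asset("heater-01", capacity=120, status="online", owner="ops")
print(hub.view_for("trading", "heater-01"))  # {'capacity': 120}
```

The trading app never sees the `owner` or `status` fields it did not subscribe to, while both apps read the same `capacity` value from the same edit.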

Synchronizing time information across the enterprise:

A core challenge is turning an optimized production plan, whether an LP output or a continuous plan, into a compliant and viable schedule. The Common Model helps synchronize past, present, and future simulations so that the plan can be turned into a discrete schedule using the best data available. This is particularly important where discrete operations use a continuous plan to formulate a daily schedule, as in most “flowing” use cases such as a refinery.
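The continuous-to-discrete step above can be sketched in a few lines. The plan format here (a piecewise-constant hourly rate profile) and the function name are assumptions for illustration, not the Common Model's actual schema:

```python
# Illustrative only: aggregate a continuous production-rate plan
# (tonnes/hour over consecutive hours) into discrete daily targets.
def daily_schedule(hourly_rates, hours_per_day=24):
    """Sum an hourly rate profile into per-day production totals."""
    days = []
    for start in range(0, len(hourly_rates), hours_per_day):
        chunk = hourly_rates[start:start + hours_per_day]
        days.append(sum(chunk))
    return days

# 48 hours of a flat 10 t/h continuous plan -> two daily targets of 240 t
plan = [10.0] * 48
print(daily_schedule(plan))  # [240.0, 240.0]
```

A real scheduler would also enforce compliance constraints (tank limits, blend specs, maintenance windows) on each daily bucket; the point here is only the discretization step.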

Supporting look-ahead trading and hedging:

Without a clear operational schedule describing the predicted state of the process weeks or even months out, it is difficult for a trader to close on a trade involving supply materials that will not reach the manufacturing process until then.
By tying together all operational real-time and business data and synchronizing it across time, the Common Model can provide a good estimate of that future state, and thus inform the trader whether the trade is viable.
Linking data through time and process, such as in any flowing operation, enables the business owner to make informed decisions across global operations.
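The viability check described above can be illustrated with a toy forward projection. Everything here, the mass-balance simplification, the function names, and the numbers, is a hypothetical stand-in for the Common Model's far richer future-state estimate:

```python
# Hedged sketch: roll a simple inventory mass balance forward from current
# stock, weekly consumption, and scheduled receipts to estimate the state
# of the process when a traded cargo would arrive.
def projected_inventory(current, weekly_consumption, receipts, weeks):
    level = current
    for week in range(weeks):
        level += receipts.get(week, 0.0) - weekly_consumption
    return level

def trade_is_viable(current, weekly_consumption, receipts, weeks, min_level):
    """A trade delivering in `weeks` weeks is viable if projected inventory
    stays above the minimum operating level at delivery time."""
    return projected_inventory(current, weekly_consumption, receipts, weeks) >= min_level

# 500 kt on hand, 60 kt/week consumption, one 100 kt cargo arriving in week 2;
# is a trade delivering in week 6 viable against a 100 kt minimum level?
print(trade_is_viable(500, 60, {2: 100}, 6, min_level=100))  # True
```

The projected level at week 6 is 500 - 6×60 + 100 = 240 kt, comfortably above the 100 kt floor, so this toy model would report the trade as viable.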

To learn more about the Common Model, please visit our Common Model Page.

Sustainability gains using machine learning to optimize fired heater power plant operations.

By BKO AI Engineer RANSAC.

TL;DR
Strategy Proposal:
Use ML to optimize fired heater operations.
Energy Consumption:
Fired heaters account for ~33% of refinery energy use.
ML vs. CFD Models:
ML adapts in real-time, considers more variables.
Variables for ML Modeling:
Sensor data, ambient conditions, air quality, air dampers, burner configurations, fuel compositions, etc.

In addition to real-time optimization, the ML models can:

  • Predict coking events.
  • Ensure NOx and SOx emissions compliance.
  • Forecast wear and potential failures.

Benefits:

Improve energy efficiency, reduce costs, support environmental compliance.

Brief description:

Fired heaters account for approximately 33% of our refinery's total energy consumption. This substantial share underscores the critical importance of optimizing their operation for energy efficiency and cost reduction. ML can dynamically adapt to changing conditions, offering real-time optimization versus the static nature of traditional Computational Fluid Dynamics (CFD) models. ML also incorporates a broader range of variables, including ambient conditions, air quality, burner configurations, and fuel compositions, providing a more holistic approach to optimization. Machine learning models are capable of learning from historical data, leading to continuously improving performance predictions and operational recommendations.

Utilize Databricks for Data Management and Analytics:

Centralized Data Hub: Leverage Databricks to aggregate, process, and analyze data from diverse sources, including sensor data from fired heaters, operational logs, ambient conditions, and fuel characteristics.

Advanced Analytics:
Use Databricks’ ML capabilities to develop, train, and refine predictive models for optimizing fired heater operations.

Real-time Data Processing:
Implement Databricks’ real-time analytics to dynamically adjust operational parameters, enhancing responsiveness to changing conditions.

Adopt Digital Twins for Simulation and Prediction:

Virtual Replication: Develop Digital Twins of the fired heaters to simulate their physical counterparts in a virtual environment. This allows for detailed analysis of how different variables affect performance without impacting actual operations.

Predictive Modeling:
Use Digital Twins to test the efficacy of different operational strategies under various scenarios, including changes in fuel composition, burner configurations, and environmental conditions.

Operational Optimization: Integrate Digital Twins with ML models developed in Databricks to predict outcomes like energy efficiency, emission levels, and the likelihood of coking events. This integration enables the identification of the most efficient operational settings.
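The twin-plus-ML search for the most efficient compliant setting can be sketched as follows. The surrogate functions and every coefficient below are made-up illustrations, not a calibrated heater model:

```python
# Toy digital-twin search: evaluate candidate excess-air settings against
# simple surrogate curves and pick the most efficient one that stays
# inside an emissions limit. All coefficients are illustrative.
def efficiency(excess_air):
    # efficiency peaks near a modest excess-air ratio and falls off either side
    return 0.93 - 0.5 * (excess_air - 1.1) ** 2

def nox(excess_air):
    # in this toy model, NOx (ppm) rises with excess air
    return 40 + 80 * (excess_air - 1.0)

def best_setting(candidates, nox_limit):
    # keep only settings that meet the emissions constraint,
    # then choose the one the twin predicts to be most efficient
    feasible = [a for a in candidates if nox(a) <= nox_limit]
    return max(feasible, key=efficiency)

candidates = [1.0, 1.05, 1.1, 1.15, 1.2, 1.3]
print(best_setting(candidates, nox_limit=50))  # 1.1
```

In practice the ML models trained in Databricks would replace these hand-written surrogates, and the candidate set would come from the twin's scenario runs, but the select-the-best-feasible-setting logic is the same.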

Maintenance and Downtime Reduction: Utilize Digital Twins for predictive maintenance, forecasting wear and potential failures before they occur, thereby reducing unscheduled downtime and extending equipment lifespan.
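The wear-forecasting idea reduces, in its simplest form, to extrapolating a measured degradation trend to a retirement threshold. The rates, thresholds, and function name below are illustrative placeholders, not real inspection criteria:

```python
# Sketch of predictive maintenance: extrapolate a measured tube-wall
# thinning trend to estimate remaining useful life before the wall
# reaches its retirement thickness.
def remaining_useful_life(thickness_mm, wear_rate_mm_per_year, retire_at_mm):
    if wear_rate_mm_per_year <= 0:
        return float("inf")   # no measurable wear trend
    return max(0.0, (thickness_mm - retire_at_mm) / wear_rate_mm_per_year)

# 8.0 mm wall, thinning 0.4 mm/year, retire at 6.0 mm
print(remaining_useful_life(8.0, 0.4, retire_at_mm=6.0))  # 5.0 (years)
```

A Digital Twin would refine this with operating-condition-dependent wear rates learned from historical inspections, so maintenance can be scheduled ahead of the forecast failure rather than after an unscheduled shutdown.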

Integration for Enhanced Decision-Making:

Feedback Loop: Establish a continuous feedback loop between the Digital Twins and Databricks’ ML models to constantly update and refine predictions based on new data and outcomes. This ensures that the optimization strategies evolve with changing operational realities.

Dashboard and Reporting: Implement a comprehensive dashboard within Databricks to visualize key performance indicators (KPIs), operational metrics, and predictions from Digital Twins. This aids in making informed decisions quickly.