It’s Time to Take a Modern Approach to Data Conversion

ERP migrations are among the most stressful activities an enterprise can take on. Implementing a new ERP can unlock significant benefits for the business through more modern functionality, automation, AI, and the flexibility to adapt to changing economic conditions. With great opportunity, of course, comes great risk. One of the biggest concerns is data conversion and the associated risk of data loss, since systems may not be properly backed up during an ERP migration. Another concern is security: the migration can leave vulnerabilities in a system, opening the door to breaches, data leaks, and cyber-attacks. Integrating the new ERP with existing software is also complex and can lead to integration issues or even system crashes. In short, the complexity of the migration itself is a source of risk.

A recent Gartner blog (https://blogs.gartner.com/tonnie-van-der-horst/erp-is-dead-the-sequel/) cites ERP projects as requiring “investment levels of € 150 million” or more and a “6-10 year planning horizon” for multi-billion global businesses. When the stakes are this high, risk mitigation is a key component of the program, and one of the biggest risks in an ERP migration that can be actively managed toward simplicity is data conversion. Massive amounts of time are spent converting data for isolated, niche use cases that do not really drive value for a system transformation program. Oracle’s A-Team blog (https://www.ateam-oracle.com/post/options-for-migrating-clean-data-to-fusion-applications-cloud) found that “75% of cloud implementation delays are due to data issues that are identified during the later stage in the project … This leads to increased cost, timeline, go-live delays and challenges with resource planning”.

One great example is invoices. During an ERP migration, years of historical invoices are typically exported, transformed, imported, marked as paid, and offset with reversal entries to balance the books. All of this is prepared by the project team (thousands of hours of work), and it becomes a bottleneck to go-live because all of it has to be executed on the target platform during the migration. And what are these invoices actually used for? Not reporting, which happens in an EPM system, and historical invoices do not feed fiscal-year reporting (where they do matter, the same result could be achieved with a few journal entries). Not vendor complaints, which can be handled from a data warehouse or reporting solution. The only truly important reason is to let duplicate invoice check processes run. This kind of single-purpose, non-operational use of the ERP also runs completely against simplification principles and the “digitalization of finance” outlined by Gartner (https://www.gartner.com/en/finance/trends/prepare-finance-for-digitalization), where the focus should be on “value delivery” and “impactful automation”.

Data Indicators’ approach: load all of those old invoices into our AP duplicate invoice checker instead. When new invoices arrive in your ERP, integrate them into this separate duplicate checker; if our algorithm (which includes fuzzy matching and is superior to out-of-the-box ERP solutions) finds a potential match, it can automatically block the invoice and/or route it to a worklist for an AP clerk to review.
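As a rough illustration of the fuzzy-matching idea, here is a minimal sketch in plain Python using only the standard library. The Invoice structure, thresholds, and matching rules below are illustrative assumptions, not the production algorithm, which weighs more signals than this.

```python
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class Invoice:
    vendor: str
    number: str
    amount: float


def is_potential_duplicate(new_inv: Invoice, historical: list[Invoice],
                           number_threshold: float = 0.85,
                           amount_tolerance: float = 0.01) -> bool:
    """Flag a new invoice when a historical invoice from the same vendor has a
    near-identical invoice number and an amount within a small tolerance."""
    for old in historical:
        if old.vendor.strip().lower() != new_inv.vendor.strip().lower():
            continue
        number_similarity = SequenceMatcher(
            None, old.number.lower(), new_inv.number.lower()).ratio()
        amount_close = abs(old.amount - new_inv.amount) <= amount_tolerance * max(old.amount, 1.0)
        if number_similarity >= number_threshold and amount_close:
            return True  # block the invoice or route it to an AP clerk's worklist
    return False


# "INV10234" is caught as a likely duplicate of "INV-10234" from the same vendor,
# even though an exact-match check would let it through.
history = [Invoice("Acme Corp", "INV-10234", 1250.00)]
print(is_potential_duplicate(Invoice("Acme Corp", "INV10234", 1250.00), history))  # True
```

Because the check lives outside the ERP, the historical invoices never need to be loaded, marked as paid, and reversed on the new platform at all.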

This strategy aligns closely with the modern principle of an “operational ERP”: a system focused on running the day-to-day of the business, not on every potential edge and exception case, which are better handled with a more modular design and technology set to maximize competitive advantage through flexibility. Data Indicators’ strategy is to bring the best expert resources and modern solutions to our customers’ Oracle ERP migration and upgrade projects, in order to maximize the success rate, minimize risk, and improve business flexibility for our customers.


Major Healthcare Supply Chain Management Company Reduces Cloud Storage Cost and Optimizes Spend Analytics

Multi-Cloud Optimization and Analytics Use Case

Optimized cloud spend analytics, resulting in a significant reduction in expenses, automated cost-center chargebacks, and granular insights for IT leaders

Project Scope

Data Indicators was engaged to help the client reduce cloud storage cost and optimize cloud spend analytics. The development project’s goal was to understand the costs of GCP and AWS, report accurate cost-center chargebacks, understand GKE namespace usage and costs, and create combined cloud spend views for IT leaders. The team also aimed to create optimization/tagging alerts to help the client manage their costs effectively.
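As an illustration of the chargeback portion of this scope, per-cost-center spend can be attributed with a query over the standard GCP billing export in BigQuery. The project, dataset, and label key below are hypothetical stand-ins, not the client's actual names.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical project/dataset and label key; the real billing export table
# and the label used for cost centers are specific to the client's setup.
CHARGEBACK_SQL = """
SELECT
  (SELECT value FROM UNNEST(project.labels) WHERE key = 'cost_center') AS cost_center,
  service.description AS service,
  ROUND(SUM(cost), 2) AS total_cost
FROM `example-project.billing.gcp_billing_export_v1`
WHERE invoice.month = @invoice_month
GROUP BY cost_center, service
ORDER BY total_cost DESC
"""

job = client.query(
    CHARGEBACK_SQL,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("invoice_month", "STRING", "202212"),
        ]
    ),
)

for row in job.result():
    print(row.cost_center, row.service, row.total_cost)
```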

Results delivered by Data Indicators

  • The Client was able to establish an optimized cloud spend analytics data platform with actionable insights across IT business partners.
  • Finance was able to charge each cost center the appropriate amount without the help of a full-time data analyst, as all the logic was automated in LookML.
  • IT leaders could analyze their spend down to the project, environment, resource, SKU, or day to identify opportunities and make changes.
  • The Client’s IT team also retained the ability to change the data model as the business changes.
  • By retiring CloudAbility, the Client eliminated a growing expense equal to 0.5% of their annual cloud spend.

Technology stack used for the project

Google BigQuery, Google Cloud SQL for PostgreSQL and Microsoft SQL Server, Google Virtual Machines, Google Cloud Storage, and Apache Airflow. To support data cataloging across the data mesh, we implemented Secoda, and for data governance we chose Immuta. In addition, we used Docker where appropriate and Terraform for infrastructure management.
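For orchestration, a daily refresh of an aggregated spend table can be scheduled with an Airflow DAG along the following lines; as before, the project, dataset, and table names are hypothetical.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

# Hypothetical table names; the real billing export and reporting tables differ.
REFRESH_SQL = """
CREATE OR REPLACE TABLE `example-project.analytics.daily_cloud_spend` AS
SELECT
  DATE(usage_start_time) AS usage_date,
  project.id AS project_id,
  service.description AS service,
  sku.description AS sku,
  SUM(cost) AS cost
FROM `example-project.billing.gcp_billing_export_v1`
WHERE DATE(usage_start_time) = DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
GROUP BY 1, 2, 3, 4
"""

with DAG(
    dag_id="daily_cloud_spend_refresh",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    refresh_spend = BigQueryInsertJobOperator(
        task_id="refresh_daily_cloud_spend",
        configuration={"query": {"query": REFRESH_SQL, "useLegacySql": False}},
    )
```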

Enabling data-driven decision making at a major healthcare company

Enabling data-driven decision-making Use Case

Standardized tooling and centralized data management support fast onboarding and regulatory compliance for data teams

Summary

A large healthcare client needed a solution to help them make data-driven decisions, and a project was created to deliver it. The solution involved collecting and curating large amounts of data and providing the Client with critical insights to inform decision-making. With the help of the Data Indicators team, a scalable, secure, and easily accessible solution was developed and deployed using GCP hosting and tools.

Solution

Data Indicators developed and delivered a core suite of processes and tooling that allowed the Client’s teams to retain ownership of and responsibility for their product data while providing governance and data cataloging capabilities. This enabled the Client to ensure that data usage agreements and regulatory requirements were adhered to. The solution delivered a 400% faster time-to-onboarding for data teams and provided a standardized suite of tooling for ETL and data quality, a centralized data catalog, and centralized data access governance, including rights management for data usage agreements and support for regulatory compliance and auditability.
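To make the centralized access-governance idea concrete, here is a minimal sketch, using the BigQuery Python client from the project's stack, of granting a newly onboarded data team read access to a curated dataset from one central place. The dataset and group names are hypothetical, and the Client's actual rights-management workflow is not shown.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical dataset and group; real names are specific to the Client.
dataset = client.get_dataset("example-project.curated_member_data")

# Grant read-only access centrally instead of managing grants table by table
# inside each product team's project.
entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="READER",
        entity_type="groupByEmail",
        entity_id="claims-analytics-team@example.com",
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])  # apply the updated access list
```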

Key benefits

  • Faster time-to-onboarding for data teams.
  • Standardized suite of tooling for ETL and data quality.
  • Centralized data catalog.
  • Centralized data access governance.
  • Supports regulatory compliance and auditability.

Technology stack used for the project

Google BigQuery, Google Cloud SQL, Google Cloud Storage, Google App Engine, Java, Spring Boot, TypeScript, RESTful APIs, Python, and Google Apigee.

Architectural Guidance Use Case

Comprehensive architectural guidance on foundational, structural, semantic, and organizational levels of interoperability, covering interconnectivity, data format and models, governance, and best practices.

Summary

The Client needed a partner to provide architectural guidance on the four levels of interoperability:

  • Foundational: interconnectivity requirements between systems.
  • Structural: the format, syntax, and organization of data.
  • Semantic: underlying data models and the use of data elements.
  • Organizational: governance, legal policies, standards, and best practices.

Project Scope: Data Indicators was tasked with the following deliverables

  • Provide an overview of the current state and architecture.
  • Conduct discovery workshops with the Client’s business and IT units to understand their short-term and long-term needs.
  • Prepare a gap analysis of the current versus future state based on the Client’s strategic outlook.
  • Advise on the best architectural practices and technology recommendations.
  • Document best practices and common languages in the application integration and interoperability architecture space.
  • Furnish a high-level depiction of the future state.
  • Assist in estimating the level of effort required to execute the architectural recommendations for the Client.
  • Support the final-state interoperability architecture, including API management, event-based architecture (IoT and other events), and HL7 events.

In-scope services delivered by Data Indicators

  • Worked with the Client team to build on GCP’s Apigee (API) pipeline.
  • Developed modernized API services using GCP’s components and libraries.
  • Conducted unit and integration testing of the developed code and functionality.
  • Deployed to various environments and provided guidance on production environments.
  • Provided training and handoff support.

Technology stack used for the project

Java, Spring Boot, TypeScript, RESTful APIs, Python, MongoDB, and Apigee.

MarTech Infrastructure, Real-Time Engagement, and Enhanced Data Lake Maintenance Use Case

Unlocking the company’s potential with a centralized customer profile store, real-time engagement, data segmentation, governance, and event-based data streaming pipelines.

Summary

The client was seeking a reliable partner to help them establish an integrated MarTech infrastructure that could support the creation of a centralized API-driven customer profile store, real-time customer engagement, content personalization, segmentation, and data science. They also required support in enhancing and maintaining their Hadoop-based data lake, implementing event-based data streaming pipelines, and ensuring data governance, privacy, and regulatory compliance. The client was looking for a trusted advisor to guide them through this process and help them achieve their marketing goals.
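To make the event-based streaming portion concrete, here is a minimal Spark Structured Streaming sketch in Python that lands customer engagement events in S3 as Parquet files, which a Snowpipe definition can then auto-ingest into the profile store's staging tables. The event schema, bucket names, and paths are illustrative assumptions rather than the client's actual pipeline.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("customer-event-stream").getOrCreate()

# Hypothetical event schema and bucket names for illustration only.
event_schema = StructType([
    StructField("customer_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
    StructField("payload", StringType()),
])

# Read newly arriving JSON event files from the landing bucket.
events = (
    spark.readStream
    .format("json")
    .schema(event_schema)
    .load("s3a://example-martech-landing/events/")
)

# Write micro-batches as partitioned Parquet that Snowpipe can auto-ingest.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://example-martech-curated/customer_events/")
    .option("checkpointLocation", "s3a://example-martech-curated/_checkpoints/customer_events/")
    .partitionBy("event_type")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```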

Project Scope

Data Indicators was tasked with the following deliverables:

  • Supporting enhancements to the current state and architecture.
  • Conducting discovery workshops with the Client’s business/marketing and IT units to understand their short-term and long-term needs.
  • Preparing a gap analysis of the current versus future state based on the Client’s strategic objectives.
  • Advising on the best architectural practices, methodology, and technology recommendations.
  • Documenting best practices and common languages in the application integration and interoperability architecture space.
  • Furnishing a high-level depiction of the future state.
  • Assisting in estimating the level of effort required to execute the architectural recommendations for the Client.
  • Providing data and software engineering within teams across the organization.

In-scope services delivered by Data Indicators

  • Developed data ingestion pipelines, streaming pipelines, and modernized API services.
  • Conducted unit and integration testing of the developed code and functionality.
  • Deployed to various environments and provided guidance on production environments.
  • Provided training and handoff support.

Technology stack used for the project

Apache Spark (Python and Scala), Apache NiFi for real-time orchestration and transformation, Apache Airflow for batch orchestration and transformation, Snowflake Data Warehouse, Snowflake Snowpipe for real-time ingestion, Snowflake Snowpark, AWS S3, AWS EC2, Docker, Kubernetes, and Adobe Experience Cloud.