PySpark – Python Spark Hadoop coding framework & testing

A Big Data PySpark coding framework covering logging, error handling, unit testing, PyCharm development, and a PostgreSQL/Hive data pipeline.

This course will bridge the gap between your academic and real-world knowledge and prepare you for an entry-level Big Data Python Spark developer role.

What you’ll learn

  • Industry-standard PySpark coding practices: logging, error handling, reading configuration, and unit testing.
  • Building a data pipeline with Hive, Spark, and PostgreSQL.
  • Python Spark Hadoop development using PyCharm.

Course Content

  • Introduction –> 2 lectures • 4min.
  • Setting up Hadoop Spark development environment –> 9 lectures • 25min.
  • Creating a PySpark coding framework –> 5 lectures • 28min.
  • Logging and Error Handling –> 4 lectures • 25min.
  • Creating a Data Pipeline with Hadoop Spark and PostgreSQL –> 6 lectures • 25min.
  • Reading configuration from a properties file –> 2 lectures • 5min.
  • Unit testing PySpark application –> 3 lectures • 9min.
  • spark-submit –> 2 lectures • 3min.
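The spark-submit lectures cover launching a packaged PySpark application from the command line. A hypothetical invocation might look like the following; the file names and options are illustrative, not taken from the course itself:

```shell
# Hypothetical invocation (illustrative file names):
#   --master "local[*]"  runs Spark locally using all available cores
#   --jars               adds the Postgres JDBC driver to the classpath
#   --py-files           ships zipped application modules to the executors
spark-submit \
  --master "local[*]" \
  --jars postgresql-42.7.3.jar \
  --py-files src.zip \
  pipeline_runner.py
```

This is a command-line fragment, not a runnable script; the driver jar version and entry-point script name depend on your own project layout.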


Requirements

  • Basic programming skills.
  • Basic database skills.
  • Entry-level Hadoop knowledge.

In this course you will learn:

  • Python Spark coding best practices
  • Logging
  • Error Handling
  • Reading configuration from a properties file
  • Doing development work using PyCharm
  • Using your local environment as a Hadoop Hive environment
  • Reading and writing to a Postgres database using Spark
  • Unit testing with the Python unit testing framework
  • Building a data pipeline using Hadoop, Spark, and Postgres
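The logging and error-handling items above can be sketched with Python's standard `logging` module. This is a minimal illustration of the pattern, not the course's own framework; the logger name and step function are made up:

```python
import logging
import sys

def get_logger(name: str) -> logging.Logger:
    """Create a console logger with a standard format (illustrative helper)."""
    logger = logging.getLogger(name)
    if not logger.handlers:
        handler = logging.StreamHandler(sys.stdout)
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(name)s - %(message)s")
        )
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger

def run_step(logger: logging.Logger) -> str:
    """Run one pipeline step, logging failures instead of losing them silently."""
    try:
        logger.info("Starting ingestion step")
        # ... Spark transformations would go here ...
        return "ok"
    except Exception:
        logger.exception("Ingestion step failed")
        raise

logger = get_logger("pipeline.ingest")
status = run_step(logger)
```

Wrapping each step in try/except with `logger.exception` records the full traceback before re-raising, so a failed spark-submit run leaves a usable log behind.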
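For reading configuration from a properties file: Java-style `.properties` files are plain key=value pairs, while Python's `configparser` expects section headers, so one common workaround is to prepend a dummy section before parsing. The keys below are illustrative, not from the course:

```python
import configparser
import io

# Illustrative .properties content (keys are made up for this sketch).
PROPS = """\
# connection settings
db.host=localhost
db.port=5432
db.name=warehouse
"""

def load_properties(text: str) -> dict:
    """Parse a sectionless .properties string by injecting a dummy [default] section."""
    parser = configparser.ConfigParser()
    parser.read_file(io.StringIO("[default]\n" + text))
    return dict(parser["default"])

config = load_properties(PROPS)
```

In a real project the string would come from a file on disk, and credentials would live outside version control.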
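Reading and writing Postgres from Spark goes through Spark's JDBC data source. This is a hedged sketch: the JDBC URL format is standard, but the host, database, and table names are invented, and the `pyspark` import is kept inside the function so the URL helper is usable without Spark installed:

```python
def jdbc_url(host: str, port: int, database: str) -> str:
    """Build a Postgres JDBC URL in the form Spark's JDBC source expects."""
    return f"jdbc:postgresql://{host}:{port}/{database}"

def copy_table(host: str, port: int, database: str, user: str, password: str) -> None:
    """Read one table and write another via Spark's JDBC source (illustrative names)."""
    # Requires pyspark and the Postgres JDBC driver on the classpath (e.g. via --jars).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("postgres-pipeline").getOrCreate()
    opts = {
        "url": jdbc_url(host, port, database),
        "user": user,
        "password": password,
        "driver": "org.postgresql.Driver",
    }
    # Read the source table into a DataFrame.
    df = spark.read.format("jdbc").options(dbtable="public.orders", **opts).load()
    # Write it back to a staging table, replacing any existing data.
    df.write.format("jdbc").options(dbtable="public.orders_stage", **opts) \
        .mode("overwrite").save()
```

Keeping the URL construction in its own function makes that piece unit-testable without a running cluster or database.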
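The unit-testing item can be sketched without a live cluster by keeping transformation logic in plain functions and testing those with the standard `unittest` framework. The transformation and test names below are illustrative, not the course's own:

```python
import unittest

def normalize_country(value: str) -> str:
    """Example transformation: trim whitespace and upper-case a country code."""
    return value.strip().upper()

class NormalizeCountryTest(unittest.TestCase):
    def test_strips_and_uppercases(self):
        self.assertEqual(normalize_country("  us "), "US")

    def test_already_clean_value_unchanged(self):
        self.assertEqual(normalize_country("DE"), "DE")

# Run the tests programmatically; a standalone test file would
# call unittest.main() instead.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(NormalizeCountryTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Pure functions like this can also be applied inside Spark jobs (for example via UDFs or column expressions), so the same tests cover the logic used in the pipeline.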
