Working with Apache Kafka
Get hands-on experience architecting, programming, streaming, monitoring, and tuning real-time data pipelines with Apache Kafka.
Apache Kafka is the industry-leading platform for real-time data pipeline processing and a key solution to the challenges of transporting big data at scale. Its high scalability, fault tolerance, execution speed, and broad integrations make it an integral part of many enterprise data architectures.
This hands-on Apache Kafka training workshop gets you up and running so you can immediately take advantage of the low latency, massive parallelism, and exciting use cases Kafka makes possible. Led by an enterprise engineering expert, you’ll get live instruction and coaching on how to be effective when using Kafka in your work or project.
This “skills-centric” course is about 50% hands-on lab and 50% lecture, coupling the most current techniques with the soundest industry practices. Throughout the course, you will be led through a series of progressively advanced topics, where each topic consists of lectures, group discussion, comprehensive hands-on lab exercises, and lab review.
Available formats for this course
Duration: 2 days / 14 hours of instruction
Public Classroom Pricing
Starting at: $1,895 (USD)
GSA Price: $1,420
Group Rate: $1,795
Part 1: Introduction to Streaming Systems
- Fast data
- Streaming architecture
- Lambda architecture
- Message queues
- Streaming processors
Part 2: Introduction to Kafka
- Comparing Kafka with other queue systems (JMS / MQ)
- Kafka concepts: messages, topics, partitions, brokers, producers, commit logs
- Kafka & ZooKeeper
- Producing messages
- Consuming messages (Consumers, Consumer Groups)
- Message retention
- Scaling Kafka
- Labs: Getting Kafka up and running; Using Kafka utilities
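The storage model above (topics split into partitions, each an append-only log of offset-addressed records) can be sketched in plain Java, without a broker. This is a simplification for intuition only: Kafka's real default partitioner hashes keys with murmur2, not `hashCode`, and real logs live on disk.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of Kafka's storage model: a topic is a set of partitions,
// each partition is an append-only log, and every record gets a
// monotonically increasing offset within its partition.
class TopicSketch {
    private final List<List<String>> partitions = new ArrayList<>();

    TopicSketch(int numPartitions) {
        for (int i = 0; i < numPartitions; i++) {
            partitions.add(new ArrayList<>());
        }
    }

    // Records with the same key always land in the same partition,
    // which is what gives Kafka its per-key ordering guarantee.
    // (The real default partitioner uses murmur2, not hashCode.)
    int partitionFor(String key) {
        return Math.floorMod(key.hashCode(), partitions.size());
    }

    // Append a record; the returned value is its offset.
    long send(String key, String value) {
        List<String> log = partitions.get(partitionFor(key));
        log.add(value);
        return log.size() - 1;
    }

    // A consumer reads a partition sequentially by offset.
    String read(int partition, long offset) {
        return partitions.get(partition).get((int) offset);
    }

    public static void main(String[] args) {
        TopicSketch topic = new TopicSketch(3);
        long offset = topic.send("user-42", "page-view");
        int p = topic.partitionFor("user-42");
        System.out.println("partition " + p + ", offset " + offset
                + ": " + topic.read(p, offset));
    }
}
```

Because offsets are per-partition and records with one key stay in one partition, Kafka can guarantee ordering per key while scaling writes across partitions.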
Part 3: Programming with Kafka
- Configuration parameters
- Producer API (Sending messages to Kafka)
- Consumer API (consuming messages from Kafka)
- Commits, Offsets, Seeking
- Schema with Avro
- Lab: Writing Kafka clients in Java; Benchmarking Producer APIs
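Commit timing determines delivery semantics: committing offsets after processing gives at-least-once delivery (a crash before the commit replays records), while committing before processing gives at-most-once. Below is a minimal stdlib sketch of that behavior, not the real Consumer API:

```java
import java.util.List;

// Sketch of consumer offset semantics. A consumer resumes from its
// last committed offset, so WHEN you commit decides the delivery
// guarantee: commit after processing => at-least-once; commit
// before processing => at-most-once.
class OffsetSketch {
    private final List<String> log;   // one partition's records
    private long committed = 0;       // last committed position

    OffsetSketch(List<String> log) { this.log = log; }

    // Return up to max records starting at the committed offset.
    List<String> poll(int max) {
        int from = (int) committed;
        int to = Math.min(log.size(), from + max);
        return log.subList(from, to);
    }

    void commit(long offset) { committed = offset; }

    public static void main(String[] args) {
        OffsetSketch consumer = new OffsetSketch(List.of("a", "b", "c", "d"));
        System.out.println(consumer.poll(2));  // [a, b]
        // No commit yet: a restarted consumer would see [a, b] again.
        consumer.commit(2);
        System.out.println(consumer.poll(2));  // [c, d]
    }
}
```

The same reasoning explains why Kafka consumers must be idempotent (or use transactions) when exactly-once behavior matters.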
Part 4: Kafka Streams
- Streams overview and architecture
- Streams use cases and comparison with other platforms
- Kafka Streams concepts (KStream, KTable, state stores)
- Kafka Streams operations (transformations, filters, joins, aggregations)
- Labs: Working with Kafka Streams
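The core idea behind KStream and KTable is the stream/table duality: a stream is every event in order, while a table is the ever-updating latest value per key, and replaying the stream rebuilds the table. This stdlib sketch illustrates the idea only; it is not the Kafka Streams API.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the stream/table duality behind KStream and KTable:
// a KStream is every event; a KTable is the latest value per key.
class StreamTableSketch {
    record Event(String key, String value) {}

    // KTable-like view: last value wins per key.
    static Map<String, String> toTable(List<Event> stream) {
        Map<String, String> table = new HashMap<>();
        for (Event e : stream) table.put(e.key(), e.value());
        return table;
    }

    // A simple aggregation, in the spirit of groupByKey().count().
    static Map<String, Long> countByKey(List<Event> stream) {
        Map<String, Long> counts = new HashMap<>();
        for (Event e : stream) counts.merge(e.key(), 1L, Long::sum);
        return counts;
    }

    public static void main(String[] args) {
        List<Event> stream = List.of(
                new Event("sensor-1", "20C"),
                new Event("sensor-2", "18C"),
                new Event("sensor-1", "21C"));
        System.out.println(toTable(stream));    // latest per key
        System.out.println(countByKey(stream)); // events per key
    }
}
```

In real Kafka Streams, the "table" lives in a local state store backed by a changelog topic, which is what makes aggregations fault tolerant.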
Part 5: Administering Kafka
- Hardware / Software requirements
- Deploying Kafka
- Configuration of brokers / topics / partitions / producers / consumers
- Security: how to secure a Kafka cluster and client communications (SASL, Kerberos)
- Monitoring: monitoring tools
- Capacity Planning: estimating usage and demand
- Troubleshooting: failure scenarios and recovery
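Capacity planning usually starts with back-of-envelope arithmetic: sustained write rate × message size × replication factor, projected over the retention window. The numbers in this sketch are illustrative assumptions, not sizing recommendations:

```java
// Back-of-envelope capacity estimate for a Kafka cluster. Every
// input number here is an illustrative assumption, not a sizing
// recommendation for any real workload.
class CapacitySketch {
    // Bytes written across the cluster per day:
    // messages/sec * avg message size * replication factor * 86,400 s.
    static long dailyWriteBytes(long msgsPerSec, long avgMsgBytes, int replicationFactor) {
        return msgsPerSec * avgMsgBytes * replicationFactor * 86_400L;
    }

    // Disk needed to honor a retention window, before compression
    // and before any headroom for growth.
    static long retentionBytes(long dailyBytes, int retentionDays) {
        return dailyBytes * retentionDays;
    }

    public static void main(String[] args) {
        // Assumed workload: 10,000 msgs/s of 1 KB, replication factor 3.
        long daily = dailyWriteBytes(10_000, 1_024, 3);
        System.out.println("Writes per day: ~" + daily / 1_000_000_000L + " GB");
        System.out.println("7-day retention: ~" + retentionBytes(daily, 7) / 1_000_000_000_000L + " TB");
    }
}
```

Real estimates should also budget for network (replication traffic plus consumer fan-out) and leave headroom for peak load and partition rebalances.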
Part 6: Monitoring and Instrumenting Kafka
- Monitoring Kafka
- Instrumenting with Metrics library
- Labs: Monitor a Kafka cluster; instrument Kafka applications and monitor their performance
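Kafka brokers publish their metrics as JMX MBeans (for example `kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec`, whose `OneMinuteRate` attribute gives recent throughput). Since no broker runs here, this sketch queries the local JVM's own MBean server with the same `javax.management` API a monitoring agent would use:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Monitoring agents read Kafka metrics over JMX; brokers register
// MBeans such as kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec.
// No broker runs here, so this sketch queries the local JVM's own
// MBean server using the same javax.management API.
class JmxSketch {
    // Returns the attribute value, or null if the lookup fails.
    static Object readAttribute(String objectName, String attribute) {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            return server.getAttribute(new ObjectName(objectName), attribute);
        } catch (Exception e) {
            return null;
        }
    }

    public static void main(String[] args) {
        // Against a broker you would open a remote JMX connection and
        // query the kafka.* MBeans instead of this JVM-level one.
        System.out.println("Heap: " + readAttribute("java.lang:type=Memory", "HeapMemoryUsage"));
    }
}
```

Tools like JConsole, Prometheus JMX Exporter, and Grafana dashboards build on exactly this mechanism.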
Part 7: Case Study / Workshop (Time-Permitting)
- Students will build an end-to-end application simulating web traffic and send metrics to Grafana.
Participants in this workshop should have a working knowledge of at least one programming language (preferably Python, Java, or Scala), be able to work from the command line in a Linux VM or container, and have basic familiarity with a Linux editor (such as vi or nano) for editing code.
Professionals who may benefit include:
- Java developers seeking to become proficient in Apache Kafka
- Developers who are comfortable with Java and have reasonable experience working with databases
- Data Scientists
- Software Engineers
In this course, you will learn how to:
- Get Kafka up and running
- Produce and consume messages
- Write Kafka clients in Java
- Program using the Kafka API
- Build a data streaming pipeline using Kafka Streams
- Monitor Kafka performance metrics
- Tune Kafka for optimal performance
- Troubleshoot common Kafka issues
- Administer and deploy Kafka