Utilizing the power of Hadoop, Apache Spark, and machine learning, we analyze news content to determine underlying sentiment.
This README walks you through the entire process, from setting up data streaming with ZooKeeper and Kafka to using Spark for data processing and machine learning for sentiment classification.
- Prerequisites: Python packages (`kafka-python`, `hdfs`, `newsapi-python`) installed
- Requirements: `requirements.txt`
- Producer: `news_producer.py`
- Consumer: `kafka_consumer_to_hdfs.py`
- Configuration: `config.json`
```json
{
  "newsapi": {
    "key": "your_newsapi_key",
    "api_page_size": 100,
    "source": "bbc-news,cnn,fox-news,nbc-news,the-guardian-uk,the-new-york-times,the-washington-post,usa-today,independent,daily-mail"
  },
  "kafka": {
    "bootstrap_servers": "localhost:9092",
    "topic": "news-topic"
  },
  "hdfs": {
    "url": "http://namenode:9870",
    "path": "/user/spark",
    "file_name": "news_data_articles.txt"
  }
}
```
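The producer and consumer scripts can read these settings at startup. A minimal sketch of that pattern (the `load_config` helper name is illustrative, not taken from the repository):

```python
import json

def load_config(path="config.json"):
    """Load the pipeline settings (NewsAPI, Kafka, HDFS) from a JSON file."""
    with open(path) as f:
        return json.load(f)

config = load_config()
print(config["kafka"]["bootstrap_servers"])  # e.g. localhost:9092
```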
Install the required Python packages:

```bash
pip install -r requirements.txt
```
Set up your News API key:
Generate or retrieve your API key from https://newsapi.org/account and place it in the `newsapi.key` field of `config.json`.
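To sanity-check the key before wiring up the full pipeline, you can list the available sources with `newsapi-python`; the key value below is a placeholder:

```python
from newsapi import NewsApiClient

# Placeholder key for illustration; use the value from config.json.
newsapi = NewsApiClient(api_key="your_newsapi_key")

# Listing sources is a cheap way to confirm the key is valid.
sources = newsapi.get_sources()
print(sources["status"], len(sources["sources"]))
```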
Update the remaining configuration values in `config.json` (Kafka and HDFS settings) to match your environment.
Create the Kafka topic (this requires a running broker; see the Docker Compose step below):

```bash
kafka-topics --create --topic news-topic --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
```
Run Docker Compose to start Kafka and ZooKeeper:
Please ensure Docker and Docker Compose are installed before running the command.

```bash
docker-compose up -d
```
| Service | URL |
| --- | --- |
| Hadoop | Hadoop UI |
| Spark | Spark Master UI |
| Jupyter | Jupyter UI |
Run the NewsAPI Kafka producer:
```bash
python news_producer.py
```
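For reference, the core of a NewsAPI-to-Kafka producer can look like the sketch below. This is an illustration of the pattern, not the contents of `news_producer.py`; the config keys match `config.json`, while the variable names are illustrative:

```python
import json
from kafka import KafkaProducer
from newsapi import NewsApiClient

with open("config.json") as f:
    config = json.load(f)

newsapi = NewsApiClient(api_key=config["newsapi"]["key"])
producer = KafkaProducer(
    bootstrap_servers=config["kafka"]["bootstrap_servers"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Fetch top headlines from the configured sources and publish each article.
response = newsapi.get_top_headlines(
    sources=config["newsapi"]["source"],
    page_size=config["newsapi"]["api_page_size"],
)
for article in response["articles"]:
    producer.send(config["kafka"]["topic"], article)

producer.flush()
```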
Run the Kafka consumer, which writes the messages to HDFS:

```bash
python kafka_consumer_to_hdfs.py
```
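The consumer side can follow the pattern sketched below, using `kafka-python` and the `hdfs` WebHDFS client; again, this illustrates the approach rather than reproducing `kafka_consumer_to_hdfs.py`:

```python
import json
from kafka import KafkaConsumer
from hdfs import InsecureClient

with open("config.json") as f:
    config = json.load(f)

consumer = KafkaConsumer(
    config["kafka"]["topic"],
    bootstrap_servers=config["kafka"]["bootstrap_servers"],
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
client = InsecureClient(config["hdfs"]["url"], user="spark")
hdfs_path = f'{config["hdfs"]["path"]}/{config["hdfs"]["file_name"]}'

for message in consumer:
    # Append each article as one JSON line. Note: append=True assumes the
    # file already exists; create it first or handle the initial write.
    line = json.dumps(message.value) + "\n"
    client.write(hdfs_path, data=line, encoding="utf-8", append=True)
```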
Verify the data in HDFS:

```bash
hdfs dfs -ls /user/spark/
hdfs dfs -head /user/spark/news_data_articles.txt
```
Finally, load the HDFS data into Spark for machine learning analysis, as sketched below.
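A minimal PySpark sketch of this step, assuming the consumer wrote one JSON article per line and that the NameNode's RPC endpoint is reachable at `hdfs://namenode:9000` (both assumptions; adjust to your cluster). The stages shown (tokenizer, TF features, logistic regression) are a common baseline for sentiment classification, not necessarily the exact model used in this project:

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("news-sentiment").getOrCreate()

# JSON-lines input lets spark.read.json parse the file directly.
articles = spark.read.json("hdfs://namenode:9000/user/spark/news_data_articles.txt")

# Supervised training needs a labeled column; the `label` column here
# (0.0 = negative, 1.0 = positive) is a hypothetical addition.
pipeline = Pipeline(stages=[
    Tokenizer(inputCol="description", outputCol="words"),
    HashingTF(inputCol="words", outputCol="features"),
    LogisticRegression(featuresCol="features", labelCol="label"),
])
model = pipeline.fit(articles)
predictions = model.transform(articles)
predictions.select("description", "prediction").show(5, truncate=False)
```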