Blog title: researching the XYZ

Short description of post.

Published: December 6, 2021

Reading Time: 5 minutes

Title

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla eu sem viverra, blandit justo in, ullamcorper nisl. Aliquam maximus lorem et molestie pretium. In luctus facilisis eros id finibus. Nam ac lectus lacus. Nullam quis egestas risus. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Fusce id elit gravida sem gravida sollicitudin. Sed congue nulla elit, id cursus nibh viverra vitae.

Title

In vel sapien a justo condimentum finibus. Fusce non magna a leo suscipit fringilla nec in magna. Vestibulum viverra dapibus magna in placerat. Pellentesque sed sodales ex. Vivamus blandit eu justo ac luctus. Etiam nec tellus molestie, suscipit velit ut, lacinia velit. Quisque libero tellus, hendrerit ac dignissim et, faucibus ac nulla. Vestibulum finibus posuere varius. Maecenas finibus est non egestas aliquet. Cras venenatis metus sed iaculis fermentum. In ipsum dui, sagittis ac augue id, feugiat bibendum dui. Sed pellentesque orci orci, quis mollis massa tempor mattis.

#!/usr/bin/env python3.10
import os
import re
from os.path import exists

import requests
from bs4 import BeautifulSoup
from colorama import Fore, Style

company_list = ['scythe', 'dragos']
company_url = {'scythe': 'https://www.scythe.io/about/careers', 'dragos': 'https://jobs.lever.co/dragos'}

# Define formatting
reset = Style.RESET_ALL
green = Fore.GREEN
purple = Fore.MAGENTA
sep = Fore.BLUE + "---------------------------" + reset

def get_format(response, company_name):
    '''Return the HTML elements that hold the job titles for each site'''
    match company_name:
        case "scythe":
            return BeautifulSoup(response.text, 'html.parser').find_all(
                "h3", attrs={"id": "w-node-_6a3848d7-bd9c-4061-be22-05d0c32b7a82-c32b7a81"})
        case "dragos":
            return BeautifulSoup(response.text, 'html.parser').find_all(
                "h5", attrs={"data-qa": "posting-name"})

def parse(posting_location, company_name):
    '''Issue the HTTP request for the job posting page'''
    # Request the company's careers page
    response = requests.get(posting_location)

    # Pull out the elements specific to this site's layout
    parsed_response = get_format(response, company_name)

    return parsed_response

def parse_html(html_response):
    '''Take a list of HTML elements and reduce each one to just the job posting text'''
    postings_list = []
    for i in html_response:
        # Strip the HTML tags from the job posting
        postings_list.append(re.sub('<[^<]+?>', '', str(i)))
    return postings_list

def get_new(postings_list, old_list, company_name):
    '''Get the difference between the new job postings and the old ones'''
    # Symmetric difference: anything added or removed since the last run counts as a change
    new_post = list(set(postings_list) - set(old_list)) + list(set(old_list) - set(postings_list))
    path = './data/' + company_name

    # Save the current postings to file for the next run
    with open(path, 'w') as f:
        for listing in postings_list:
            f.write("%s \n" % listing)
    if new_post:
        return company_name, new_post

def get_old(company_name):
    '''Read the previous job postings into a list'''
    # If the data file does not exist, create it
    path = './data/' + company_name
    if not exists(path):
        open(path, 'a').close()
    with open(path) as file:
        lines = file.readlines()
        old_list = []
        for i in lines:
            # Strip the trailing whitespace from each saved posting
            old_list.append(i.rstrip())
    return old_list

def print_results(company_name, new_post):
    '''Print the new job postings'''
    print(f"{green}{company_name.upper()} has new job postings!")
    print(sep)
    for posting in new_post[1]:
        print(f"{purple}{posting}")
    print(sep)

def select_company():
    '''Check each company in the list and report any changed postings'''
    for company_name in company_list:
        posting_location = company_url[company_name]
        html_response = parse(posting_location, company_name)
        old_list = get_old(company_name)
        postings_list = parse_html(html_response)
        new_post = get_new(postings_list, old_list, company_name)
        if new_post:
            print_results(company_name, new_post)
        else:
            print(f"{green}{company_name}{purple} has no new job postings")

def main():
    print(f"{green}Ear2Ground:{purple} A Program to help you keep tabs on the job postings of infosec companies")
    print(sep)
    path = './data/'
    if not os.path.exists(path):
        os.makedirs(path)
    select_company()

if __name__ == "__main__":
    main()
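
The script needs Python 3.10 or later (the match statement requires it) and the requests, beautifulsoup4, and colorama packages. Tracking another company only takes a new entry in company_list and company_url plus a matching branch in get_format that knows which HTML element wraps each job title on that site. The snippet below is a rough sketch of that pattern added to the script above; 'example_co', its URL, and the "h4"/"job-title" selector are made-up placeholders standing in for whatever the real careers page uses.

# Sketch only: 'example_co', its URL, and the selector below are hypothetical
# placeholders, not a real careers page.
company_list.append('example_co')
company_url['example_co'] = 'https://example.com/careers'

# ...and inside get_format(), one more branch in the match statement:
#     case "example_co":
#         return BeautifulSoup(response.text, 'html.parser').find_all(
#             "h4", attrs={"class": "job-title"})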

Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Nunc aliquam, urna a ultricies dictum, mi arcu commodo eros, dictum vulputate leo magna ac orci. Quisque ultricies molestie nibh, eget sagittis est commodo ut. Cras fermentum, lectus eu interdum rhoncus, erat tortor aliquam velit, eu iaculis purus ex sed lorem. Nunc maximus nisi eu mauris ultricies, non placerat nisl fringilla. Quisque dignissim tellus enim, ut faucibus diam iaculis sit amet. Aenean vestibulum et nunc tempor elementum. Mauris vel augue id justo tempor euismod. Nunc elementum vulputate ante sit amet pulvinar. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Fusce vitae faucibus lectus.

  1. This
  2. Is
  3. A
  4. List

Smaller Title 2

  • This
  • Is
  • Also
  • A
  • List

Sed imperdiet metus at porta blandit. Proin aliquet fringilla fringilla. In eu mi tempus, condimentum leo in, fermentum purus. Fusce eget dignissim quam. Nulla faucibus elit vel ligula laoreet tempor. Phasellus sed magna velit. Donec at euismod mi. Cras suscipit interdum ligula.

This is a block quote. Groovy.

In ac euismod diam, quis vehicula tellus. Duis mollis, nulla quis egestas congue, eros enim tempor urna, ac pretium elit mi quis nulla. Nunc id vestibulum felis. Aliquam quis massa at dui posuere mattis. Curabitur fermentum rutrum nisl, nec hendrerit velit vestibulum ut. Donec varius euismod ex, eget lacinia odio scelerisque eget. Cras posuere, massa tincidunt tristique semper, tellus felis porta lorem, eu pulvinar velit lacus sit amet orci. Ut finibus dolor ac lectus tristique, non condimentum justo tristique. Pellentesque consectetur mollis tincidunt. Praesent dapibus, dui sed rhoncus luctus, erat ligula posuere eros, quis ullamcorper justo leo id tellus.