Building a LinkedIn Scheduling Service

Demo on YouTube: https://www.youtube.com/watch?v=5tHELYBTQ7I


Motivation


LinkedIn offers no way to publish posts at a set date and time. This really sucks, because most of my articles and personal project code get written during off-peak LinkedIn hours. I searched for solutions, but most of them were paid, so I made my own service that integrates with LinkedIn's APIs and is managed via my internal dashboard. In this article I will take you through exactly how I built it.


Background Knowledge


The service is written in Ruby on the Rails framework. The code is hosted on GitHub here -> https://github.com/hamdaankhalid/UltimatePersonalWebsite. I chose to namespace the models, views, controllers, and jobs under "internal" (a routing sketch follows below). I also chose not to over-separate business logic into its own granular classes and instead kept it in private methods. The reasoning was that placing concerns in private methods gives me a clean seam for a future refactor of the codebase. As long as you have background knowledge of the MVC architecture and understand task queues at a high level, you will be able to follow this article.
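For context, namespacing everything under "internal" means the routes, controllers, and models all live under an Internal module. A rough sketch of what that routing could look like (the exact set of resources in the real repo may differ):

# config/routes.rb -- illustrative sketch of the "internal" namespacing;
# the actual repo may declare more or different resources.
Rails.application.routes.draw do
  namespace :internal do
    resources :linkedin_schedulers
  end
end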


Requirements


I start off every feature with a core set of acceptance requirements and user stories. This one included the following:

- An admin should be able to reference an article and queue up a LinkedIn post to be shared at a specified date and time.

- An admin can edit any scheduled post if not already sent out.

- An admin can destroy any scheduled post that has not already been sent out.

- An admin can view the status of all schedules on a dashboard.


Unknowns


I further narrowed down the items that I knew were challenges and would require further thought.

- An Active Job, once enqueued via the Redis adapter, provides no interface to edit or remove it.

- I had never used the LinkedIn API. I allocated some spike time to read through the docs and put together Postman requests to make sure I could make successful requests before trying to automate anything.


Decisions


- Active Job and Redis for the task queue and asynchronous execution (a configuration sketch follows this list).

- Postgres for the DB, since I already have this set up for the core application.
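Wiring Active Job to a Redis-backed queue is a one-line configuration change. The sketch below assumes Sidekiq as the adapter, since the post does not name the exact one; any Redis-backed Active Job adapter works the same way.

# config/application.rb (sketch only)
module MyApp # placeholder for the real application module name
  class Application < Rails::Application
    # Sidekiq is an assumption here, not confirmed by the repo.
    config.active_job.queue_adapter = :sidekiq
  end
end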


Implementation


LinkedinScheduler Model:

Schedules had to be persisted, so I started off with a linkedin_scheduler model, name-spaced under internal. You can find the code for the model here: https://github.com/hamdaankhalid/UltimatePersonalWebsite/blob/main/app/models/internal/linkedin_scheduler.rb

The model has a schedule_for datetime field, a title string field, a reference to the article it belongs to (one article can have many schedules), a post_body string field, and a sent flag.

This model stores the status and the values that will be used for the post that needs to be made to LinkedIn. A rough reconstruction is sketched below.
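Based on the fields described above, the model and its migration could look roughly like this; column types, validations, the table name, and the migration version are my assumptions, not copied from the repo.

# app/models/internal/linkedin_scheduler.rb (sketch)
module Internal
  class LinkedinScheduler < ApplicationRecord
    belongs_to :article # one article can have many scheduled posts

    validates :title, :post_body, :schedule_for, presence: true
  end
end

# Hypothetical migration matching those fields.
class CreateLinkedinSchedulers < ActiveRecord::Migration[6.1]
  def change
    create_table :linkedin_schedulers do |t|
      t.datetime :schedule_for, null: false
      t.string :title, null: false
      t.string :post_body, null: false
      t.boolean :sent, null: false, default: false
      t.references :article, foreign_key: true
      t.timestamps
    end
  end
end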


LinkedinSchedulersController:

The controller is responsible for serving the views, enqueuing jobs, and interacting with our model (yes, not a super clean split of responsibilities, but it's part of the eventual refactor).

https://github.com/hamdaankhalid/UltimatePersonalWebsite/blob/main/app/controllers/internal/linkedin_schedulers_controller.rb


The controller has routes for the following views: a dashboard to view the status of all scheduled posts, a link to delete these schedules, a link to edit them, and a link to view each scheduled post's individual details.


The core logic of this controller lives in the create, update, and destroy methods. In the spirit of DRY, you will see that responsibilities such as enqueuing and redirecting live in private methods in this class; this will also make it easy to abstract them away into their own service next.

Let's start with the ability to create a LinkedinScheduler. The create method takes the parameters needed to construct a LinkedinScheduler object, saves it to the database, and right after that enqueues a job via Active Job, set to run at the schedule_for timestamp. A sketch of what this could look like follows.
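This is a sketch of the create action and the enqueuing helper; parameter names, redirect targets, and messages are assumptions, not copied from the repo.

# app/controllers/internal/linkedin_schedulers_controller.rb (sketch)
module Internal
  class LinkedinSchedulersController < ApplicationController
    def create
      scheduler = Internal::LinkedinScheduler.new(scheduler_params)

      if scheduler.save
        enqueue_post(scheduler)
        redirect_to internal_linkedin_schedulers_path, notice: "Post scheduled."
      else
        render :new
      end
    end

    private

    # Active Job lets us delay execution until the chosen timestamp.
    def enqueue_post(scheduler)
      SchedulePostJob.set(wait_until: scheduler.schedule_for).perform_later(scheduler.id)
    end

    def scheduler_params
      params.require(:internal_linkedin_scheduler)
            .permit(:title, :post_body, :schedule_for, :article_id)
    end
  end
end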


The Active Job code can be found here: https://github.com/hamdaankhalid/UltimatePersonalWebsite/blob/main/app/jobs/schedule_post_job.rb


This job expects the ID of a LinkedinScheduler model object. It fetches the object with that ID from the database, builds the URL that will be attached to the LinkedIn post, instantiates a LinkedinClientService object with the API token pulled from the environment, and invokes the share_post method on it with the post body, the URL for the article, and the title of the article. After a successful response from the LinkedinClientService, the LinkedinScheduler object's sent attribute is updated to true.
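Condensed into code, the job described above could look roughly like this; the URL construction and the environment variable name are illustrative assumptions.

# app/jobs/schedule_post_job.rb (sketch)
class SchedulePostJob < ApplicationJob
  queue_as :default

  def perform(linkedin_scheduler_id)
    scheduler = Internal::LinkedinScheduler.find(linkedin_scheduler_id)

    # Build the article URL that gets attached to the LinkedIn post
    # (hypothetical URL scheme -- the real one comes from the app's routes).
    article_url = "https://example.com/articles/#{scheduler.article_id}"

    client = Internal::LinkedinClientService.new(ENV["LINKEDIN_API_TOKEN"])
    client.share_post(scheduler.post_body, article_url, scheduler.title)

    # Only flag the schedule as sent once LinkedIn accepts the share.
    scheduler.update!(sent: true)
  end
end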


The LinkedinClientService code can be found here: https://github.com/hamdaankhalid/UltimatePersonalWebsite/blob/main/app/services/internal/linkedin_client_service.rb

This service is a wrapper around the LinkedIn API for sharing posts. It uses the net/http Ruby library to make HTTP requests to LinkedIn's API with the parameters included in the POST request body.
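An approximation of such a wrapper is below. The payload follows LinkedIn's v2 ugcPosts format as I understand it; the real service in the repo may differ, and the person URN and env var names are assumptions.

# app/services/internal/linkedin_client_service.rb (sketch)
require "net/http"
require "json"
require "uri"

module Internal
  class LinkedinClientService
    SHARE_URI = URI("https://api.linkedin.com/v2/ugcPosts")

    def initialize(api_token)
      @api_token = api_token
    end

    def share_post(post_body, article_url, title)
      request = Net::HTTP::Post.new(SHARE_URI)
      request["Authorization"] = "Bearer #{@api_token}"
      request["Content-Type"] = "application/json"
      request["X-Restli-Protocol-Version"] = "2.0.0"
      request.body = payload(post_body, article_url, title).to_json

      response = Net::HTTP.start(SHARE_URI.host, SHARE_URI.port, use_ssl: true) do |http|
        http.request(request)
      end
      raise "LinkedIn share failed: #{response.code}" unless response.is_a?(Net::HTTPSuccess)

      response
    end

    private

    def payload(post_body, article_url, title)
      {
        author: "urn:li:person:#{ENV['LINKEDIN_PERSON_ID']}", # assumed env var
        lifecycleState: "PUBLISHED",
        specificContent: {
          "com.linkedin.ugc.ShareContent" => {
            shareCommentary: { text: post_body },
            shareMediaCategory: "ARTICLE",
            media: [{ status: "READY", originalUrl: article_url, title: { text: title } }]
          }
        },
        visibility: { "com.linkedin.ugc.MemberNetworkVisibility" => "PUBLIC" }
      }
    end
  end
end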


The dashboard just fetches all LinkedinScheduler objects from the database and renders them along with their details. After a successful share, the sent attribute is marked true and displayed as sent. If there was a failure, we will see that the schedule object has passed its time without being sent.
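That status can be derived entirely from the two fields already on the model. A small sketch (the helper name is hypothetical, and in the real app it might live in a view helper rather than the controller):

def index
  @schedulers = Internal::LinkedinScheduler.order(schedule_for: :desc)
end

# "sent" once the job succeeded, "missed" if the time passed without a send,
# otherwise still "pending".
def schedule_status(scheduler)
  return "sent"   if scheduler.sent?
  return "missed" if scheduler.schedule_for < Time.current

  "pending"
end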


Destroy Flow:

To destroy a schedule, we quite simply delete the LinkedinScheduler object selected by ID via the dashboard from the database. The pending job will then hit a deserialization error and fail, but since the record has been deleted it is gone from the user's dashboard as well. We also have an early return that redirects the user if the schedule_for timestamp is past the current time.
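In code, that flow might look something like this sketch (redirect targets and messages are assumptions):

def destroy
  scheduler = Internal::LinkedinScheduler.find(params[:id])

  # Early return: schedules whose time has already passed cannot be cancelled.
  if scheduler.schedule_for < Time.current
    return redirect_to internal_linkedin_schedulers_path, alert: "This post has already gone out."
  end

  # Deleting the record makes the pending job fail on deserialization
  # and removes the schedule from the dashboard.
  scheduler.destroy
  redirect_to internal_linkedin_schedulers_path, notice: "Schedule removed."
end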


Edit Flow:

If the LinkedinScheduler object found by the ID parameter via the dashboard does not have a schedule_for timestamp in the past, it can be edited.

The edit flow deletes the old record and creates a new one, but this is abstracted away and the user only sees an "edited" schedule on their dashboard. A sketch follows.
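Here is a sketch of that update flow, assuming the param handling mirrors create:

def update
  scheduler = Internal::LinkedinScheduler.find(params[:id])

  if scheduler.schedule_for < Time.current
    return redirect_to internal_linkedin_schedulers_path, alert: "This post has already been sent."
  end

  # The old record (and the job pointing at it) is discarded...
  scheduler.destroy

  # ...and a replacement is created with a fresh delayed job.
  replacement = Internal::LinkedinScheduler.create!(scheduler_params)
  SchedulePostJob.set(wait_until: replacement.schedule_for).perform_later(replacement.id)

  redirect_to internal_linkedin_schedulers_path, notice: "Schedule updated."
end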


Further Improvements


- Moving the logic currently in private methods out into its own service.

- Enforce no further retries on jobs failing with deserialization errors. A begin/rescue block (Ruby's try/catch) can tackle this! Update (Done! See the sketch below.)
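For what it's worth, Active Job also has a built-in way to express this. A sketch of the idea, not necessarily the exact fix that landed in the repo:

class SchedulePostJob < ApplicationJob
  # Drop the job instead of retrying when its record has been deleted
  # and can no longer be deserialized.
  discard_on ActiveJob::DeserializationError

  def perform(linkedin_scheduler_id)
    # ... share the post as before ...
  end
end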


Summary


I built this in one weekend, so it isn't the cleanest code, but I really enjoyed it; it's a service I saw a need for, and I built it. Nothing beats the satisfaction of building something that you actively need!

