|IndyWatch Education Feed Archiver|
IndyWatch Education Feed was generated at Community Resources IndyWatch.
Today, I am thrilled to share with you a new, flexible grid framework generator: Grid Wiz!
Grid frameworks are essential for experiences that span many codebases to keep the layouts aligned. The grid keeps columns at specific dimensions as the user goes to different pages within the experience.
While at IBM, I introduced CSS Gridish. It had a simple premise: give it a config file describing your grid design and get both CSS Flexbox and CSS Grid back. This helps teams transition to CSS Grid once their users' browsers support the final spec.
There were two fundamental decisions in CSS Gridish that troubled me: the use of vw units and Node Sass. vw units created more bugs and poor development experiences than they solved. Node Sass is powerful and used widely across IBM, but it restricted the environment flexibility of the package.
So for my next personal project, I set off to work on Grid Wiz. Let's check out why I am much more excited about this project.
Different experiences have different browser requirements based on the users who visit. Your grid framework should also be performant, with the smallest amount of code needed. With specific browser compatibility, you can support the right browsers with minimal code. Here is a demonstration of flipping between `displayFlex` and `displayGrid` mode with no visual changes.
Need to support browsers all the way back to Internet Explorer? Use the Flexbox mode with the most code. No need for Internet Explorer, but do need to cover some slightly older versions of today's browsers? The CSS Variables mode will save you a lot of code with the exact same visual output. When a user base is finally ready for CSS Grid, you will get the extra functionality with the least amount of code.
Here is a breakdown of the support modes you can toggle between. Switch between these support modes on the live demo to watch the size of the CSS change.
Okay, let's get our hands dirty!
I am a big fan of dividing the method into chunks, so we will follow the same approach here.
First, create a file called index.html. It is our core file: it contains both the HTML template and the Vue object.
I am using Visual Studio Code here.
Now we need to create the Vue object inside the index.html file. It is created under the script tag.
It can be created by:
The whole syntax is below:
new Vue creates an instance of Vue. Through it we can access properties like el, data, and methods. These properties are explained below.
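As a rough sketch (the property values here are placeholders, and Vue itself would be loaded separately via a script tag), the options object handed to new Vue looks like this:

```javascript
// The options object passed to `new Vue(...)`, shown as a plain object
// so its shape is clear. Values are illustrative placeholders.
const options = {
  el: '#app',             // CSS selector of the element Vue mounts onto
  data: {
    message: 'Hello Vue'  // reactive state, available in the template
  },
  methods: {
    greet() {
      return this.message; // inside a method, `this` is the Vue instance
    }
  }
};

// In the real page you would then call: new Vue(options);
```

The el, data, and methods keys are exactly the properties discussed in the following sections.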
As we know Vue has a property called the el. This prope...
Global Scope acts as the first automatically created Execution Context, roughly at the time when the browser opens a URL. (Global Scope is the first Execution Context on The Call Stack.)
A new Execution Context (EC) is spawned from a scope containing variable definitions and the binding of the this object. Essentially, the this object in a given EC is the link to the object context under which it operates. (The this object is bound across Execution Contexts on The Call Stack.)
A new Execution Context is caused by a function call. This new Execution Context is then placed on top of the Execution Stack (or The Call Stack).
A new Execution Context can also be created when an object is instantiated, but that's only because object instantiation is a constructor function call.
From then on, whatever functions you call or whatever objects you instantiate will cause a new execution context to be created and pushed onto the stack.
This process repeats while maintaining a this object chain all the way up to the currently executing context (the topmost one):
There is always one currently executing context. The rest are stacked below.
After a function finishes executing, its context is popped off the top of the stack, and control flow returns to the previous (now topmost) execution context.
In other words its a LIFO (Last In First Out) order.
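A minimal sketch of this LIFO behavior (the function names are made up for illustration):

```javascript
const calls = [];

function inner() {
  calls.push('inner start'); // inner's context is now the topmost one
  calls.push('inner end');   // last context pushed is the first popped
}

function outer() {
  calls.push('outer start'); // outer's context is pushed first
  inner();                   // pushes inner's context on top of outer's
  calls.push('outer end');   // runs only after inner's context is popped
}

outer();
console.log(calls);
// → ['outer start', 'inner start', 'inner end', 'outer end']
```

The log shows outer's context suspended while inner's sits on top of it, then resumed once inner's is popped — Last In, First Out.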
Noteworthy: This stacking occurs only when you call a function from the scope of another function, and that function calls yet another one from its own scope. Thus it creates not only a stack of contexts, but also a chain of contexts tied together by the this object, each related to the scope in which its function was executed.
Stacking will not happen no matter how many function calls are made in the same Lexical Environment and scope. They all just push and pop within the...
Alright, alright, I'm not being too helpful yet. Here's the basic idea: as projects get too large, with too many contributors, it gets impossible to track who did what and when. Did someone introduce a change that broke the entire system? How do you figure out what that change was? How do you go back to how things were before? Back to the unbroken wonderland?
I'll take it a step further: say it's not a project with many contributors, just a small project with you as the creator, maintainer, and distributor. You create a new feature for this project, which introduces subtle bugs you find out about later. You don't remember what changes you made to the existing code base to create this new feature. Problem?
The answer to all these problems is versioning! Having versions for everything you've coded ensures you know who made the changes, what the changes were, and exactly where, since the beginning of the project!
And now, I invite you to stop thinking of (g)it as a black box, open it up, and find out what treasures await. Figure out how Git works, and you'll never again have a problem making things work. Once you're through this, I promise you'll realise the folly of doing what the XKCD comic above says. That is exactly what versioning is trying to prevent.
I am assuming you know the basic commands in Git, or have heard about them and used them at least once. If not, here's a basic vocabulary to help you get started.
Repository: a place for storing things. With Git, this means your code folder
head: A pointer to the latest code you were working on
add: An action to ask Git to track a file
commit: An action to save the current state, such that one can revisit this state if needed
remote: A repository that isn't local. It can be in another folder or in the cloud (for example, GitHub). This helps other people collaborate easily, as they don't have to get a copy from your system; they can just get it from the cloud. It also ensures you have a backup in case you break your laptop
pull: An action to get updated code from the remote
push: An action to send updated code to the remote
Check out my new printables for playing math with your kids:
The free 50-page PDF Hundred Charts Galore! file features 1–100 charts, 0–99 charts, bottoms-up versions, multiple-chart pages, blank charts, game boards, and more. Everything you need to play the activities in my 70+ Things to Do with a Hundred Chart book (coming soon from Tabletop Academy Press).
If all goes well, the hundred chart book should be out (at least in ebook format) by the end of this month. While you're waiting, you can try some of the activities in my all-time most popular blog post:
Want to help your kids learn math? Claim your free 24-page problem-solving booklet, and sign up to hear about new books, revisions, and sales or other promotions.
The Web is growing at a massive rate. More and more web apps are dynamic, immersive and do not require the end user to refresh. There is emerging support for low latency communication technologies like websockets. Websockets allow us to achieve real-time communication among different clients connected to a server.
A lot of people are unaware of how to secure their websockets against some very common attacks. Let us see what they are and what you should do to protect your websockets.
WebSocket doesn't come with CORS built in. That means any website can connect to any other website's WebSocket endpoint and communicate without any restriction! I'm not going into the reasons why this is the way it is, but a quick fix is to verify the Origin header during the WebSocket handshake.
Moreover, if you're actually authenticating users using, preferably, cookies, then this is not really a problem for you (more on this in point #4).
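A sketch of such an Origin check, written as a plain function (the whitelist contents are invented for illustration; with the popular ws package you would typically run a check like this in the server's verifyClient callback):

```javascript
// Hypothetical whitelist — replace with the origins you actually serve.
const ALLOWED_ORIGINS = ['https://example.com', 'https://app.example.com'];

// Accept the handshake only when its Origin header is whitelisted.
function isAllowedOrigin(origin) {
  return ALLOWED_ORIGINS.includes(origin);
}

// Usage sketch with the `ws` package (not installed here):
// new WebSocket.Server({
//   port: 8080,
//   verifyClient: ({ origin }) => isAllowedOrigin(origin),
// });

console.log(isAllowedOrigin('https://example.com'));  // → true
console.log(isAllowedOrigin('https://evil.example')); // → false
```

Note that Origin is set by the browser, so this stops cross-site pages from connecting; it does not replace authenticating the user.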
Rate limiting is important. Without it, clients can knowingly or unknowingly perform a DoS (Denial of Service) attack on your server: a single client keeps the server so busy that it is unable to handle other clients.
In most cases it is a deliberate attempt by an attacker to bring down a server, but poor frontend implementations can also lead to DoS caused by normal clients.
We're going to make use of the leaky bucket algorithm (which apparently is a very common algorithm for networks to implement) for implementing rate limiting on our websockets.
The idea is that you have a bucket with a fixed-size hole in its floor. You start pouring water in, and the water flows out through the hole at the bottom. Now, if your rate of pouring water into the bucket exceeds the rate at which it flows out of the hole for long enough, at some point the bucket will become full and start overflowing. That's all.
Let's now understand how it relates to our websocket:
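The mapping could be sketched like this: incoming messages are the water, the server's sustained processing rate is the hole, and the bucket's capacity is the burst size you tolerate. The numbers below are made up for illustration:

```javascript
// Minimal leaky-bucket sketch: give each websocket connection a bucket.
// capacity = allowed burst size; leakPerSec = sustained messages/second.
class LeakyBucket {
  constructor(capacity, leakPerSec, now = Date.now()) {
    this.capacity = capacity;
    this.leakPerSec = leakPerSec;
    this.level = 0;       // current "water" in the bucket
    this.lastLeak = now;  // timestamp of the last drain
  }

  // Returns true if the message is accepted, false if the bucket overflows.
  tryAdd(now = Date.now()) {
    const elapsedSec = (now - this.lastLeak) / 1000;
    // Drain whatever leaked out since the last message.
    this.level = Math.max(0, this.level - elapsedSec * this.leakPerSec);
    this.lastLeak = now;
    if (this.level + 1 > this.capacity) return false; // drop or disconnect
    this.level += 1;
    return true;
  }
}

const bucket = new LeakyBucket(3, 1, 0); // burst of 3, leaks 1 msg/sec
console.log(bucket.tryAdd(0)); // → true
console.log(bucket.tryAdd(0)); // → true
console.log(bucket.tryAdd(0)); // → true
console.log(bucket.tryAdd(0)); // → false (bucket full, message rejected)
```

On a real server you would call tryAdd in the message handler and close or throttle the connection when it returns false.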
You've followed your first React.js tutorial and you're feeling great. Now what? In the following article, I'm going to discuss 5 concepts that will bring your React skills and knowledge to the next level.
If you're completely new to React, take some time to complete this tutorial and come back after!
By far the most important concept on this list is understanding the component lifecycle. The component lifecycle is exactly what it sounds like: it details the life of a component. Like us, components are born, do some things during their time here on earth, and then they die.
But unlike us, the life stages of a component are a little different. Here's what it looks like (image from here!):
Let's break this image down. Each colored horizontal rectangle represents a lifecycle method (except for "React updates DOM and refs"). The columns represent different stages in the component's life.
A component can only be in one stage at a time. It starts with mounting and moves on to updating. It stays in updating perpetually until it gets removed from the virtual DOM. Then it goes into the unmounting phase and gets removed from the DOM.
The lifecycle methods allow us to run code at specific points in the component's life, or in response to changes in the component's life.
Let's go through each stage of the component and the associated methods.
Since class-based components are classes (hence the name), the first method that runs is the constructor method. Typically, the constructor is where you would initialize component state.
Next, the component runs getDerivedStateFromProps. I'm going to skip this method since it has limited use.
Now we come to the render method, which returns your JSX. React then mounts it onto the DOM.
Lastly, the componentDidMount method runs. Here is where you would do any asynchronous calls to databases, or directly manipulate the DOM if you need to. Just like that, our component is born.
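To make the mounting order concrete, here is a toy sketch — not real React, just a plain class and a tiny "mount" harness that calls the methods in the order React would during mounting (the component name is invented):

```javascript
const order = [];

// A plain class standing in for a React class component.
class ProfileCard {
  constructor() {
    order.push('constructor');       // 1. initialize state here
    this.state = { user: null };
  }
  render() {
    order.push('render');            // 2. return the UI (JSX in real React)
    return '<div>profile</div>';
  }
  componentDidMount() {
    order.push('componentDidMount'); // 3. fetch data / touch the DOM
  }
}

// Toy harness imitating React's call order for the mounting phase only.
function mount(Component) {
  const instance = new Component();
  instance.render();
  instance.componentDidMount();
  return instance;
}

mount(ProfileCard);
console.log(order); // → ['constructor', 'render', 'componentDidMount']
```

Real React also runs getDerivedStateFromProps between the constructor and render, and handles the DOM commit itself; the harness only shows the sequence you rely on as an author.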
This phase is triggered every...
TL;DR: Hooks have learned from the trade-offs of mixins, higher order components, and render props to bring us new ways to create contained, composable behaviors that can be consumed in a flat and declarative manner.
However, hooks come with their own price. They are not a silver bullet solution. Sometimes you need hierarchy. So let's take a closer look.
React Hooks are here, and I immediately fell in love with them. To understand why Hooks are great, I think it helps to look at how we've been solving a common problem throughout React's history.
Here's the situation: you have to show the user's mouse position. Well, that was easy!
We're engineers though, and we have a ton of tools to help us break this pattern out. Let's review some of the ways we've historically done it and their trade-offs.
Mixins get a lot of flak. They set the stage for grouping together lifecycle hooks to describe one effect.
While the general idea of encapsulating logic is great, we ended up learning some serious lessons from mixins.
It's not obvious where this.state.x is coming from. With mixins, it's also possible for the mixin to rely blindly on a property existing in the component.
That becomes a huge problem as people start including and extending tons of mixins. You can't simply search in a single file and assume you haven't broken something somewhere else.
Refactoring needs to be easy. It needs to be more obvious that these mixed-in behaviors don't belong to the component. They shouldn't be using the internals of the...
Programming Interviews are hard!
Telephone screening interviews are a bit easier than the traditional onsite whiteboard interviews. The whiteboard interviews involve a whole lot of pressure and anxiety due to the lack of a code editor to code on. The thing that these interviews do have in common is the kind of skills they test.
Usually, a programming interview will involve one programming challenge. The candidate has to work on it for the duration of the interview. The time allotted is usually 30–35 minutes. The first 10 minutes are taken up by introductions and other things.
Given a programming problem, the interviewer usually wants the candidate to:
Throughout my career, I have discovered and rediscovered a simple truth. The ability to concentrate single-mindedly on your most important task, to do it well, and to finish it completely, is the key to great success, achievement, respect, status, and happiness in life.
Brian Tracy, Eat That Frog
The problem with programming, along with entrepreneurship and most jobs in tech, is that it requires a lot of mental effort. So no matter how pointless or trivial the task, we still feel productive.
While your brain may be sweating from the sheer challenge of it all, it doesnt mean that what youre doing is automatically the best use of your time.
Your best use of time is always going to be adding value. Sometimes thats code, sometimes not.
If you're an entrepreneur or single founder, your job is to add value to your customers' lives. If you're employed or want to be, your job is to add value to your company, and the company's customers.
Nothing in life will move you forward, faster, than consistent prioritization of the things that add the most value for others (and for yourself).
This is what separates the top performers from everyone else, the highest paid from the resentful, the productive and impactful from the overworked.
Don't be fooled by what you see on social media. The answer is not 18-hour days and 100-hour weeks. "The hustle never stops!"
"You'll sleep when you're dead," right?
Well, I like sleeping. And I like clocking into work, getting a lot done, getting paid well for it, and leaving work at work. I like having the free time to write and work on my own stuff.
I'm afforded these luxuries (you know, like a work-life balance) because my company trusts me. I'm far from a world-class developer. In fact, I'm a self-taught programmer, and I didn't start coding seriously until my late 20s.
But I've built up a track record of getting the stuff done that adds value fo...
This time I'm going to tell you about a very useful pattern in React called the container pattern, or container component pattern.
This is one of the first patterns I learned. It helped me a lot to separate problems in smaller ones and solve them one at a time.
Also, it definitely helped make my code much more reusable and self-contained at once.
It might seem a paradox! How can your code be reusable and self-contained at the same time?
Well, reusable because you learn to do small dummy (presentational) components that you can re-use a lot.
Self-contained because the container, view, or whatever you are using to keep all your logic can easily be detached from one place and attached to any other one without big changes/refactoring in your main app.
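A framework-agnostic sketch of that split (component names are invented for illustration; in React the presentational part would return JSX rather than an HTML string):

```javascript
// Presentational "component": dumb and reusable — knows only its props.
function UserList({ users }) {
  return users.map(u => `<li>${u.name}</li>`).join('');
}

// Container: owns the logic (data fetching, state) and renders the view.
// The data source is injected, so the container is easy to move and test.
function UserListContainer(fetchUsers) {
  const users = fetchUsers();               // logic lives here...
  return `<ul>${UserList({ users })}</ul>`; // ...presentation lives there
}

// Usage with a stubbed data source:
const html = UserListContainer(() => [{ name: 'Ada' }, { name: 'Grace' }]);
console.log(html); // → <ul><li>Ada</li><li>Grace</li></ul>
```

Because UserList never fetches anything, it can be reused anywhere; because UserListContainer bundles all the logic, it can be detached and reattached elsewhere without touching the view.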
The truth is, when you want to build a feature, you always start simple and clean.
Days pass by, and you get to add one more small feature here, one more feature there. You're making a patch here, a patch there, and your whole code becomes messy and unmanageable.
Trust me, I've been there. And I'm still there nowadays! We all are, at a certain point, because programming is a craft. But we can minimize that a lot with practice and with this amazing design pattern.
But, what is a design pattern?
A design pattern is nothing more than a general, reusable solution to a commonly occurring problem within a given context in software design. It's not a finished design that can be transformed directly into source or machine code. It's a description or template for how to solve a problem that can be used in many different situations.
Design patterns are formalized best practices that the programmer can use to solve common problems when designing an application or system.
You know the MVC software design pattern?
Well, MVC stands for...
I have spent much of my career as a graduate student researcher, and now as a Data Scientist in industry. One thing I have come to realize is that the vast majority of solutions proposed, both in academic research papers and in the workplace, are just not meant to ship: they just don't scale!
And when I say scale, I mean:
Some of these approaches either work on extremely narrow use cases, or have a tough time generating results in a timely manner.
More often than not, the problem lies in the approach that was used, although when things go wrong we tend to declare the problem unsolvable. Remember, there will almost always be more than one way to solve a Natural Language Processing (NLP) or Data Science problem. Optimizing your choices will increase your chance of success in deploying your models to production.
Over the past decade I have shipped solutions that serve real users. From this experience, I now follow a set of best practices that maximizes my chance of success every time I start a new NLP project.
In this article, I will share some of these with you. I swear by these principles and I hope these become handy to you as well.
KISS (Keep it simple, stupid). When solving NLP problems, this seems like common sense.
But I can't say this enough: choose techniques and pipelines that are easy to understand and maintain. Avoid complex ones that only you understand, sometimes only partially.
In a lot of NLP applications, you would typically notice one of two things:
The first question to ask yourself is whether you need all the layers of pre-processing.
Do you really need part-of-speech tagging, chunking, entity resolution, lemmatization, and so on? What if you strip out a few layers? How does that affect the performance of your models?
With access to massive amounts of data, you can often actually let the evidence in the data guide your model.
I'm currently a developer for Blueprint, an organization at UC Berkeley. We develop software pro bono for non-profits and advance technology for social good. This past year, my team worked on building a solution for The Dream Project. The goal is to provide a better education for children in the Dominican Republic.
I am writing this post in hopes of sharing my experience of doing Salesforce integration using Heroku Connect for a Ruby on Rails/React Native app.
Heroku Connect makes it easy for you to build Heroku apps that share data with your Salesforce deployment. Using bi-directional synchronization between Salesforce and Heroku Postgres, Heroku Connect unifies the data in your Postgres database with the contacts, accounts and other custom objects in the Salesforce database.
Another way to put it
Heroku Connect assists with replacing your app's Postgres database with a Salesforce-backed one. Of course, since Rails apps connect natively to Postgres, you cannot make immediate calls and pushes to Salesforce without some API like the Restforce gem.
In reality, the Postgres database that our Rails app will interact with serves as a disguise for Salesforce. It is a working middleman. All data must go through it before reaching Salesforce, and vice versa. Heroku Connect is the bridge that combines the capabilities of Force.com and Heroku without the need for a gem.
You might ask why even bother learning Salesforce integration?
Well, Salesforce integration streamlines the process of storing and retrieving data, especially customer data for businesses.
You can provide your customers with modernized computer systems for an improved user experience and workflow. You'll be accelerating app development. It also creates better tools for informing management and sales about business performance.
This helps businesses achieve efficient levels of operation for business-to-consumer apps. It does this through instantaneous and accurate updates.
To give some background for the code snippets below in the tutorial, I will explain beforehand the project I was working on. This project got me introduced to Heroku Connect.
Previously, Dream recorded student information in a Salesforce database. This was not ideal for teachers to use. To make their lives easier, we created a user-friendly app. The app handled course creation, s...
When I started out writing tests for my React application, it took me a few tries before I figured out how to set up my testing environment using Jest & Enzyme. This tutorial assumes that you already have a React application set up with webpack & babel. We'll continue from there.
This is part of a series of articles I have written. I talk about how to set up a React application for production the right and easy way.
Before we begin, if at any time you feel stuck please feel free to check the code repository. PRs are most welcome if you feel things can be improved.
You need to have Node installed in order to use npm (node package manager).
First things first, create a folder called app then open up your terminal and go into that app folder and type:
npm init -y
This will create a package.json file for you. In your package.json file, add the following:
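The original embedded snippet is not reproduced in this archive. As a sketch of the shape it describes — scripts plus Jest/Enzyme/Babel dev dependencies — a package.json along these lines would work (package names match the dependencies discussed below; the version ranges are illustrative, not prescribed by the article):

```json
{
  "name": "app",
  "version": "1.0.0",
  "scripts": {
    "test": "jest",
    "test:watch": "jest --watch"
  },
  "devDependencies": {
    "@babel/core": "^7.0.0",
    "babel-jest": "^24.0.0",
    "enzyme": "^3.0.0",
    "jest": "^24.0.0"
  }
}
```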
Second, create a folder called src in your app folder. The src/ folder is where all of your React code, along with its tests, will reside. But before that, let's understand why we did what we did in our package.json file.
I'll talk about the scripts in a bit (promise). But before that, let's learn why we need the following dependencies. I want you to know what goes inside your package.json. So let's start.
@babel/core: Generally we use webpack to compile our React code, and Babel is a major dependency that helps tell webpack how to compile it. This is a peer dependency for u...
It's 2018 and I just wrote a title that contains the words "Serverless server." Life has no meaning.
Despite that utterly contradictory headline, in this article we're going to explore a pretty nifty way to exploit SendGrid's template functionality using Timer Triggers in Azure Functions to send out scheduled tabular reports. We are doing this because that's what everyone wants in their inbox. A report. With numbers in it. And preferably some acronyms.
First, let's straw-man this project with a contrived application that looks sufficiently boring to warrant a report. I have just the thing: a site where we can adjust inventory levels. The word "inventory" is just begging for a report.
This application allows you to adjust the inventory quantity (last column). Let's say that an executive somewhere has requested that we email them a report every night that contains a list of every SKU altered in the last 24 hours. Because of course they would ask for that. In fact, I could swear I've built this report in real life in a past job. Or there's a glitch in the matrix. Either way, we're doing this.
Here is what we're going to be building.
Normally the way you would build this is with some sort of report server. Something like SQL Server Reporting Services or Business Objects or whatever other report servers are out there. Honestly, I don't want to know. But if you don't have a report server, this gets kind of tedious.
Let's go over what you have to do to make this happen.
This is the kind of thing that nobody wants to do. But I think this project can be a lot of fun, and we can use some interesting technology to pull it off. Starting with Serverless.
Serverless is a really good use case for one-off requests like this. In this case, we can use Azure Functions to create a Timer Trigger function.
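As a sketch, the function.json binding for such a Timer Trigger might look like this (the binding name is invented; Azure Functions timer schedules use a six-field NCRONTAB expression — this one fires nightly at 2:00 AM):

```json
{
  "bindings": [
    {
      "name": "reportTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 2 * * *"
    }
  ]
}
```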
To do that, I'm going to use the Azure Funct...
A good logging mechanism helps us in our time of need.
When we're handling a production failure or trying to understand an unexpected response, logs can be our best friend or our worst enemy.
Their importance for our ability to handle failures is enormous. When it comes to our day to day work, when we design our new production service/feature, we sometimes overlook their importance. We neglect to give them proper attention.
When I started developing, I made a few logging mistakes that cost me many sleepless nights. Now I know better, and I can share with you a few practices I've learned over the years.
When developing on our local machine, we usually don't mind using a file handler for logging. Our local disk is quite large, and the amount of log entries being written is very small.
That is not the case on our production machines. Their local disk usually has limited free disk space. In time, the disk won't be able to store all the log entries of a production service. Therefore, using a plain file handler will eventually result in losing all new log entries.
If you want your logs to be available on the service's local disk, don't forget to use a rotating file handler. This can limit the maximum space that your logs will consume. The rotating file handler will handle overriding old log entries to make space for new ones.
Our production service is usually spread across multiple machines. Searching for a specific log entry will require investigating all of them. When we're in a hurry to fix our service, there's no time to waste trying to figure out where exactly the error occurred.
Instead of saving logs on the local disk, stream them into a centralized logging system. This allows you to search all of them at the same time.
If you're using AWS or GCP, you can use their logging agent. The agent will take care of streaming the logs into their logging search engine.
There is a thin line between too few and too many logs. In my opinion, log entries should be meaningful and only serve the purpose of investigating issues in our production environment. When you're about to add a new log entry, think about how you will use it in the future. Try to answer this question: What information does the log message provide the developer who wil...
I was a lone software developer. When I was in college, I attended the KDE conference. It was my first encounter with the open source world. At the conference, I thought the presenters and the people raising hands were very smart. I knew there was free software available, created by the community for the community. But the developers that build it were foreign to me.
I thought really cool, intelligent people developed this software. I thought you had to be really smart and privileged to join them.
I tried to participate in Google Summer of Code (GSoC) twice during college, but wasn't successful. Then after graduation, at my job, I used lots of open source projects. I even used them when freelancing. I heavily relied on community-developed tools and tech. I was really fascinated by people's stories of how they started contributing to open source, and how they got their amazing remote jobs!
Now after procrastinating for another two months and not being able to land a remote job, I decided to do it once and for all and contribute myself.
I started uploading my code to GitHub whenever I wrote any new code. I created an open source NPM module along with some other demo projects and uploaded them. But this wasn't the gist of open source. I wasn't actually contributing to other repos or working with other developers to create software. I was still working in isolation.
Then it came: I stumbled upon Hacktoberfest. They (DigitalOcean, GitHub, and Twilio) were giving away a free t-shirt if you submitted 5 Pull Requests to open source projects on GitHub in October. Even if your PR was not merged, it still counted towards your progress. And this time they had a ton of t-shirts, so it was easy to get one. This was the final push I needed; apparently, a free t-shirt gives you an amazing boost!
And thus I started my journey into the OPEN SOURCE WORLD.
I searched for open source projects to tackle on GitHub. I wanted some easy tasks to quickly get familiar with the PR process. So I looked for issues which did not require me to jump into the whole source code.
There were many developers who started projects for Hacktoberfest and newcomers. It was easy to submit PRs to these repos, so I submitted three. I submitted my other two PRs to other people's personal projects. There were many other repos where you just had to add your name to t...
Fatimat Gbajabiamila talks about challenging stereotypes, her love for pair programming, and why she's committed to giving back.
Rebecca: Fatimat, thank you so much for making time to chat with me. How was your first week on the job?
Fatimat: To be honest, it's hard to believe I'm getting paid to code. It's been quite the journey.
Rebecca: A journey you've undertaken without a university degree, as I understand.
Fatimat: Yeah, I left school after finishing my A levels, where I studied economics, business, and maths.
Rebecca: What were you doing before you heard about Founders and Coders?
Fatimat: I was working with a charity called Futureversity as a project coordinator, organizing summer programmes for young people and recruiting volunteers. I loved my colleagues, but I knew it wasn't something I wanted to do as a career.
Rebecca: How did you figure out you wanted to pursue a career as a software developer?
Fatimat: Well, when I was younger, I had my heart set on pursuing a career in business and finance. I remember visiting Bloomberg on a trip as part of the Brokerage Citylink programme during Year 10 and deciding right then and there that I wanted to work there one day.
Rebecca: Really? Right then and there, as a teenager? It's hard for me to imagine a fifteen-year-old falling in love with financial services.
Fatimat: Honestly, I think the host just did a fantastic job of selling the company to us and inspiring us to aim high. They took us into the newsroom, where the trading numbers lined the walls, and then to an office, which was full of gadgets. That visit probably influenced my decision to study maths and economics, as I wanted to learn accounting so I could work for them.
Anyway, you grow up, life happens, dreams change, and one day, like me, you start to ask yourself important questions like, "What am I going to do with my life?" I was 21 and had no idea what I wanted t...
Chrome + Box = Chrome Web Store
We all like surfing the web. And we all like things to be at the touch of our fingertips. Why not create something that caters to these two absolute truths?
In this article, Ill explain the building blocks of a Chrome extension. Afterwards, youll just have to think of a good idea as an excuse to make one.
Chrome is by far the most popular web browser. Some estimates put its market share as high as 59%. So, if you want to reach as many people as you can, developing a Chrome extension is the best way to do it.
To be able to publish a Chrome extension, you need to have a developer account which entails a $5 one-time signup fee.
Each Chrome extension should have these components: the manifest file, popup.html, popup.js, and a background script. Let's take a look at them.
Google uses this file to acquire details about your extension when you publish it. There are required, recommended, and optional fields.
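As a sketch, a minimal manifest.json covering the components above might look like this (Manifest V2, which was current for extensions of this era; the name, description, and permissions are placeholders):

```json
{
  "manifest_version": 2,
  "name": "My Extension",
  "version": "1.0",
  "description": "A short description shown in the Chrome Web Store.",
  "browser_action": {
    "default_popup": "popup.html"
  },
  "background": {
    "scripts": ["background.js"],
    "persistent": false
  },
  "permissions": ["storage"]
}
```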
If you want to support multiple languages, read more here.
Imposter syndrome is something we all struggle with to one degree or another. Imposter syndrome is the fear of being exposed as a fraud. If you're anything like me, you have felt like your work was not good enough to show, or that you weren't far enough along in your journey as a developer to have much to contribute.
After learning about Hacktoberfest last year, I wanted to contribute. But I felt overwhelmed, and imposter syndrome began to take hold.
I told myself I was too inexperienced as a developer, and I worried that my commits wouldn't be worthwhile. Unfortunately, I let those fears get the better of me, and I didn't even bother signing up.
This year I forced myself to set my fears aside, studied this post on Hacktoberfest, and dove in. I'm going to share a little of what I worked on and the benefits of getting involved. Benefits that go far beyond getting a shirt, and can be had 12 months out of the year! (Image: Twilio)
I began on October 11th. I was starting at a slight disadvantage already being a third of the way through the month.
The time crunch motivated me. I decided I would try to submit a pull request every Friday, and once during the week, for the rest of the month. Setting a schedule was important. I focused on pull requests two or three days out of the week and tried not to stress the rest of the time. Regardless of how ambitious your goal is, five pull requests in a month or five pull requests in a week, it's important to have a plan.
The pull request was easy: I didn't fork or clone the freeCodeCamp repository; I opened it right on the GitHub page.
Boom: first pull request opened.
I didn't want all five pull requests to come from one repository (although there is nothing wrong with that). After a few pull requests on freeCodeCamp, I started venturing out and exploring GitHub....
Resource generated at IndyWatch using aliasfeed and rawdog