|IndyWatch Education Feed Archiver|
IndyWatch Education Feed was generated at Community Resources IndyWatch.
I have spent much of my career as a graduate student researcher, and now as a data scientist in industry. One thing I have come to realize is that the vast majority of solutions proposed both in academic research papers and in the workplace are just not meant to ship. They just don't scale!
And when I say they don't scale, I mean that some of these approaches either work only on extremely narrow use cases or have a tough time generating results in a timely manner.
More often than not, the problem lies in the approach that was used, although when things go wrong, we tend to declare the problem unsolvable. Remember, there will almost always be more than one way to solve a Natural Language Processing (NLP) or Data Science problem. Optimizing your choices will increase your chance of success in deploying your models to production.
Over the past decade I have shipped solutions that serve real users. From this experience, I now follow a set of best practices that maximizes my chance of success every time I start a new NLP project.
In this article, I will share some of these with you. I swear by these principles, and I hope they come in handy for you as well.
KISS (Keep it simple, stupid). When solving NLP problems, this seems like common sense.
But I can't say this enough: choose techniques and pipelines that are easy to understand and maintain. Avoid complex ones that only you understand, sometimes only partially.
In a lot of NLP applications, you would typically notice one of two things:
The first question to ask yourself is whether you need all those layers of pre-processing.
Do you really need part-of-speech tagging, chunking, entity resolution, lemmatization, and so on? What if you strip out a few layers? How does this affect the performance of your models?
With access to massive amounts of data, you can often let the evidence in the data guide your model.
I'm currently a developer for Blueprint, an organization at UC Berkeley. We develop software pro bono for non-profits and advance technology for social good. This past year, my team worked on building a solution for The Dream Project. The goal is to provide a better education for children in the Dominican Republic.
I am writing this post in hopes of sharing my experience of doing Salesforce integration using Heroku Connect for a Ruby on Rails/React Native app.
Heroku Connect makes it easy for you to build Heroku apps that share data with your Salesforce deployment. Using bi-directional synchronization between Salesforce and Heroku Postgres, Heroku Connect unifies the data in your Postgres database with the contacts, accounts and other custom objects in the Salesforce database.
Another way to put it:
Heroku Connect essentially lets your app's Postgres database stand in for a Salesforce database. Of course, since Rails apps connect natively to Postgres, you cannot make immediate calls and pushes to Salesforce without an API client like the Restforce gem.
In reality, the Postgres database that our Rails app interacts with serves as a front for Salesforce. It is a working middleman. All data must go through it before reaching Salesforce, and vice versa. Heroku Connect is the bridge that combines the capabilities of Force.com and Heroku without the need for a gem.
You might ask: why even bother learning Salesforce integration?
Well, Salesforce integration streamlines the process of storing and retrieving data, especially customer data for businesses.
You can provide your customers with modernized computer systems for an improved user experience and workflow, and you'll accelerate app development. It also creates better tools for informing management and sales about business performance.
This helps businesses achieve efficient levels of operation for business-to-consumer apps through instantaneous and accurate updates.
To give some background for the code snippets in the tutorial below, I will first explain the project I was working on. This project is what introduced me to Heroku Connect.
Previously, Dream recorded student information in a Salesforce database. This was not ideal for teachers to use. To make their lives easier, we created a user-friendly app. The app handled course creation, s...
TL;DR: Hooks have learned from the trade-offs of mixins, higher-order components, and render props to bring us new ways to create contained, composable behaviors that can be consumed in a flat and declarative manner.
However, hooks come with their own price. They are not a silver-bullet solution. Sometimes you need hierarchy. So let's take a closer look.
React Hooks are here, and I immediately fell in love with them. To understand why Hooks are great, I think it helps to look at how we've been solving a common problem throughout React's history.
Here's the situation: you have to show the user's mouse position. Well, that was easy!
We're engineers, though, and we have a ton of tools to help us break this pattern out. Let's review some of the ways we've historically done it and their trade-offs.
Mixins get a lot of flak. They set the stage for grouping together lifecycle hooks to describe one effect.
While the general idea of encapsulating logic is great, we ended up learning some serious lessons from mixins.
It's not obvious where this.state.x is coming from. With mixins, it's also possible for the mixin to blindly rely on a property existing in the component.
That becomes a huge problem as people start including and extending tons of mixins. You can't simply search a single file and assume you haven't broken something somewhere else.
Refactoring needs to be easy. It needs to be more obvious that these mixed-in behaviors don't belong to the component. They shouldn't be using the internals of the...
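That indirection can be sketched in plain JavaScript. To be clear, this is an illustrative toy using Object.assign, not React's actual createClass mixin machinery; the point is only that the component reads state it never declared:

```javascript
// A plain-JS sketch of the mixin indirection problem (not real React code).
// The mixin silently writes state the component never declared.
const MouseMixin = {
  // Assumes the host object has a `state` property -- nothing enforces this.
  handleMouseMove(event) {
    this.state.x = event.clientX;
    this.state.y = event.clientY;
  },
};

class MouseTracker {
  constructor() {
    this.state = {}; // x and y never appear here...
    Object.assign(this, MouseMixin); // ...they come from the mixin.
  }
  render() {
    // Reading this.state.x works, but nothing in this class explains where it came from.
    return `Mouse at (${this.state.x}, ${this.state.y})`;
  }
}

const tracker = new MouseTracker();
tracker.handleMouseMove({ clientX: 10, clientY: 20 });
console.log(tracker.render()); // "Mouse at (10, 20)"
```

Delete the mixin (or rename its state fields) and render breaks, yet no search inside MouseTracker alone reveals the dependency.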
When I started out writing tests for my React application, it took me a few tries before I figured out how to set up my testing environment using Jest and Enzyme. This tutorial assumes that you already have a React application set up with webpack and babel. We'll continue from there.
This is part of a series of articles I have written about how to set up a React application for production, the right and easy way.
Before we begin, if at any time you feel stuck please feel free to check the code repository. PRs are most welcome if you feel things can be improved.
You need to have Node installed in order to use npm (node package manager).
First things first: create a folder called app, then open up your terminal, go into that app folder, and type:
npm init -y
This will create a package.json file for you. In your package.json file, add the following: https://medium.com/media/338c71a56bc2d93bd191c44883664365/href
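The linked embed contains the author's exact file. As a rough, illustrative sketch of the shape a Jest-oriented package.json can take (the package names and version ranges here are my assumptions, not the author's actual choices):

```json
{
  "name": "app",
  "version": "1.0.0",
  "scripts": {
    "test": "jest",
    "test:watch": "jest --watch"
  },
  "devDependencies": {
    "jest": "^24.0.0",
    "babel-jest": "^24.0.0",
    "enzyme": "^3.0.0"
  }
}
```

The "scripts" entries are what let you run npm test from the terminal; the devDependencies are explained one by one below.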
Second, create a folder called src in your app folder. The app/src/ folder is where all of your React code, along with its tests, will reside. But before that, let's understand why we did what we did in our package.json file.
I'll talk about the scripts in a bit (promise). But first, let's learn why we need the following dependencies. I want you to know what goes inside your package.json. So let's start.
@babel/core: We generally use webpack to compile our React code, and Babel is a major dependency that helps tell webpack how to compile it. This is a peer dependency for u...
You've followed your first React.js tutorial and you're feeling great. Now what? In the following article, I'm going to discuss 5 concepts that will bring your React skills and knowledge to the next level.
If you're completely new to React, take some time to complete this tutorial and come back after!
By far the most important concept on this list is understanding the component lifecycle. The component lifecycle is exactly what it sounds like: it details the life of a component. Like us, components are born, do some things during their time here on earth, and then they die.
But unlike us, the life stages of a component are a little different. Here's what it looks like (diagram from the original post):
Let's break this image down. Each colored horizontal rectangle represents a lifecycle method (except for "React updates DOM and refs"). The columns represent different stages in the component's life.
A component can only be in one stage at a time. It starts with mounting and moves on to updating. It stays in the updating stage until it gets removed from the virtual DOM. Then it goes into the unmounting phase and gets removed from the DOM.
The lifecycle methods allow us to run code at specific points in the component's life, or in response to changes in the component's life.
Let's go through each stage of the component's life and the associated methods.
Since class-based components are classes (hence the name), the first method that runs is the constructor. Typically, the constructor is where you initialize component state.
Next, the component runs getDerivedStateFromProps. I'm going to skip this method since it has limited use.
Now we come to the render method, which returns your JSX. React then mounts the component onto the DOM.
Lastly, the componentDidMount method runs. This is where you would make asynchronous calls to databases or manipulate the DOM directly if you need to. Just like that, our component is born.
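The mounting order described above can be simulated in plain JavaScript. This is a toy harness, not real React: the simulateMount helper is invented for illustration, while constructor, render, and componentDidMount are the real lifecycle method names:

```javascript
// A toy simulation of React's mounting order -- not real React.
const calls = [];

class UserProfile {
  constructor() {
    calls.push('constructor'); // 1. initialize state here
    this.state = { user: null };
  }
  render() {
    calls.push('render'); // 2. describe what to show (JSX in real React)
    return this.state.user ? this.state.user : 'loading...';
  }
  componentDidMount() {
    calls.push('componentDidMount'); // 3. safe place for async calls / DOM work
    this.state = { user: 'Ada' };
  }
}

// Invented harness: real React calls these for you, in this order, on mount.
function simulateMount(ComponentClass) {
  const instance = new ComponentClass();
  instance.render();
  instance.componentDidMount();
  return instance;
}

simulateMount(UserProfile);
console.log(calls); // [ 'constructor', 'render', 'componentDidMount' ]
```

Note that render runs before componentDidMount, which is exactly why data fetched in componentDidMount needs a "loading" state for the first render.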
This phase is triggered every ti...
It's 2018 and I just wrote a title that contains the words "serverless server." Life has no meaning.
Despite that utterly contradictory headline, in this article we're going to explore a pretty nifty way to exploit SendGrid's template functionality, using Timer Triggers in Azure Functions, to send out scheduled tabular reports. We are doing this because that's what everyone wants in their inbox: a report. With numbers in it. And preferably some acronyms.
First, let's straw-man this project with a contrived application that looks sufficiently boring to warrant a report. I have just the thing: a site where we can adjust inventory levels. The word "inventory" is just begging for a report.
This application allows you to adjust the inventory quantity (last column). Let's say that an executive somewhere has requested that we email them a report every night containing a list of every SKU altered in the last 24 hours. Because of course they would ask for that. In fact, I could swear I've built this report in real life in a past job. Or there's a glitch in the matrix. Either way, we're doing this.
Here is what we're going to be building.
Normally the way you would build this is with some sort of report server: something like SQL Server Reporting Services or Business Objects or whatever other report servers are out there. Honestly, I don't want to know. But if you don't have a report server, this gets kind of tedious.
Let's go over what you have to do to make this happen.
This is the kind of thing that nobody wants to do. But I think this project can be a lot of fun, and we can use some interesting technology to pull it off. Starting with Serverless.
One-off jobs like this are a really good use case for serverless. In this case, we can use Azure Functions to create a Timer Trigger function.
To do that, I'm going to use the Azure Funct...
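A minimal sketch of what such a Timer Trigger handler can look like in JavaScript, under the usual Azure Functions (context, myTimer) programming model. The queryAlteredSkus helper and the report rows are invented placeholders, and the nightly schedule would live in the function's function.json binding, not in this file:

```javascript
// Invented placeholder standing in for the real inventory query:
// "every SKU altered in the last 24 hours".
async function queryAlteredSkus() {
  return [
    { sku: 'SKU-001', delta: -3 },
    { sku: 'SKU-042', delta: 10 },
  ];
}

// The (context, myTimer) signature follows the Azure Functions JavaScript model.
// The schedule is configured in function.json, e.g. "schedule": "0 0 2 * * *"
// (an NCRONTAB expression for 02:00 every night).
async function dailyInventoryReport(context, myTimer) {
  const rows = await queryAlteredSkus();
  context.log(`Report: ${rows.length} SKUs altered in the last 24 hours`);
  // In the full project, this is where the rows would become the template
  // data handed to SendGrid for the emailed report.
  return rows;
}

module.exports = dailyInventoryReport;
```

Because the handler is just an exported async function, it can be exercised locally by calling it with a stubbed context object.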
A good logging mechanism helps us in our time of need.
When we're handling a production failure or trying to understand an unexpected response, logs can be our best friend or our worst enemy.
Their importance to our ability to handle failures is enormous. Yet in our day-to-day work, when we design a new production service or feature, we sometimes overlook that importance and neglect to give logs proper attention.
When I started developing, I made a few logging mistakes that cost me many sleepless nights. Now, I know better, and I can share with you a few practices I've learned over the years.
When developing on our local machine, we usually don't mind using a file handler for logging. Our local disk is quite large, and the number of log entries being written is very small.
That is not the case on our production machines. Their local disks usually have limited free space, and over time that space won't be able to hold the log entries of a production service. Using a plain file handler will therefore eventually result in losing all new log entries.
If you want your logs to be available on the service's local disk, don't forget to use a rotating file handler. This caps the maximum space your logs will consume: the rotating file handler overwrites old log entries to make space for new ones.
Our production service is usually spread across multiple machines, so searching for a specific log entry requires investigating all of them. When we're in a hurry to fix our service, there's no time to waste trying to figure out where exactly the error occurred.
Instead of saving logs on the local disk, stream them into a centralized logging system. This allows you to search all of them at the same time.
If you're using AWS or GCP, you can use their logging agent. The agent will take care of streaming the logs into their logging search engine.
There is a thin line between too few and too many logs. In my opinion, log entries should be meaningful and serve only the purpose of investigating issues in our production environment. When you're about to add a new log entry, think about how you will use it in the future. Try to answer this question: what information does the log message provide the developer who wil...
I was a lone software developer. When I was in college, I attended the KDE conference. It was my first encounter with the open source world. At the conference, I thought the presenters and the people raising hands were very smart. I knew there was free software available, created by the community for the community. But the developers that build it were foreign to me.
I thought really cool, intelligent people developed this software, and that you had to be really smart and privileged to join them.
I tried to participate in Google Summer of Code (GSoC) twice during college, but wasn't successful. Then, after graduation, I used lots of open source projects at my job. I even used them when freelancing. I heavily relied on community-developed tools and tech. I was really fascinated by people's stories of how they started contributing to open source, and how they got their amazing remote jobs!
Now, after procrastinating for another two months and not being able to land a remote job, I decided to do it once and for all and contribute myself.
I started uploading my code to GitHub whenever I wrote anything new. I created an open source NPM module, along with some other demo projects, and uploaded them. But this wasn't the gist of open source. I wasn't actually contributing to other repos or working with other developers to create software. I was still working in isolation.
Then it came: I stumbled upon Hacktoberfest. They (DigitalOcean, GitHub, and Twilio) were giving away a free t-shirt if you submitted 5 pull requests to open source projects on GitHub in October. Even if your PR was not merged, it still counted towards your progress. And this time they had a ton of t-shirts, so it was easy to get one. This was the final push I needed. Apparently, a free t-shirt gives you an amazing boost!
And thus I started my journey into the OPEN SOURCE WORLD.
I searched GitHub for open source projects to tackle. I wanted some easy tasks to quickly get familiar with the PR process, so I looked for issues that did not require me to dive into the whole source code.
There were many developers who had started projects for Hacktoberfest and newcomers. It was easy to submit PRs to these repos, so I submitted three. I submitted my other two PRs to other people's personal projects. There were many other repos where you just had to add your name to t...
Fatimat Gbajabiamila talks about challenging stereotypes, her love for pair programming, and why she's committed to giving back.
Rebecca: Fatimat, thank you so much for making time to chat with me. How was your first week on the job?
Fatimat: To be honest, it's hard to believe I'm getting paid to code. It's been quite the journey.
Rebecca: A journey you've undertaken without a university degree, as I understand.
Fatimat: Yeah, I left school after finishing my A levels, where I studied economics, business, and maths.
Rebecca: What were you doing before you heard about Founders and Coders?
Fatimat: I was working with a charity called Futureversity as a project coordinator, organizing summer programmes for young people and recruiting volunteers. I loved my colleagues, but I knew it wasn't something I wanted to do as a career.
Rebecca: How did you figure out you wanted to pursue a career as a software developer?
Fatimat: Well, when I was younger, I had my heart set on pursuing a career in business and finance. I remember visiting Bloomberg on a trip as part of the Brokerage Citylink programme during Year 10 and deciding right then and there that I wanted to work there one day.
Rebecca: Really? Right then and there, as a teenager? It's hard for me to imagine a fifteen-year-old falling in love with financial services.
Fatimat: Honestly, I think the host just did a fantastic job of selling the company to us and inspiring us to aim high. They took us into the newsroom, where the trading numbers lined the walls, and then to an office, which was full of gadgets. That visit probably influenced my decision to study maths and economics, as I wanted to learn accounting so I could work for them.
Anyway, you grow up, life happens, dreams change, and one day, like me, you start to ask yourself important questions like, "What am I going to do with my life?" I was 21 and had no idea what I wanted t...
Chrome + Box = Chrome Web Store
We all like surfing the web. And we all like things to be at the touch of our fingertips. Why not create something that caters to these two absolute truths?
In this article, I'll explain the building blocks of a Chrome extension. Afterwards, you'll just have to think of a good idea as an excuse to make one.
Chrome is by far the most popular web browser; some estimates put its market share as high as 59%. So if you want to reach as many people as you can, developing a Chrome extension is the best way to do it.
To be able to publish a Chrome extension, you need to have a developer account which entails a $5 one-time signup fee.
Each Chrome extension should have these components: the manifest file, popup.html, popup.js, and a background script. Let's take a look at them.
Google uses this file to acquire details about your extension when you publish it. There are required, recommended, and optional fields.
If you want to support multiple languages, read more here.
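For orientation, here is a minimal manifest.json along the lines the article describes. This is a hedged Manifest V2 sketch (the format current when this post was written); your extension's actual name, scripts, and permissions will differ:

```json
{
  "manifest_version": 2,
  "name": "My Extension",
  "version": "1.0",
  "description": "A short description shown in the Chrome Web Store.",
  "browser_action": {
    "default_popup": "popup.html"
  },
  "background": {
    "scripts": ["background.js"],
    "persistent": false
  },
  "permissions": ["storage"]
}
```

manifest_version, name, and version are required; the browser_action block wires up the popup.html/popup.js pair, and the background block registers the background script.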
Imposter syndrome is something we all struggle with to one degree or another: the fear of being exposed as a fraud. If you're anything like me, you have felt that your work was not good enough to show, or that you weren't far enough along in your journey as a developer to have much to contribute.
After learning about Hacktoberfest last year, I wanted to contribute. But I felt overwhelmed, and imposter syndrome began to take hold.
I told myself I was too inexperienced as a developer, and I worried that my commits wouldn't be worthwhile. Unfortunately, I let those fears get the better of me, and I didn't even bother signing up.
This year I forced myself to set my fears aside, studied this post on Hacktoberfest, and dove in. I'm going to share a little of what I worked on and the benefits of getting involved. Benefits that go far beyond getting a shirt, and can be had 12 months out of the year! (Image: Twilio)
I began on October 11th. I was starting at a slight disadvantage already being a third of the way through the month.
The time crunch motivated me. I decided I would try to submit a pull request every Friday, and once during the week, for the rest of the month. Setting a schedule was important. I focused on pull requests two or three days out of the week and tried not to stress the rest of the time. Regardless of how ambitious your goal is, five pull requests in a month or five in a week, it's important to have a plan.
The pull request was easy. I didn't fork or clone the freeCodeCamp repository; I opened it right on the GitHub page.
Boom, first pull request opened.
I didn't want all five pull requests to come from one repository (although there is nothing wrong with that). After a few pull requests on freeCodeCamp, I started venturing out and exploring GitHub....
As our Is Youth Work Dead series continues, Kerry Jenkins gives us an update on the situation in Birmingham.
As the biggest local authority, Birmingham is proud to say it still has a youth service! It is provided by a dedicated in-house team, often working in partnership with voluntary youth sector organisations to deliver youth work programmes across the second city.
Birmingham's youth service has faced its fair share of cuts, the biggest of these in 2011, when its budget was reduced by around 70%; it now has a budget of just £1.6m. In 2009/10 we had 140 full-time equivalents; we now have just 53, delivering from 16 youth centres across Brum, the youngest city in Europe.
Year on year, the youth service has faced uncertain, stressful and challenging times, but it has continued to deliver programmes and youth work to Brum's young people. It still provides open access youth work. It has had to be creative and it has had to find income: it brings in £1.4m and delivers on the Youth Employment Initiative, the Prevent agenda, and sexual health. It works in partnership with the voluntary sec...