Ask HN: Better ways to extract skills from job postings?
8 points by azeusCC 7 days ago | 9 comments
Hi HN,
I’m building a job aggregator with a live data platform that provides in-depth market analysis. I’m currently focused on improving how I extract skills from job postings. While my current extraction setup achieves ~90% accuracy, it struggles with edge cases and lacks flexibility, particularly when skills are phrased in unexpected ways.
1. The Problem
1.1: Lack of flexibility: The system only captures predefined phrases. If a job post says something like "proficiency in spreadsheets" or "experience with advanced reporting tools", it misses that Excel is likely required.
1.2: Manual maintenance: Constantly updating JSON files to account for new variations is tedious and unsustainable as the project grows.
2. Current Setup
2.1: Keyword-based extraction: I maintain a JSON file with predefined skill variations. Example:
"programming_languages": {
"JavaScript": ["javascript", "js" ...],
...
2.2: spaCy PhraseMatcher: I use PhraseMatcher and Matcher for efficient, rule-based extraction.
3. Constraints
3.1: Lightweight: I’m avoiding heavy ML models or resource-intensive pipelines to keep server costs low.
3.2: Flexible: I need a solution that better handles synonyms, context, and unexpected phrasing with minimal manual input.
3.3: Free or open-source: Ideally, something I can plug into my existing server setup without added costs.
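To make 2.2 concrete, here's roughly what the matching looks like (a trimmed sketch; the variant lists are placeholders for what lives in the JSON file):

```python
import spacy
from spacy.matcher import PhraseMatcher

# Blank pipeline: tokenizer only, no heavy models (constraint 3.1)
nlp = spacy.blank("en")
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")  # case-insensitive matching

# Illustrative skill variants, normally loaded from the JSON file
skills = {
    "JavaScript": ["javascript", "js", "java script"],
    "Excel": ["excel", "microsoft excel", "ms excel"],
}
for canonical, variants in skills.items():
    matcher.add(canonical, [nlp.make_doc(v) for v in variants])

def extract_skills(text):
    doc = nlp(text)
    return {nlp.vocab.strings[match_id] for match_id, start, end in matcher(doc)}

print(extract_skills("We need JS and MS Excel experience"))  # {'JavaScript', 'Excel'}
```

This is exactly where the brittleness shows up: anything not in the variant lists is invisible to the matcher.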
4. My Questions
4.1: How can I improve this process to make it more robust and context-aware?
4.2: Are there lightweight tools, heuristics, or libraries you’d recommend for handling variations and semantic similarity?
4.3: Would pre-trained embeddings (e.g., GloVe, FastText) or other lightweight NLP methods help here?
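To make 4.2 concrete, here's the kind of lightweight heuristic I have in mind: character n-gram TF-IDF similarity against canonical skill names, which catches spelling and spacing variants without any model downloads (a sketch only; the skill list and threshold are placeholders):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Canonical skill names (illustrative subset)
skills = ["JavaScript", "PostgreSQL", "Microsoft Excel", "Kubernetes"]

# Character 3-grams are robust to spacing and small spelling variations
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 3))
skill_matrix = vec.fit_transform(skills)

def best_skill(phrase, threshold=0.3):
    sims = cosine_similarity(vec.transform([phrase]), skill_matrix)[0]
    i = sims.argmax()
    return skills[i] if sims[i] >= threshold else None

print(best_skill("java script"))   # JavaScript
print(best_skill("postgres sql"))  # PostgreSQL
```

This handles surface variation, but not the semantic cases ("spreadsheets" → Excel), which is why I'm asking about embeddings in 4.3.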
I’d love to hear from anyone who’s tackled similar challenges in NLP or information extraction. Any suggestions on balancing accuracy, flexibility, and computational efficiency would be greatly appreciated!
If anyone is interested in what my current market analysis looks like, here’s a link: https://careercode.it/market
PaulHoule 7 days ago | next |
The #1 thing you need to think about is training data. (Going forward you are going to do a huge amount of manual work, like it or not; the important thing is that you do it efficiently.)
My take is that the PhraseMatcher is about the best thing you will find in spaCy, and I don't think any of the old-style embeddings will really help you. (Word vectors are a complete waste of time! Don't believe me? Fire up scikit-learn and see if you can train a classifier that identifies color words or emotionally charged words: related words are closer in the embedding space than chance, but that doesn't mean you've got useful knowledge there.)
Look to the smallest and most efficient side of LLMs, maybe even BERT-based models. I do a lot of simple clustering and classification tasks with these sorts of models:
https://sbert.net/
I can train a calibrated model on 10,000 or so documents in three minutes using stuff from sk-learn.
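A self-contained sketch of that kind of pipeline (TF-IDF features stand in for the sentence embeddings, purely to keep the example dependency-light; the training phrases and labels are toy data):

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data: phrase -> canonical skill label
texts = [
    "proficiency in spreadsheets", "advanced excel formulas",
    "pivot tables and vlookup", "excel reporting",
    "frontend javascript development", "react and js experience",
    "node.js backend work", "modern javascript frameworks",
]
labels = ["Excel"] * 4 + ["JavaScript"] * 4

# LinearSVC gives no probabilities; calibration wraps it to produce them
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    CalibratedClassifierCV(LinearSVC(), cv=2),
)
clf.fit(texts, labels)

print(clf.predict(["strong excel skills"])[0])
```

The calibration step is what lets you set a confidence threshold and send low-confidence phrases to a manual review queue, which is where that efficient manual labeling effort goes.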
Another approach is to treat the problem as segmentation: pick out a list of phrases like "proficiency in spreadsheets", then feed those phrases through a classifier that turns them into "Excel". Personally I'm interested in running something like BERT first and then training an RNN to light up phrases of that sort or do other classification. The BERT models don't have a good grasp of word order in the document, but the RNN fills that gap.
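A crude first cut at that segment-then-classify split, before reaching for BERT or an RNN (the trigger patterns and the phrase-to-skill table are made up for illustration):

```python
import re

# Stage 1: trigger patterns that introduce a skill phrase (illustrative, not exhaustive)
TRIGGERS = re.compile(
    r"(?:proficiency in|experience with|knowledge of)\s+([a-z ]+?)(?=[,.;]|$)",
    re.IGNORECASE,
)

# Stage 2: map extracted phrases to canonical skills
# (in practice this lookup is where a trained classifier would go)
PHRASE_TO_SKILL = {
    "spreadsheets": "Excel",
    "advanced reporting tools": "Excel",
}

def extract(text):
    skills = []
    for m in TRIGGERS.finditer(text):
        phrase = m.group(1).strip().lower()
        if phrase in PHRASE_TO_SKILL:
            skills.append(PHRASE_TO_SKILL[phrase])
    return skills

print(extract("We require proficiency in spreadsheets."))  # ['Excel']
```

The regex stage is the part a sequence model would eventually replace; the point is that segmentation and normalization are separate problems with separate failure modes.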