If you’ve ever tried to make sense of a giant wall of JSON in your terminal, you know it’s not fun. Whether you're debugging an API response or parsing logs, raw JSON is a pain to read and work with.
That’s where jq comes in. It’s like sed or awk, but built for JSON, and once you get the hang of it, it’s a game changer.
In this article, we’ll walk through seven practical jq techniques you can start using right away to slice, reshape, and clean up your data without leaving the terminal.
Prerequisites: Basic command-line comfort and jq installed on your system.
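If you want to confirm jq is available before starting, a quick check (the exact version string will vary by install):
jq --version
# jq-1.7.1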
We’ll cover how to:
- Filter deeply nested values
- Work with arrays — extract, map, and flatten them
- Merge and sort objects
- Handle missing or null data safely
Let’s dive in.
1. Filter Deeply Nested Values
Most real-world JSON isn't flat. You often need to reach deep inside nested objects to grab the exact value you care about. Take this data.json as an example:
{
  "user": {
    "profile": {
      "contact": {
        "email": "[email protected]"
      }
    }
  }
}
To pull out the email, you have to chain properties with dots:
jq '.user.profile.contact.email' data.json
You’ll get:
"[email protected]"
But maybe you're piping this into something else and don’t want the quotes. You can use -r for raw output:
jq -r '.user.profile.contact.email' data.json
And what if the path doesn't hold what you expect? A missing key is harmless on its own: jq just returns null for it. The real risk is a value that isn't an object, for example if .contact turns out to be a string; plain indexing then fails with a "Cannot index" error. Append ? to make the lookup optional:
jq -r '.user.profile.contact.email?' data.json
This way, your script won't blow up on malformed data; jq simply outputs nothing instead of erroring.
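A quick way to see the difference, feeding jq inline with echo (the string "n/a" here is just a stand-in for unexpected data):
echo '{"user": {"profile": {"contact": "n/a"}}}' | jq '.user.profile.contact.email'
# jq: error: Cannot index string with "email"
echo '{"user": {"profile": {"contact": "n/a"}}}' | jq '.user.profile.contact.email?'
# no output at all, and jq exits cleanly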
2. Extract Array Elements Cleanly
Arrays are everywhere in JSON. For example, you may want to list users, results, items, and so on. jq makes it simple to pull out values. Say you are given an array of users:
{
  "users": ["Alice", "Bob", "Clara"]
}
To explode that list into one value per line, run:
jq -r '.users[]' data.json
Output:
Alice
Bob
Clara
That [] unpacks the array. Without -r, you'd get each value wrapped in quotes, which is often annoying.
Now let’s level up. Suppose you have the following:
{
  "users": [
    { "name": "Alice", "id": 1 },
    { "name": "Bob", "id": 2 }
  ]
}
And you just want the names:
jq -r '.users[] | .name' data.json
The .array[] | .field pattern is your friend.
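If you need more than one field per line, you can collect the fields into an array and hand it to a formatter. Here's a small sketch using jq's built-in @tsv filter for tab-separated output:
jq -r '.users[] | [.name, .id] | @tsv' data.json
# Alice	1
# Bob	2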
3. Transform Array Values with map
Sometimes you don’t just want the data; you want to shape it. map lets you apply logic to every array item in one shot.
For example, say you have an array of scores:
{
  "scores": [10, 20, 30]
}
You can double every score:
jq '.scores | map(. * 2)' data.json
Output:
[20, 40, 60]
Or, uppercase all names:
{
  "names": ["alice", "bob", "clara"]
}
jq '.names | map(ascii_upcase)' data.json
Output:
["ALICE", "BOB", "CLARA"]
You can get fancier, like extracting and transforming a field from a list of objects:
jq '[.users[] | .name | ascii_downcase]' data.json
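With the users array from the previous section, that returns ["alice", "bob"]; the outer brackets collect the piped stream back into an array. An equivalent spelling with map:
jq '.users | map(.name | ascii_downcase)' data.json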
4. Merge Objects into One
Let’s say you get separate JSON objects that you want to combine into one, maybe from an API response or from multiple sources.
For example:
[
  { "name": "Alice" },
  { "age": 30 },
  { "email": "[email protected]" }
]
You can merge them this way:
jq 'add' data.json
Output:
{
  "name": "Alice",
  "age": 30,
  "email": "[email protected]"
}
It works on arrays of objects — not just two, but any number.
Bonus: Want to merge two specific objects in a bigger structure? You can do:
jq '.[0] + .[1]' data.json
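One thing to keep in mind: + performs a shallow merge, and when both objects define the same key, the right-hand value wins. You can check this with jq -n, which runs a filter without reading any input (-c just compacts the output):
jq -cn '{"a": 1, "b": 2} + {"b": 3}'
# {"a":1,"b":3}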
5. Sort Arrays and Objects by Fields
Raw data is rarely in the order you want. jq helps sort it — alphabetically, numerically, or by a specific field.
Say you have the following simple array:
{
  "prices": [15, 5, 10]
}
You can sort it with the command below:
jq '.prices | sort' data.json
Output:
[5, 10, 15]
Now with objects:
{
  "items": [
    { "price": 15 },
    { "price": 5 },
    { "price": 10 }
  ]
}
You can sort those items by price:
jq '.items | sort_by(.price)' data.json
Output:
[
  { "price": 5 },
  { "price": 10 },
  { "price": 15 }
]
You can also sort by strings:
jq '.users | sort_by(.name)' data.json
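sort_by always sorts ascending; for descending order, pipe through reverse. For example, to list the items above from most to least expensive:
jq '.items | sort_by(.price) | reverse' data.json
# prices now come out as 15, 10, 5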
6. Flatten Nested Arrays
Have you ever ended up with arrays inside arrays? Like the structure below:
[[1, 2], [3, 4], [5]]
One command flattens it:
jq 'flatten' data.json
Output:
[1, 2, 3, 4, 5]
You can even control the depth: flatten(1) flattens one level only, while the default flattens completely. This is useful when you’re chaining multiple filters that return arrays, and end up with [[], [], []].
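To see the depth argument in action (again feeding jq inline with echo, with -c for compact output):
echo '[[1, [2]], [3]]' | jq -c 'flatten(1)'
# [1,[2],3]
echo '[[1, [2]], [3]]' | jq -c 'flatten'
# [1,2,3]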
7. Handle Null and Missing Values Safely
APIs and real-world JSON often come with surprises: nulls, missing keys, and values of unexpected types. Without guardrails, a jq filter can exit mid-pipeline with an error.
Example:
{ "user": { "name": "Alice" } }
If you ask for .user.email:
jq '.user.email' data.json
you simply get null, because jq treats a missing key on an object as null. The real trouble is a value that isn't an object at all, say .user is a plain string; then the same lookup fails with a "Cannot index" error. Adding a ? suppresses that error:
jq '.user.email?' data.json
Output (for the object above):
null
Even better, you can define fallback values with the // operator, which substitutes the right-hand value whenever the left side produces null or false:
jq '.user.email // "unknown"' data.json
Now, it gives:
"unknown"
Same idea with arrays:
{ "items": [] }
jq '.items[]? // "none"' data.json
Here []? keeps jq from erroring when .items is missing or null, and // "none" supplies a fallback when the iteration produces nothing, so this prints "none" for the empty array above.
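The two guards also combine. Here's a quick check against input where .user is unexpectedly a string rather than an object:
echo '{"user": "guest"}' | jq '.user.email? // "unknown"'
# "unknown"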
Integrating jq in Workflows
Once you get comfortable querying JSON with jq, the next step is wiring it into your actual workflow — whether that’s a one-off shell command, a data processing script, or something running in CI/CD.
1. Combining jq with curl
Most APIs return JSON. If you’re using curl to hit an endpoint, you can pipe the response directly into jq.
Example: Get your IP from an API and extract just the IP address.
curl -s https://api.ipify.org?format=json | jq -r '.ip'
- -s makes curl silent (no progress bar).
- -r gives you the raw IP without quotes.
Here’s a more complex example with the GitHub API:
curl -s https://api.github.com/repos/stedolan/jq | jq -r '.full_name, .stargazers_count'
Output:
stedolan/jq
23456
This is powerful because you can call an API and immediately slice the JSON to exactly what you need.
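You can also reshape the response into a fresh object instead of printing bare values, which is handy when the result feeds another tool (the star count shown is illustrative):
curl -s https://api.github.com/repos/stedolan/jq | jq -c '{name: .full_name, stars: .stargazers_count}'
# {"name":"stedolan/jq","stars":23456}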
2. Combining jq with grep, awk, etc.
jq works great when you have structured JSON. But sometimes you get a messy mix — logs, text files, or partial data.
Let’s say you're tailing a log file that includes JSON blobs:
tail -f logs.txt | grep 'json=' | awk -F'json=' '{print $2}' | jq '.status'
This does the following:
- tail -f watches the log file live.
- grep 'json=' filters lines that include JSON.
- awk strips everything before the JSON.
- jq extracts just the status field.
This kind of chaining is common when you’re scraping, monitoring, or debugging in production.
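One fragile spot in that chain: a single malformed blob makes jq die with a parse error, which ends the tail. A more forgiving variant reads each extracted line as a raw string (-R) and uses fromjson? to skip anything that doesn't parse:
tail -f logs.txt | grep --line-buffered 'json=' | awk -F'json=' '{print $2}' | jq -R 'fromjson? | .status'
The --line-buffered flag makes GNU grep flush matches immediately, which matters when you're tailing live output.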
3. Using jq in Shell Scripts
You can use jq just like any other tool inside a shell script. Here’s a basic example that hits an API, checks a value, and makes a decision.
#!/bin/bash
REPO="octocat/Hello-World"
STARS=$(curl -s "https://api.github.com/repos/$REPO" | jq '.stargazers_count')
if [ "$STARS" -gt 1000 ]; then
echo "🔥 This repo is hot!"
else
echo "🧊 Needs more stars."
fi
Run with:
bash check-stars.sh
This is especially useful for automating checks, generating reports, or triggering actions based on live API data.
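One caveat: if the request fails or the field is missing, $STARS ends up as null or empty and the numeric test errors out. A small hardening, reusing the // fallback from earlier:
STARS=$(curl -s "https://api.github.com/repos/$REPO" | jq '.stargazers_count // 0')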
4. Using jq in CI/CD Pipelines
CI/CD platforms like GitHub Actions, GitLab CI, CircleCI, etc., often use JSON for API responses, test outputs, or config files.
Example: In a GitHub Actions step, you might check a deployment status:
- name: Check deployment status
  run: |
    STATUS=$(curl -s "$API_URL" | jq -r '.deployment.status')
    if [ "$STATUS" != "success" ]; then
      echo "Deployment failed"
      exit 1
    fi
Here, jq acts as a gatekeeper: if the deployment JSON doesn’t look right, the pipeline fails fast.
You can also parse tool output directly. Some tools (like terraform, aws, and docker inspect) support JSON output, which is perfect for jq.
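For instance, docker inspect emits a JSON array with one entry per object inspected, so you can pull a container's state in one line (my-container is a placeholder name):
docker inspect my-container | jq -r '.[0].State.Status'
# running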
Performance Note
For large JSON files (e.g., multi-gigabyte logs), operations like sorting or deep transformations with map can get slow or memory-hungry, because jq normally loads the entire input into memory before filtering. For huge datasets, use --stream, which parses the input incrementally as [path, value] pairs. The same way SQL lets you shape relational data, jq gives you surgical precision over JSON.
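As a sketch, here's the common idiom from the jq manual for walking a huge top-level array one element at a time instead of loading it whole (big.json is a placeholder):
jq -cn --stream 'fromstream(1 | truncate_stream(inputs))' big.json
# emits each array element as its own line of JSON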
Conclusion
If you work with JSON, jq isn’t optional — it’s essential. The difference between struggling with messy data and owning it comes down to whether you know how to slice, filter, and transform it effortlessly.
So dive in, experiment, and before you know it, you’ll be wielding jq like a pro.
Joel Olawanle is a Software Engineer and Technical Writer with over three years of experience helping companies communicate their products effectively through technical articles.