This blog post is about parsing a report returned as JSON. As I said in the post on parsing JSON for transactions, I really didn't understand JSON vs. XML at first. For this part you need to know what a report's JSON looks like, so I will show you a snippet first. The complicated part is that any report you get can have lots of fields added or removed; in fact, these changes, especially extra columns, can be requested as options on the URL call. In this case I am parsing the stock report, so I know which columns I want to load into result.dbf. I suspect that for every report you want to extract, you will need to create a separate parsing routine.
Let's get started. First, here is a snippet of the JSON:
{
"Header": {
"Time": "2026-02-28T07:50:18-08:00",
"ReportName": "VendorBalanceDetail",
"DateMacro": "all",
"Currency": "USD",
"Option": [
{
"Name": "report_date",
"Value": "2026-02-28"
},
{
"Name": "NoReportData",
"Value": "false"
}
]
},
"Columns": {
"Column": [
{
"ColTitle": "Date",
"ColType": "Date",
"MetaData": [
{
"Name": "ColKey",
"Value": "tx_date"
}
]
},
{
"ColTitle": "Transaction Type",
"ColType": "String",
"MetaData": [
{
"Name": "ColKey",
"Value": "txn_type"
}
]
},
{
"ColTitle": "Num",
"ColType": "String",
"MetaData": [
{
"Name": "ColKey",
"Value": "doc_num"
}
]
},
{
"ColTitle": "Due Date",
"ColType": "Date",
"MetaData": [
{
"Name": "ColKey",
"Value": "due_date"
}
]
},
{
"ColTitle": "Amount",
"ColType": "Money",
"MetaData": [
{
"Name": "ColKey",
"Value": "subt_neg_amount"
}
]
},
{
"ColTitle": "Open Balance",
"ColType": "Money",
"MetaData": [
{
"Name": "ColKey",
"Value": "subt_neg_open_bal"
}
]
},
{
"ColTitle": "Balance",
"ColType": "Money",
"MetaData": [
{
"Name": "ColKey",
"Value": "rbal_neg_open_bal"
}
]
}
]
},
"Rows": {
"Row": [
{
"Header": {
"ColData": [
{ "value": "Adobe", "id": "767" },
{ "value": "" },
{ "value": "" },
{ "value": "" },
{ "value": "" },
{ "value": "" },
{ "value": "" }
]
},
"Rows": {
"Row": [
{
"ColData": [
{ "value": "2026-02-20" },
{ "value": "Bill", "id": "49138" },
{ "value": "" },
{ "value": "2026-02-27" },
{ "value": "19.99" },
{ "value": "19.99" },
{ "value": "19.99" }
],
"type": "Data"
}
]
},
"Summary": {
"ColData": [
{ "value": "Total for Adobe" },
{ "value": "" },
{ "value": "" },
{ "value": "" },
{ "value": "19.99" },
{ "value": "19.99" },
{ "value": "" }
]
},
"type": "Section"
},
{
"Header": {
"ColData": [
{ "value": "Appsmith", "id": "891" },
{ "value": "" },
{ "value": "" },
{ "value": "" },
{ "value": "" },
{ "value": "" },
{ "value": "" }
]
},
"Rows": { 1
"Row": [1
{
"ColData": [
{ "value": "2026-02-04" },
{ "value": "Bill", "id": "48686" },
{ "value": "" },
{ "value": "2026-03-01" },
{ "value": "52.80" },
{ "value": "52.80" },
{ "value": "52.80" }
],
"type": "Data"
}
]
},
"Summary": {
"ColData": [
{ "value": "Total for Appsmith" },
{ "value": "" },
{ "value": "" },
{ "value": "" },
{ "value": "52.80" },
{ "value": "52.80" },
{ "value": "" }
]
},
"type": "Section"
},
There is an opening section that describes all of the columns, their definitions, and their field types. We don't need this, but if you want to make a universal report parser, you might consider using this section to create the table. Remember that FoxPro free tables only allow 10-character field names, so you would need to work around that.
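As a sketch of that idea, you could walk the Columns.Column array and build a CREATE TABLE statement from the ColKey values. Everything here is an assumption for illustration: it presumes loScript is an MSScriptControl.ScriptControl that already has the report JSON loaded into a JScript variable named data, and it simply truncates names to 10 characters, which could collide for longer keys.

```foxpro
* Sketch: build result.dbf from the report's Columns section.
* Assumes loScript holds the parsed JSON in a JScript variable "data".
lnCols   = loScript.Eval("data.Columns.Column.length")
lcFields = ""
FOR lnI = 0 TO lnCols - 1
   lcBase = "data.Columns.Column[" + TRANSFORM(lnI) + "]"
   lcKey  = loScript.Eval(lcBase + ".MetaData[0].Value")   && e.g. "tx_date"
   lcType = loScript.Eval(lcBase + ".ColType")             && Date/String/Money
   lcName = LEFT(lcKey, 10)   && free-table limit; collisions need handling
   DO CASE
   CASE lcType == "Date"
      lcFields = lcFields + lcName + " D, "
   CASE lcType == "Money"
      lcFields = lcFields + lcName + " N(12,2), "
   OTHERWISE
      lcFields = lcFields + lcName + " C(40), "
   ENDCASE
ENDFOR
lcFields = LEFT(lcFields, LEN(lcFields) - 2)   && drop trailing ", "
CREATE TABLE result FREE ( &lcFields )
```

Note that keys like subt_neg_open_bal and rbal_neg_open_bal survive the 10-character truncation in this report, but a universal parser would need a rename rule for clashes.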
The main loop starts with Rows. Under Rows there is a Row array, and each element has a Header, its own nested Rows, and a Summary. Each Row at the main level is a single vendor.
We need to get the name of the vendor from the Header, then loop through the nested rows. Each of these rows is a transaction that needs to be added to result.dbf.
There is a way to get a count of the main loop of rows: loScript.EVAL("data.Rows.Row.length"). This represents the number of vendors.
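For that call to work, the ScriptControl has to be created and the JSON loaded into a JScript variable first. A minimal setup sketch, assuming lcResponse holds the raw JSON string returned by the API call:

```foxpro
* Sketch: load the JSON response into a JScript engine so we can
* navigate it with Eval(). Assumes lcResponse holds the raw JSON.
loScript = CREATEOBJECT("MSScriptControl.ScriptControl")
loScript.Language = "JScript"
loScript.ExecuteStatement("var data = " + lcResponse)

* Now dotted paths into the report work:
lnVendors = loScript.Eval("data.Rows.Row.length")
? lnVendors   && number of vendors in the report
```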
The next level of the loop is the number of rows within each vendor. The count for the first vendor is loScript.EVAL("data.Rows.Row[0].Rows.Row.length").
Knowing this, we can construct a loop and write the values of every column directly into result.dbf. Here is the code.
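A sketch of that double loop follows. The result.dbf layout and the Iso2Date helper are my assumptions, not part of the original; it assumes loScript already has the report JSON loaded into a JScript variable named data, and the Eval() paths follow the snippet shown earlier.

```foxpro
* Sketch of the parser loop. Assumes loScript is an
* MSScriptControl.ScriptControl with the JSON in variable "data";
* the result.dbf layout is illustrative.
CREATE TABLE result FREE ( vendor C(40), txdate D, txntype C(20), ;
   docnum C(15), duedate D, amount N(12,2), openbal N(12,2) )

lnVendors = loScript.Eval("data.Rows.Row.length")
FOR lnI = 0 TO lnVendors - 1
   lcVend  = "data.Rows.Row[" + TRANSFORM(lnI) + "]"
   lcName  = loScript.Eval(lcVend + ".Header.ColData[0].value")
   lnTrans = loScript.Eval(lcVend + ".Rows.Row.length")
   FOR lnJ = 0 TO lnTrans - 1
      lcCol = lcVend + ".Rows.Row[" + TRANSFORM(lnJ) + "].ColData["
      INSERT INTO result VALUES ( ;
         lcName, ;
         Iso2Date(loScript.Eval(lcCol + "0].value")), ;
         loScript.Eval(lcCol + "1].value"), ;
         loScript.Eval(lcCol + "2].value"), ;
         Iso2Date(loScript.Eval(lcCol + "3].value")), ;
         VAL(loScript.Eval(lcCol + "4].value")), ;
         VAL(loScript.Eval(lcCol + "5].value")) )
   ENDFOR
ENDFOR

* Helper (assumption): turn "2026-02-20" into a VFP date.
FUNCTION Iso2Date( tcIso )
   IF EMPTY(tcIso)
      RETURN {}
   ENDIF
   RETURN DATE(VAL(LEFT(tcIso, 4)), VAL(SUBSTR(tcIso, 6, 2)), ;
               VAL(RIGHT(tcIso, 2)))
ENDFUNC
```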
That is the JSON Parser.
Some notes. It is important to be able to visualize what the JSON looks like with its indentation. The indentation tells you the depth needed. This example goes two levels deep, which is about as deep as most reports go. One trick to visualizing it is to suspend the program near the beginning, then do _cliptext = lcResponse. This places the entire stream on the clipboard. Then go to ChatGPT and ask it to indent the JSON. Make sure your lcResponse is just a few transactions, or ChatGPT will tell you the stream is too long. Then copy and paste the result into Word and change the font to Courier (a fixed-width font) so you can see the indent structure.
Next note. Unlike the XML parser, there is no need to create parse.dbf and figure out the levels. This approach is more elegant and far simpler.
Well, that is it. Extracting is one thing; writing is another. Intuit only accepts JSON for writes, so the process is to construct a JSON payload. That is my next project.
If you found this helpful, send me an email Craigs@sobelsoftware.com
Craig Sobel