This is the blog post for parsing a report in JSON.  As I said in the post on parsing JSON for transactions, I didn't really understand JSON vs. XML at first.  But for this part you need to know what a report's JSON looks like, so I will show you a snippet first.  The complicated part is that any report you get can have lots of fields added or removed; in particular, extra columns can be added as options on the URL call.  In this case I am parsing the stock report, so I know which columns I want to load into result.dbf.  I suspect that for every report you want to extract, you will need to create a separate parsing routine.

Let's get started.  First, here is a snippet of the JSON:

{
  "Header": {
    "Time": "2026-02-28T07:50:18-08:00",
    "ReportName": "VendorBalanceDetail",
    "DateMacro": "all",
    "Currency": "USD",
    "Option": [
      {
        "Name": "report_date",
        "Value": "2026-02-28"
      },
      {
        "Name": "NoReportData",
        "Value": "false"
      }
    ]
  },
  "Columns": {
    "Column": [
      {
        "ColTitle": "Date",
        "ColType": "Date",
        "MetaData": [
          {
            "Name": "ColKey",
            "Value": "tx_date"
          }
        ]
      },
      {
        "ColTitle": "Transaction Type",
        "ColType": "String",
        "MetaData": [
          {
            "Name": "ColKey",
            "Value": "txn_type"
          }
        ]
      },
      {
        "ColTitle": "Num",
        "ColType": "String",
        "MetaData": [
          {
            "Name": "ColKey",
            "Value": "doc_num"
          }
        ]
      },
      {
        "ColTitle": "Due Date",
        "ColType": "Date",
        "MetaData": [
          {
            "Name": "ColKey",
            "Value": "due_date"
          }
        ]
      },
      {
        "ColTitle": "Amount",
        "ColType": "Money",
        "MetaData": [
          {
            "Name": "ColKey",
            "Value": "subt_neg_amount"
          }
        ]
      },
      {
        "ColTitle": "Open Balance",
        "ColType": "Money",
        "MetaData": [
          {
            "Name": "ColKey",
            "Value": "subt_neg_open_bal"
          }
        ]
      },
      {
        "ColTitle": "Balance",
        "ColType": "Money",
        "MetaData": [
          {
            "Name": "ColKey",
            "Value": "rbal_neg_open_bal"
          }
        ]
      }
    ]
  },
  "Rows": {
    "Row": [
      {
        "Header": {
          "ColData": [
            { "value": "Adobe", "id": "767" },
            { "value": "" },
            { "value": "" },
            { "value": "" },
            { "value": "" },
            { "value": "" },
            { "value": "" }
          ]
        },
        "Rows": {
          "Row": [
            {
              "ColData": [
                { "value": "2026-02-20" },
                { "value": "Bill", "id": "49138" },
                { "value": "" },
                { "value": "2026-02-27" },
                { "value": "19.99" },
                { "value": "19.99" },
                { "value": "19.99" }
              ],
              "type": "Data"
            }
          ]
        },
        "Summary": {
          "ColData": [
            { "value": "Total for Adobe" },
            { "value": "" },
            { "value": "" },
            { "value": "" },
            { "value": "19.99" },
            { "value": "19.99" },
            { "value": "" }
          ]
        },
        "type": "Section"
      },
      {
        "Header": {
          "ColData": [
            { "value": "Appsmith", "id": "891" },
            { "value": "" },
            { "value": "" },
            { "value": "" },
            { "value": "" },
            { "value": "" },
            { "value": "" }
          ]
        },
        "Rows": {
          "Row": [
            {
              "ColData": [
                { "value": "2026-02-04" },
                { "value": "Bill", "id": "48686" },
                { "value": "" },
                { "value": "2026-03-01" },
                { "value": "52.80" },
                { "value": "52.80" },
                { "value": "52.80" }
              ],
              "type": "Data"
            }
          ]
        },
        "Summary": {
          "ColData": [
            { "value": "Total for Appsmith" },
            { "value": "" },
            { "value": "" },
            { "value": "" },
            { "value": "52.80" },
            { "value": "52.80" },
            { "value": "" }
          ]
        },
        "type": "Section"
      },
 


There is an opening section that describes all of the columns, their definitions, and field types.  We don't need this, but if you want to build a universal report parser, you might consider using this section to create the table.  Remember that FoxPro only allows 10-character field names, so you would need to account for that.
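If you do go the universal-parser route, the Columns section has everything you need.  Here is a sketch in JavaScript (the same language the script control runs) of how you might derive FoxPro-safe field names from it; foxproFields is my own helper name, not something from the report API, and the sample data is a trimmed copy of the snippet above.

```javascript
// Sketch: derive 10-character FoxPro field names from the report's
// Columns section.  "data" is a trimmed copy of the report JSON above.
var data = {
  Columns: {
    Column: [
      { ColTitle: "Date",             MetaData: [{ Name: "ColKey", Value: "tx_date" }] },
      { ColTitle: "Transaction Type", MetaData: [{ Name: "ColKey", Value: "txn_type" }] },
      { ColTitle: "Open Balance",     MetaData: [{ Name: "ColKey", Value: "subt_neg_open_bal" }] }
    ]
  }
};

function foxproFields(report) {
  var out = [];
  for (var i = 0; i < report.Columns.Column.length; i++) {
    var col = report.Columns.Column[i];
    // Prefer the stable ColKey over the display title
    var key = col.MetaData[0].Value;
    // FoxPro caps field names at 10 characters
    out.push(key.substring(0, 10));
  }
  return out;
}

var fields = foxproFields(data);
// e.g. "subt_neg_open_bal" becomes "subt_neg_o"
```

Note that truncation alone can produce duplicate names (two long ColKeys with the same first 10 characters), so a real implementation would also need to de-duplicate.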
The main loop starts with Rows.  Under Rows there is a Row array, and each of those rows has a Header, Rows, and Summary.  Each Row at the main level is a single vendor.
We need to get the name of the vendor from the Header, then loop through its Rows.  Each of those inner rows is a transaction that needs to be added to result.dbf.
There is a way to get a count of the main Row loop: loscript.EVAL("data.Rows.Row.length").  This represents the number of vendors.
The next level down is the number of rows within each vendor.  The count for the first vendor is loscript.EVAL("data.Rows.Row[0].Rows.Row.length").
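Those two expressions are plain JavaScript evaluated inside the script control.  Against a trimmed-down sample of the report JSON (one vendor with two bills, made up for illustration) they behave like this:

```javascript
// Trimmed-down sample shaped like the report JSON: one vendor, two bills
var data = {
  Rows: {
    Row: [
      {
        Header: { ColData: [{ value: "Adobe", id: "767" }] },
        Rows: {
          Row: [
            { ColData: [{ value: "2026-02-20" }], type: "Data" },
            { ColData: [{ value: "2026-02-21" }], type: "Data" }
          ]
        },
        type: "Section"
      }
    ]
  }
};

// Number of vendors (top-level Section rows)
var vendorCount = data.Rows.Row.length;           // 1

// Number of line items for the first vendor
var lineCount = data.Rows.Row[0].Rows.Row.length; // 2
```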
Knowing this, we can construct a loop and write the values of every column directly into result.dbf.  Here is the code.

**The resulting DBF is passed here.  The file has already been created.
**Whether parsing JSON or XML, it is likely that I will need a similar
***parser for every kind of report I wish to extract.
***The basic structure is the same, but the names of the fields vary in
***the JSON and in the resulting table(s).
**ITS FILENAME IS DETERMINED BY THE CALLING PROGRAM
PARAMETER lcdbf
**mentity is set by the calling program and is used to scan down the
**JSON; the added space needs to be removed first
mentity=TRIM(mentity)
SET SAFETY OFF
CLOSE DATA
**ALIAS result so the SELECT result below works whatever the filename is
USE (lcdbf) ALIAS result
**parse the response text
* lcjson = your JSON string
lcjson = lcresponse
* Create the script control (if the calling program has not already)
loscript = CREATEOBJECT("MSScriptControl.ScriptControl")
loscript.language = "JScript"

* Inject the JSON into the JScript context
loscript.addcode([var data = ] + lcjson + [;])

* Count the top-level rows -- one per vendor, plus a total row
lnrowscount = loscript.EVAL("data.Rows.Row.length")
**-2 because there is a total row at the bottom
FOR lnrows=0 TO lnrowscount-2
   **store the vendor for each row.  Vendor does not appear on each line  
   mvendno=loscript.EVAL("data.Rows.Row["+ALLT(STR(lnrows,3))+"].Header.ColData[0].value")
   ?mvendno
   lnrowcount=loscript.EVAL("data.Rows.Row["+ALLT(STR(lnrows,3))+"].Rows.Row.length")
   **this is the number of line items per vendor
   FOR lnrow=0 TO lnrowcount-1
      SELECT result
      APPEND BLANK
      REPLACE vendor WITH mvendno
      **ColData is an array from 0 to the number of columns-1.  The values
      **have no names, so each one must be extracted by position as below.
      **Look at the JSON to see which element of the array goes into which
      **field in result.dbf
      REPLACE txndate WITH loscript.EVAL("data.Rows.Row["+ALLT(STR(lnrows,3))+"].Rows.Row["+ALLT(STR(lnrow,3))+"].ColData[0].value")
      REPLACE docnumber WITH loscript.EVAL("data.Rows.Row["+ALLT(STR(lnrows,3))+"].Rows.Row["+ALLT(STR(lnrow,3))+"].ColData[2].value")
      REPLACE amount WITH VAL(loscript.EVAL("data.Rows.Row["+ALLT(STR(lnrows,3))+"].Rows.Row["+ALLT(STR(lnrow,3))+"].ColData[4].value"))
      **ColData[5] is the Open Balance column; use ColData[6] instead if
      **you want the running Balance
      REPLACE balance WITH VAL(loscript.EVAL("data.Rows.Row["+ALLT(STR(lnrows,3))+"].Rows.Row["+ALLT(STR(lnrow,3))+"].ColData[5].value"))
   NEXT
NEXT
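To see the traversal without the FoxPro plumbing, here is the same nested loop written as standalone JavaScript against a trimmed sample of the report JSON.  The record field names mirror the result.dbf columns, and parseFloat plays the role of VAL; this is an illustration, not the code the post actually runs.

```javascript
// Trimmed sample shaped like the report JSON: one vendor plus the
// grand-total row at the bottom
var data = {
  Rows: {
    Row: [
      {
        Header: { ColData: [{ value: "Adobe", id: "767" }] },
        Rows: { Row: [
          { ColData: [
              { value: "2026-02-20" }, { value: "Bill", id: "49138" },
              { value: "" }, { value: "2026-02-27" },
              { value: "19.99" }, { value: "19.99" }, { value: "19.99" }
            ], type: "Data" }
        ] },
        type: "Section"
      },
      // Last top-level row is the grand total -- skipped by the loop,
      // which is why the FoxPro code runs to lnrowscount - 2
      { Summary: { ColData: [{ value: "TOTAL" }] }, type: "Section" }
    ]
  }
};

var records = [];
for (var v = 0; v < data.Rows.Row.length - 1; v++) {
  // Vendor name comes from the section Header, not from each line
  var vendor = data.Rows.Row[v].Header.ColData[0].value;
  var lines = data.Rows.Row[v].Rows.Row;
  for (var r = 0; r < lines.length; r++) {
    var cd = lines[r].ColData;
    records.push({
      vendor: vendor,
      txndate: cd[0].value,     // ColData[0] = Date
      docnumber: cd[2].value,   // ColData[2] = Num
      amount: parseFloat(cd[4].value),   // ColData[4] = Amount
      balance: parseFloat(cd[5].value)   // ColData[5] = Open Balance
    });
  }
}
```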

That is the JSON Parser.
Some notes.  It is important to be able to visualize what the JSON looks like with its indentation.  The indentation tells you the depth needed.  This example goes two levels deep, which is about as deep as most reports go.  One trick for visualizing it is to suspend the program near the beginning, then do _cliptext=lcresponse.  This places the entire stream on the clipboard.  Then go to ChatGPT and ask it to indent the JSON.  Make sure your lcresponse contains just a few transactions, or ChatGPT will tell you the stream is too long.  Then copy and paste the result into Word and change the font to Courier (a fixed-width font) so you can see the indent structure.
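As an alternative to the ChatGPT/Word trick, JavaScript can indent the JSON itself with JSON.stringify.  This assumes the script engine provides the native JSON object, which modern engines do but very old JScript hosts may not:

```javascript
// Pretty-print a JSON object: the third argument to JSON.stringify is
// the indent width.  From FoxPro you could evaluate the same expression
// inside the script control, e.g.
//   _cliptext = loscript.EVAL([JSON.stringify(data, null, 2)])
// (hypothetical usage -- requires a JScript engine with a JSON object)
var data = { Rows: { Row: [{ ColData: [{ value: "Adobe" }] }] } };
var pretty = JSON.stringify(data, null, 2);
// Each nesting level is indented two spaces, so the depth is visible
```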

Next note.  Unlike the XML parser, there is no need to create parse.dbf and figure out the levels.  This is much more elegant and far simpler.

Well, that is it.  Extracting is one thing; writing is another.  Intuit only accepts JSON for writing, so the process there is to construct a JSON.  That is my next project.
If you found this helpful, send me an email at Craigs@sobelsoftware.com

Craig Sobel