Take the following JSON object:
{
    "person": {
        "name": "Todd",
        "age": 10
    },
    "address": {
        "name": "18B Victory",
        "zip": 20019
    }
}
It can be ingested into SQL quite straightforwardly with the following structure, where each Person object (in this case "Todd") represents a single row of data.
person.name   person.age   address.name    address.zip
"Todd"        10           "18B Victory"   20019
Everything looks ok in the above, though I might modify the column names to make them less verbose.
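In SQL Server (which the answer below uses), that first shape can be read with OPENJSON plus a WITH clause that maps each nested path to a column. A minimal sketch; the shortened column names are illustrative:

```sql
DECLARE @JSON nvarchar(MAX) = N'{
    "person":  { "name": "Todd", "age": 10 },
    "address": { "name": "18B Victory", "zip": 20019 }
}';

-- Each JSON path in the WITH clause becomes one column of the single row.
SELECT *
FROM OPENJSON(@JSON)
     WITH (person_name  nvarchar(50) '$.person.name',
           person_age   int          '$.person.age',
           address_name nvarchar(50) '$.address.name',
           address_zip  int          '$.address.zip');
```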
Now, take the following JSON data, which is keyed by object ID, i.e. it is a set of key-value pairs, even though the type of each value is exactly the same as the person object in the first document, str: (str, int):
{
    "X1AB": {
        "name": "Todd",
        "age": 10
    },
    "Y2AL": {
        "name": "Brad",
        "age": 20
    }
}
This would be ingested in the following format to SQL (provided we didn’t transform the data beforehand):
X1AB.name   X1AB.age   Y2AL.name   Y2AL.age
"Todd"      10         "Brad"      20
I know that SQL has the PIVOT/UNPIVOT operators, though I think they are probably insufficient for dealing with the above. What I think I need is essentially to "extract the prefix at position $ in the document into a new column", something like this:
_prefix   name     age
X1AB      "Todd"   10
Y2AL      "Brad"   20
Is there any way to do such a transformation as the above from SQL?
>Solution :
Your statement that SQL would parse this back into the format you show is untrue. Getting columns with dynamic names like that isn't going to happen, because you do need to define the names of the columns you want. If you just consumed the data as JSON, with no column definitions, you'd get one row per key, so you'd end up with 4 rows (a name and an age key for each of the two outer keys). db<>fiddle
If you define the columns for your second object, you get exactly what you want:
DECLARE @JSON nvarchar(MAX) = N'{
    "X1AB": {
        "name": "Todd",
        "age": 10
    },
    "Y2AL": {
        "name": "Brad",
        "age": 20
    }
}';
SELECT J.[key],
V.name,
V.age
FROM OPENJSON (@JSON) J
CROSS APPLY OPENJSON (J.value)
WITH (name nvarchar(50),
age int) V;
Producing the wide result you originally showed (one pair of columns per outer key) would likely require dynamic SQL, which would be far harder.
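For completeness, a rough sketch of what that dynamic-SQL version could look like, assuming SQL Server 2017+ for STRING_AGG; the generated column names and the CASE-based pivot are illustrative, not a recommendation:

```sql
DECLARE @JSON nvarchar(MAX) = N'{
    "X1AB": { "name": "Todd", "age": 10 },
    "Y2AL": { "name": "Brad", "age": 20 }
}';
DECLARE @cols nvarchar(MAX), @sql nvarchar(MAX);

-- Build one pair of projected columns per outer key, e.g. [X1AB.name], [X1AB.age].
SELECT @cols = STRING_AGG(
    QUOTENAME([key] + N'.name') + N' = MAX(CASE WHEN J.[key] = '
        + QUOTENAME([key], '''') + N' THEN V.name END), '
    + QUOTENAME([key] + N'.age') + N' = MAX(CASE WHEN J.[key] = '
        + QUOTENAME([key], '''') + N' THEN V.age END)', N', ')
FROM OPENJSON(@JSON);

-- Aggregate across all rows so the result collapses to a single wide row.
SET @sql = N'SELECT ' + @cols + N'
FROM OPENJSON(@J) J
CROSS APPLY OPENJSON(J.value) WITH (name nvarchar(50), age int) V;';

EXEC sys.sp_executesql @sql, N'@J nvarchar(MAX)', @J = @JSON;
```

The extra indirection (key discovery, string assembly, quoting) is exactly why the long, fixed-column result from the query above is the better target.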