Just a thought really... and wondering if Gzipped JSON already covers this.
But say you have a list of game objects in a response:
game = {
  name: 'Randomer Quest!',
  description: "Randomer's Quest is a brilliant game!",
  activated: true,
  points: 10,
  thumb: 'randomer-quest.jpg'
};
When you json_encode this, it becomes 151 bytes:
{"games": [{"name":"Randomer Quest!","description":"Randomer's Quest is a brilliant game!","activated":true,"points":10,"thumb":"randomer-quest.jpg"}]}
Ok... but what if you have a list of about 100 games? That's about 13,913 bytes... but do we really need to keep declaring those property names for every object?
I know you can just decode it and loop through it (the magic), but what if we're a little more intelligent about it and declare the properties in a separate object, then have an array of data rows? We'd have to prefill properties that aren't usually there, but I still think it's worth it.
Something like this:
{
  "games": {
    "p": ["name", "description", "activated", "points", "thumb"],
    "d": [
      ["Randomer Quest!", "Randomer's Quest is a brilliant game!", true, 10, "randomer-quest.jpg"],
      ["Randomer Quest!", "Randomer's Quest is a brilliant game!", true, 10, "randomer-quest.jpg"]
    ]
  }
}
"p" holds the property names, "d" holds the data rows. Afterwards we have 9,377 bytes: 67% of the original size!
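To show the client side is still simple, here is a rough sketch of expanding the p/d format back into plain objects; the inflate helper name and the payload literal are just illustrations for this example, not an existing library:

```javascript
// The p/d payload, mirroring the example above.
const payload = {
  games: {
    p: ["name", "description", "activated", "points", "thumb"],
    d: [
      ["Randomer Quest!", "Randomer's Quest is a brilliant game!", true, 10, "randomer-quest.jpg"],
      ["Randomer Quest!", "Randomer's Quest is a brilliant game!", true, 10, "randomer-quest.jpg"]
    ]
  }
};

// Pair each row's values with the shared property list,
// producing ordinary { name: ..., description: ... } objects.
function inflate({ p, d }) {
  return d.map(row =>
    Object.fromEntries(p.map((key, i) => [key, row[i]]))
  );
}

const games = inflate(payload.games);
```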
Ok, I know you're going to say that's nothing, but you do see responses that are more like 40-100 KB, and I think that's quite a massive difference. Is anyone employing something like this already? Perhaps there are tools that already do this automatically?
32bitkid has pretty much said that if you were going to do this, you might as well just trim it down to CSV format... which makes sense. That would be around 9,253 bytes: 66.5%.
"name", "description", "activated", "points", "thumb"
"Randomer Quest!", "Randomer's Quest is a brilliant game!", true, 10, "randomer-quest.jpg"
"Randomer Quest!", "Randomer's Quest is a brilliant game!", true, 10, "randomer-quest.jpg"
I've seen JSON responses of about 100 KB, which would turn into 66.5 KB (saving 33.5 KB).
What do you think?
Dom
Asked Oct 25, 2011 by creamcheese; edited Oct 26, 2011 by Rune FS.
- JSONH does similar. – hyperslug Commented Oct 25, 2011 at 10:55
- that would be a "TSON", a Table format JSON (I had to suggest a name) – SparK Commented Oct 25, 2011 at 11:03
- I really like this idea! Got this exact problem now - JSON response too large due to massive repetition of property names. – HorseloverFat Commented Jun 11, 2013 at 13:51
- I think CSV as has been said below would be awesome... JSON isn't exactly readable without tools so why not create a CSV web reader that makes it viewable for development etc – creamcheese Commented Jun 12, 2013 at 17:38
6 Answers
I agree this is much more compact.
{
  "games": {
    "p": ["name", "description", "activated", "points", "thumb"],
    "d": [
      ["Randomer Quest!", "Randomer's Quest is a brilliant game!", true, 10, "randomer-quest.jpg"],
      ["Randomer Quest!", "Randomer's Quest is a brilliant game!", true, 10, "randomer-quest.jpg"]
    ]
  }
}
But wait, you could optimize it further: do you really need the "games" object? This is even smaller!
{
  "p": ["name", "description", "activated", "points", "thumb"],
  "d": [
    ["Randomer Quest!", "Randomer's Quest is a brilliant game!", true, 10, "randomer-quest.jpg"],
    ["Randomer Quest!", "Randomer's Quest is a brilliant game!", true, 10, "randomer-quest.jpg"]
  ]
}
And really, what's the point of the "p" and "d" and the object that contains them? I know that the property names are going to be first, and my data is going to be second:
[
["name", "description", "activated", "points", "thumb"],
["Randomer Quest!", "Randomer's Quest is a brilliant game!", true, 10, "randomer-quest.jpg"],
["Randomer Quest!", "Randomer's Quest is a brilliant game!", true, 10, "randomer-quest.jpg"]
]
And those array and object markers are just getting in the way, save a few more bytes!
"name", "description", "activated", "points", "thumb"
"Randomer Quest!", "Randomer's Quest is a brilliant game!", true, 10, "randomer-quest.jpg"
"Randomer Quest!", "Randomer's Quest is a brilliant game!", true, 10, "randomer-quest.jpg"
Wait... this format already exists. It is CSV. It's been around since the mid-1960s, and it's part of the reason why XML and JSON were invented in the first place. JSON and XML add flexibility to the objects being stored and make them more human-readable than tightly packed binary objects. Are you really that worried about the size of the data going over the pipe? If you are (if that is, in fact, your bottleneck), then there are a bunch of different ways to address that problem.
But, personally, I think you should use the technology and the tools for what they are made for, and what they excel at doing.
You're trying to use a hammer to screw in a screw... You'll get it in the wall, but it won't be pretty or pleasant for either party involved.
Find a pattern that solves your problem, not the other way around.
From experience, the primary reason behind using text based formats is that they are easy for a human (with unsophisticated tools) to read and debug. [For instance, I consider XML a huge no-go for most tasks].
A quite old reference about why we use text formats, although still worth a serious read is this chapter of The Art of Unix Programming.
So you must aim for clarity, not size. Aiming for size is a case of premature optimization.
If you are worried about bandwidth or storage, consider compressing the data. Text formats lend themselves well to fast and powerful compression, to the point where, technically, they are not inferior to binary formats size-wise. You also separate the concerns of (1) representing data conveniently and (2) transferring data efficiently.
I'm not knowledgeable in this domain, but I'm ready to bet there are (1) JavaScript libraries for compression and (2) systematic ways to have the data compressed at the protocol level.
Last, if you are worried about performance, well, you'd rather have a compelling reason (and solid profiling data) for giving up the comfort that text based formats provide.
I use ColdFusion for server-side language, which has a function serializeJson(). This creates a JSON packet, and if it's from a query, it looks almost exactly like what you're proposing.
{
"COLUMNS": [
"ID",
"NAME"
],
"DATA": [
[
1,
"London"
],
[
2,
"Liverpool"
],
[
3,
"Glasgow"
]
]
}
Works pretty well too.
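For what it's worth, a packet in that COLUMNS/DATA shape takes only a couple of lines of client-side JavaScript to expand back into row objects; a minimal sketch, where the packet literal simply mirrors the example above:

```javascript
// A COLUMNS/DATA packet like the one serializeJson() produces.
const packet = {
  COLUMNS: ["ID", "NAME"],
  DATA: [[1, "London"], [2, "Liverpool"], [3, "Glasgow"]]
};

// Zip each data row with the column names into an object.
const rows = packet.DATA.map(row =>
  Object.fromEntries(packet.COLUMNS.map((col, i) => [col, row[i]]))
);
```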
This is quite interesting. You may want to check BSON if you have a lot of data to transfer.
How about MessagePack?
http://msgpack.org/
MessagePack is an efficient binary serialization format. It lets you exchange data among multiple languages like JSON. But it's faster and smaller. Small integers are encoded into a single byte, and typical short strings require only one extra byte in addition to the strings themselves.
Actually, yes, you can. Thirty years ago CPUs were not what they are today, so compressing the JSON and decompressing it on the client side was not a good idea; now it is.
What I would do is:
- Replace every occurrence of "," with | (3 bytes down to 1)
- Replace every occurrence of },{ with ^ (3 bytes down to 1)
- Replace every occurrence of :" with * (2 bytes down to 1)
And from a 100 KB JSON you will get roughly a 50 KB one, and on the client side you replace those characters back and you have it all.
You can go deeper: for example, replace every occurrence of "abd" with @, or map words that occur many times to a single character that never appears in the JSON.
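A rough JavaScript sketch of that substitution scheme; note it is only safe if the characters |, ^ and * never occur in the data itself, which is an assumption, not something JSON guarantees:

```javascript
const json = JSON.stringify([
  { name: "Randomer Quest!", points: 10 },
  { name: "Another Game", points: 5 }
]);

// Server side: shrink common JSON punctuation runs to single characters.
const shrunk = json
  .replace(/\},\{/g, '^')
  .replace(/","/g, '|')
  .replace(/:"/g, '*');

// Client side: apply the inverse substitutions in reverse order.
const restored = shrunk
  .replace(/\*/g, ':"')
  .replace(/\|/g, '","')
  .replace(/\^/g, '},{');
```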
Tags: javascript - Making JSON responses even smaller... just an idea - Stack Overflow