I wrote a shell script that lets me manage DNS records through an API.
The raw core command looks roughly like this:
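A minimal sketch of such a call, assuming a curl-based API fetch; the endpoint, token variable and query parameter are made-up placeholders, not the actual API:

# Hypothetical sketch only: endpoint, token and JSON layout are placeholders.
# Fetch every zone with all of its records in one call and keep it in a variable.
output=$(curl -fsS \
    -H "Authorization: Bearer $API_TOKEN" \
    "https://api.example.com/v1/zones?include=records")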
The output contains a list of all zones with all their records and amounts to roughly 800 KB of JSON data.
I ran into a first issue when I used this (incorrect) attempt to remove a leading 000 from the 800 KB variable:
I think the correct expression should have been ${output#000}. Maybe this was just an error I typed without being fully aware of what I was doing. The resulting pattern caused the command to never finish; I assume evaluating it against that 800 KB variable required a huge amount of computation.
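For reference, a small illustration of the anchored form mentioned above (the sample value is made up): it strips a literal leading 000 and touches nothing else, so it stays cheap even on a large variable.

output='000{"zones":[]}'            # made-up sample with a stray leading 000
printf '%s\n' "${output#000}"       # -> {"zones":[]}
# ${output#000} only tests the very start of the string; patterns that contain
# a * wildcard can force bash to scan far more of an 800 KB value.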
My question is:
Are there, in your experience, other general caveats why one should refrain from keeping such large content in variables? As far as I have read, there are no relevant size limits for variables on Linux (at least for data sizes below 100 MB). Or would it generally be better to switch to files once the data exceeds a certain size?
Main caveat: it's almost always a dumb idea. Ad-hoc data, "dump and fix later", is the modern UUOC: wasteful and pointless. If you know what you actually want, you can either deal with it in a structured way or avoid storing it entirely.
Another caveat: most shells don't let you keep binary data in variables. Mass-dumping unchecked data into variables can occasionally surprise you.
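Bash, for instance, cannot store NUL bytes in a variable; command substitution drops them (and newer versions print a warning), as this small demonstration shows:

data=$(printf 'foo\0bar')      # the NUL byte is silently discarded
printf '%s' "$data" | od -c    # shows: f o o b a r
printf '%s' "$data" | wc -c    # prints 6, not 7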
Thanks for your answer, even if it was not worded very nicely.
What I take away from it: beware of what unknown data can do in the worst case, especially in a heavily interpreted shell environment.
---
The approach was not, as you assumed, to pull the full data out of the API just to have it locally and then extract the needed bits. Looking at the API, it was, in short, the only way to get what I needed. Digging deeper into the API documentation has now revealed other ways to do the same thing more efficiently.
Maximum efficiency was not my goal. I started with a shell script, and at about half completion I realized that the task might be too complex for a shell script and that a scripting language would have been the better choice. Now I have a 500-line Bash script that is fairly well structured and working quite well. I will not rewrite it unless serious issues turn up.
My intention with this question was to get feedback on whether there are any serious problems I was not aware of.
Since all of that data is JSON that gets fed into jq for extraction, I do not see much potential for trouble ahead.
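A minimal sketch of what that looks like in practice (the .zones[].name path is an invented structure): the variable can be handed to jq through a here-string, or the variable can be avoided entirely by piping curl straight into jq.

# Feed the variable to jq without a temporary file (field names are made up).
zone_names=$(jq -r '.zones[].name' <<< "$output")

# Or avoid the big variable entirely and stream the response into jq.
curl -fsS -H "Authorization: Bearer $API_TOKEN" \
    "https://api.example.com/v1/zones?include=records" | jq -r '.zones[].name'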
Quote:
Originally Posted by stomp
Since all of that data is JSON that gets fed into jq for extraction, I do not see much potential for trouble ahead.
If your data is JSON, why are you processing it with curl and shell scripts when there are tools better suited to processing JSON data?
I process a lot of JSON data on Linux and never use a shell script to do it. There are many other languages, libraries and tools built to process JSON. Why use a tool that is suboptimal for JSON processing?
Just curious ...
As I said, I process reams of JSON data on Linux and do not use shell scripts to process any JSON objects.
Basically it boils down to jq. Without it I would never have considered doing this task with Bash. All JSON handling is done with jq; Bash is just the glue code around it.
Even though I might have decided differently in hindsight, because the task got a little more complex than expected, I think I got it done very quickly. It works more than sufficiently well, the code is maintainable, and the speed is acceptable. For smaller tasks I still consider this an excellent choice.
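As an illustration of that division of labour, a hedged sketch only: the function name, endpoint and JSON paths are invented and stand in for the actual 500-line script.

# Hypothetical helper: Bash handles arguments, errors and flow control,
# jq does every bit of the JSON work.
get_record_content() {
    local zone=$1 name=$2
    curl -fsS -H "Authorization: Bearer $API_TOKEN" \
        "https://api.example.com/v1/zones?include=records" |
    jq -r --arg zone "$zone" --arg name "$name" '
        .zones[] | select(.name == $zone)
        | .records[] | select(.name == $name)
        | .content'
}

get_record_content example.org www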