Sending a curl POST with a file
```
curl -X POST -d @myfilename http://www.url.com
```
or
```
curl -XPOST 'localhost:9200/bank/_search?pretty' -d '
{
  "query": { "match_phrase": { "address": "mill lane" } }
}'
```
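When the body is JSON, as in the Elasticsearch example, it usually helps to set the Content-Type header explicitly (the URL below is just a placeholder):

```shell
# send the file contents as a JSON request body; www.url.com is a placeholder host
curl -X POST -H 'Content-Type: application/json' -d @myfilename http://www.url.com
```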
For example, if we have a file that contains one JSON object per line, like
{"a": 1, "b": {"c": 11}}
{"a": 2, "b": {"c": 22}}
{"a": 3, "b": {"c": 33}}
and we want to construct something like
[1,11]
[2,22]
[3,33]
you can do that with jq:
$ cat filename.json | jq -c '[.a, .b.c]'
This is particularly useful for grepping something out of a JSON log.
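For instance, to keep only error lines from a JSON-lines log (the "level" field and the log file name here are hypothetical):

```shell
# keep only lines whose (hypothetical) "level" field is ERROR
cat app.log | jq -c 'select(.level == "ERROR")'
```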
To see which processes are listening on TCP ports:
$ sudo netstat -atp tcp | grep -i listen
or
$ sudo lsof -i -P | grep -i listen
To get the current unix timestamp we can do
$ date +%s
1471365644
But how do we convert an epoch timestamp from a file back to a readable date?
$ echo 1471365644 | perl -pe 's/(\d+)/localtime($1)/e'
If we have it in milliseconds, we can strip the milliseconds part first:
$ echo 1471365644000 | cut -c -10 | perl -pe 's/(\d+)/localtime($1)/e'
Tue Aug 16 18:40:44 2016
This assumes your epoch seconds are 10 characters long; if you have more or fewer, you need to do some other string processing first.
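date itself can also do the conversion, though the flag differs between GNU date (Linux) and BSD date (macOS):

```shell
# GNU date (Linux): -d @ takes epoch seconds; -u prints in UTC
date -u -d @1471365644
# BSD date (macOS): -r takes epoch seconds
# date -r 1471365644
```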
I have a use case with a single file that contains 20K lines, and I need another file with 1 million lines: the content of the first file repeated 50 times.
Solution, from Stack Overflow:
$ perl -0777pe '$_=$_ x 50' input_file.txt > output_file.txt
Arguments: -0777 makes perl slurp the whole file into $_ at once, -p prints $_ after the expression runs, -e supplies the program, and $_ x 50 repeats the slurped content 50 times.
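A plain-shell equivalent, if you would rather avoid perl (the file names are just examples):

```shell
# repeat the file 50 times by cat-ing it in a loop
for i in $(seq 50); do cat input_file.txt; done > output_file.txt
```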
Sometimes you need to paste yanked text while typing a command, for example substituting the yanked text with some other words.
Yanked text is stored in the 0 and " registers.
In command-line mode you can paste a register with ctrl-R [register];
in our case this would be ctrl-R 0,
so the full command would be, for example, :%s/[press ctrl-R then 0]/replacement/gc
There are a couple of ways to rename multiple files.
From http://unix.stackexchange.com/questions/1136/batch-renaming-files
Say we have the files
image0001.png
image0002.png
image0003.png
...
and we would like to rename them to
0001.png
0002.png
0003.png
...
This works on Linux and Mac without installing anything.
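A plain POSIX shell loop that does the rename above (a sketch; it assumes every file really starts with the image prefix):

```shell
# strip the leading "image" prefix from each file name
for f in image*.png; do
  mv "$f" "${f#image}"
done
```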
This is my favorite way since I'm using zsh and oh-my-zsh: zmv. Check it out, it's awesome:
$ autoload -U zmv
$ zmv 'image(*).png' '$1.png'
or let zsh automatically define $1, $2 etc with the -w flag:
$ zmv -w 'image*.png' '$1.png'
For more complex operations, it's sometimes easier to rename the files in your editor.
There is qmv, from renameutils:
$ qmv *.png
It opens your editor with 2 columns; edit the right column to rename the files.
To specify the compression level when using tar, from http://superuser.com/questions/305128/how-to-specify-level-of-compression-when-using-tar-zcvf
GZIP=-9 tar cvzf file.tar.gz /path/to/directory
or
tar cvf - /path/to/file0 /path/to/file1 | gzip -9 - > files.tar.gz
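A quick way to see the effect of the level (the paths here are throwaway examples):

```shell
# build a highly repetitive file, then compare low vs high compression
mkdir -p /tmp/tardemo
seq 10000 | sed 's/.*/hello/' > /tmp/tardemo/data.txt
tar cf - -C /tmp tardemo | gzip -1 > /tmp/fast.tar.gz
tar cf - -C /tmp tardemo | gzip -9 > /tmp/small.tar.gz
wc -c /tmp/fast.tar.gz /tmp/small.tar.gz
```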
To clear up swap space (move the data back into memory) we can do
# swapoff -a && swapon -a
WARNING: make sure there is enough free memory to hold everything currently in swap, or the system will start killing processes.
Because this is quite a slow process, it's a good idea to run it inside a screen session.
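Before doing that, it's worth checking how much swap is actually in use (Linux tools shown; swapoff itself needs root):

```shell
# show memory and swap usage in human-readable form
free -h
# list active swap devices and how full they are
swapon --show
```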
Sometimes we have a file that contains a JSON object per line, for example a log file in JSON format:
{"foo": 1, "bar": 2}
{"foo": 3, "bar": 4}
If we want to read this in Python pandas, we need to convert it to
[
{"foo": 1, "bar": 2},
{"foo": 3, "bar": 4}
]
An easy way to do this is with jq --slurp:
$ cat file.json | jq --slurp . > one_array.json
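A self-contained run of the same idea (the temp path is arbitrary; -c prints the array compactly):

```shell
# build a small JSON-lines file, then slurp it into a single array
printf '{"foo": 1, "bar": 2}\n{"foo": 3, "bar": 4}\n' > /tmp/file.json
jq --slurp -c . /tmp/file.json
# → [{"foo":1,"bar":2},{"foo":3,"bar":4}]
```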
Then you can read it in Python pandas (notebook) like this:
import pandas as pd
df = pd.read_json('one_array.json')