
frb's profile - activity

2017-03-01 17:40:37 +0200 answered a question About computing cosmos

These questions are being answered by private email or directly on SOF (Stack Overflow).

2017-02-22 08:14:28 +0200 answered a question Cygnus & hive

Answered on stackoverflow.

2017-02-02 08:22:38 +0200 received badge  Famous Question (source)
2017-01-31 18:52:23 +0200 answered a question Grouping rules related

Already answered on stackoverflow.

2017-01-31 09:27:02 +0200 received badge  Enthusiast
2017-01-27 07:47:44 +0200 answered a question Many questions unanswered from other means (private emails and Stackoverflow)

They are being answered directly on stackoverflow.

2016-10-24 11:04:50 +0200 answered a question NGSI and CartoDB integration

AFAIK, the first option ("orion2cartodb") is preliminary work done for a demo, and it is not part of Cygnus.

Regarding Cygnus, first of all it must be said that an important refactor was done in Cygnus 1.0.0: the code was split into cygnus-common, a suite of common classes and utilities for all kinds of Cygnus agents, and cygnus-ngsi, containing the NGSI-specific part of Cygnus, known as the "NGSI Cygnus agent". Other Cygnus agents can be added to Cygnus, such as the "Twitter Cygnus agent" developed by Universidad Politécnica de Valencia; this Twitter agent uses cygnus-common as well.

That being said, since Cygnus 1.0.0 a sink for CartoDB is included in the cygnus-ngsi agent. This sink works like any other sink, i.e. it can be used in any cygnus-ngsi agent following the Apache Flume architecture. You can check its documentation here.
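
As a minimal sketch of how it is wired, following the usual Apache Flume agent configuration pattern (the sink class name and the properties below are illustrative assumptions; please check the official cygnus-ngsi documentation for the exact names and defaults):

# Illustrative cygnus-ngsi agent snippet; property names are assumptions
cygnus-ngsi.sources = http-source
cygnus-ngsi.channels = cartodb-channel
cygnus-ngsi.sinks = cartodb-sink

cygnus-ngsi.sinks.cartodb-sink.type = com.telefonica.iot.cygnus.sinks.NGSICartoDBSink
cygnus-ngsi.sinks.cartodb-sink.channel = cartodb-channel
# file holding the CartoDB endpoint and API keys (assumed property name)
cygnus-ngsi.sinks.cartodb-sink.keys_conf_file = /usr/cygnus/conf/cartodb_keys.conf

cygnus-ngsi.channels.cartodb-channel.type = memory
cygnus-ngsi.channels.cartodb-channel.capacity = 1000
cygnus-ngsi.channels.cartodb-channel.transactionCapacity = 100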

And within cygnus-common you can find the CartoDB backend classes, a set of utility classes available to all Cygnus agents; the CartoDB sink within cygnus-ngsi uses them.

2016-06-29 11:11:15 +0200 answered a question Requirements to implement several enablers

There is no need to install Cosmos. Cosmos is the code name of the Hadoop deployment available at FIWARE Lab, which can be used by FIWARE users. It is a shared cluster using the standard Hadoop stack plus custom plugins that allow multitenancy. Please have a look at http://catalogue.fiware.org/enablers/...

If in the end you don't want to use the shared instance and prefer to have your own, simply install Hadoop (in that case, there is no need to install any custom plugin from FIWARE).
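
For instance, a minimal single-node (pseudo-distributed) installation based on a stock Apache Hadoop release could look like this (the version number and paths are just an example):

# Download and unpack a standard Apache Hadoop release (version is an example)
$ wget http://archive.apache.org/dist/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz
$ tar xzf hadoop-2.7.2.tar.gz && cd hadoop-2.7.2

# In etc/hadoop/core-site.xml point fs.defaultFS to the local machine, e.g.
#   <property><name>fs.defaultFS</name><value>hdfs://localhost:9000</value></property>

# Format the namenode and start HDFS
$ bin/hdfs namenode -format
$ sbin/start-dfs.sh

# Quick check
$ bin/hdfs dfs -mkdir -p /user/$(whoami)
$ bin/hdfs dfs -ls /user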

2016-05-23 15:40:51 +0200 commented question Create a statistics

Hi, can you elaborate a bit more on the kind of data and statistics you want to achieve?

2016-05-05 08:31:17 +0200 answered a question No grouping rules have been read Cygnus

I think this is the same question as the following one on SOF: http://stackoverflow.com/questions/36...

2016-04-15 14:26:36 +0200 answered a question Is Cosmos httpFS working after maintenance?

Hi, it should be working now:

$ cat somedata.txt 
hi there
this is some random data
bla bla bla
yeah

bye!
$ curl -v -X PUT -T somedata.txt "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/frb/somedata.txt?op=create&user.name=frb" -H "X-auth-Token: MY_OAUTH2_TOKEN"
*   Trying 130.206.80.46...
* Connected to cosmos.lab.fiware.org (130.206.80.46) port 14000 (#0)
> PUT /webhdfs/v1/user/frb/somedata.txt?op=create&user.name=frb HTTP/1.1
> Host: cosmos.lab.fiware.org:14000
> User-Agent: curl/7.43.0
> Accept: */*
> X-auth-Token: MY_OAUTH2_TOKEN
> Content-Length: 57
> Expect: 100-continue
> 
< HTTP/1.1 100 Continue
* We are completely uploaded and fine
< HTTP/1.1 307 Temporary Redirect
< X-Powered-By: Express
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Methods: HEAD, POST, GET, OPTIONS, DELETE
< Access-Control-Allow-Headers: origin, content-type, X-Auth-Token, Tenant-ID, Authorization
< server: Apache-Coyote/1.1
< set-cookie: hadoop.auth="u=frb&p=frb&t=simple&e=1460755949222&s=kN3JIHw4eFReh1FCNry1nCQq1pc="; Version=1; Path=/
< location: http://cosmos.lab.fi-ware.org:14000/webhdfs/v1/user/frb/somedata.txt?op=CREATE&user.name=frb&data=true
< Content-Type: application/json; charset=utf-8
< content-length: 0
< date: Fri, 15 Apr 2016 11:32:28 GMT
< connection: close
< 
* Closing connection 0
$ curl -v -X PUT -T somedata.txt "http://cosmos.lab.fi-ware.org:14000/webhdfs/v1/user/frb/somedata.txt?op=CREATE&user.name=frb&data=true" -H "X-auth-Token: MY_OAUTH2_TOKEN" -H "Content-Type: application/octet-stream"
*   Trying 130.206.80.46...
* Connected to cosmos.lab.fi-ware.org (130.206.80.46) port 14000 (#0)
> PUT /webhdfs/v1/user/frb/somedata.txt?op=CREATE&user.name=frb&data=true HTTP/1.1
> Host: cosmos.lab.fi-ware.org:14000
> User-Agent: curl/7.43.0
> Accept: */*
> X-auth-Token: MY_OAUTH2_TOKEN
> Content-Type: application/octet-stream
> Content-Length: 57
> Expect: 100-continue
> 
< HTTP/1.1 100 Continue
* We are completely uploaded and fine
< HTTP/1.1 201 Created
< X-Powered-By: Express
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Methods: HEAD, POST, GET, OPTIONS, DELETE
< Access-Control-Allow-Headers: origin, content-type, X-Auth-Token, Tenant-ID, Authorization
< server: Apache-Coyote/1.1
< set-cookie: hadoop.auth="u=frb&p=frb&t=simple&e=1460756200287&s=xzsKIuBvrTcrTTYkgafTYI8qIDE="; Version=1; Path=/
< Content-Type: application/json; charset=utf-8
< content-length: 0
< date: Fri, 15 Apr 2016 11:36:40 GMT
< connection: close
< 
* Closing connection 0
$ curl -X GET "http://cosmos.lab.fi-ware.org:14000/webhdfs/v1/user/frb/somedata.txt?op=open&user.name=frb" -H "X-auth-Token: MY_OAUTH2_TOKEN"
hi there
this is some random data
bla bla bla
yeah

bye!
2016-04-04 14:29:36 +0200 answered a question Error Hive

Despite the error above, the Hive CLI should prompt you for a query with "hive>". Please, have a look at the end of my execution:

$ hive
log4j:ERROR Could not instantiate class [org.apache.hadoop.hive.shims.HiveEventCounter].
java.lang.RuntimeException: Could not load shims in class org.apache.hadoop.log.metrics.EventCounter
...
log4j:ERROR Could not instantiate appender named "EventCounter".

Logging initialized using configuration in jar:file:/usr/local/apache-hive-0.13.0-bin/lib/hive-common-0.13.0.jar!/hive-log4j.properties
hive>

Such an error is due to a version mismatch when starting the Hive component, but it is nothing important.

2016-03-29 10:45:25 +0200 commented question Error Hive

Are you using the Hive CLI inside the cluster?

2016-03-29 10:01:39 +0200 answered a question How can I know if Cygnus is connected to Cosmos correctly?

You will have to use WebHDFS in order to browse your HDFS user space. A detailed reference for WebHDFS can be found at https://hadoop.apache.org/docs/curren....

For instance, by using my HDFS user "frb" I can do:

$ curl -X GET "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/frb?op=liststatus&user.name=frb" -H "X-Auth-Token: <MY_TOKEN>" | python -m json.tool { "FileStatuses": { "FileStatus": [ { "accessTime": 0, "blockSize": 0, "group": "frb", "length": 0, "modificationTime": 1455191589436, "owner": "frb", "pathSuffix": "-p", "permission": "755", "replication": 0, "type": "DIRECTORY" }, { "accessTime": 0, "blockSize": 0, "group": "frb", "length": 0, "modificationTime": 1456152801883, "owner": "frb", "pathSuffix": "FRB", "permission": "740", "replication": 0, "type": "DIRECTORY" },...

Cygnus stores the data as:

hdfs:///user/<myuser>/<fiware-service>/<fiware-service-path>/<entityId>_<entityType>/<entityId>_<entityType>.txt
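
For instance, assuming the FIWARE service "myservice", the service path "myservicepath" and an entity "Room1" of type "Room" (just example names), the persisted file could be read back through WebHDFS as in:

$ curl -X GET "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/frb/myservice/myservicepath/Room1_Room/Room1_Room.txt?op=open&user.name=frb" -H "X-Auth-Token: <MY_TOKEN>"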

You can check the Cygnus logs as well in order to know if the data has been successfully persisted. Try finding logs like:

time=2016-03-29T09:55:50.779CEST | lvl=INFO | trans=1459238120-153-0000000000 | svc=default | subsvc=/ | function=persistAggregation | comp=Cygnus | msg=com.telefonica.iot.cygnus.sinks.OrionHDFSSink[954] : [hdfs-sink] Persisting data at OrionHDFSSink. HDFS file (myservice/myservicepath/entityId_entityType/entityId_entityType.txt), Data ({"recvTime":"2016-03-29T07:55:24.441Z","fiwareServicePath":"myservicepath","entityId":"xxx","entityType":"xxx", "xxx":"111", "xxx_md":[]})
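
A quick way of spotting those traces, assuming the default log file location of the RPM installation (adapt the path to your setup), is:

$ grep "Persisting data at OrionHDFSSink" /var/log/cygnus/cygnus.log
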
2016-03-29 09:49:34 +0200 answered a question How can I get space on new Cosmos Global Instance HDFS.

Hi, you are right. Since the new cluster is in beta status, you must contact me in order to create an account. Please send an email to francisco.romerobueno@telefonica.com

The only thing I need is the result of this command:

$ curl -X GET "https://account.lab.fiware.org/user?access_token=<YOUR_TOKEN>"

For instance, if using my token I get:

{"organizations": [], "displayName": "frb", "roles": [{"name": "provider", "id": "106"}], "app_id": “9556cc76154361b3b43d7b31f0600982", "email": "frb@tid.es", "id": "frb”}

The interesting part is the "id" field.

Never show me your token!

If you don’t have a token, you can get one by querying the old cluster this way:

$ curl -k -X POST "https://cosmos.lab.fiware.org:13000/cosmos-auth/v1/token" -H "Content-Type: application/x-www-form-urlencoded" -d "grant_type=password&username=<YOUR_USER>&password=<YOUR_PASSWORD>"

Where user and password are the email and password you used when you registered in FIWARE Lab. You should get something like:

{"access_token": "qj33UcnW6leYAqr3X004DWLqaja0Ix", "token_type": "Bearer", "expires_in": 3600, "refresh_token": "V2Wlk7aFCnElKlW9BOmRzGhBtqgR2z"}

Never show me the token nor the passwords!

Once you pass me your ID I can create an account in the new storage cluster, where you can start uploading data through WebHDFS.

2016-03-03 14:39:27 +0200 received badge  Notable Question (source)
2016-02-29 08:11:08 +0200 received badge  Popular Question (source)
2016-02-26 08:40:20 +0200 asked a question How to use all the disk space in my VM

I've created a VM supposedly having 20 GB of disk. Nevertheless, a df -h command shows only 8 GB. Where are the other 12 GB? I've heard something about mounting a volume. Please, guide me. Thanks.

2016-02-01 00:29:17 +0200 commented answer Cygnus to CKAN data store

Regarding the row mode, could you post a full log? Please, use a service like pastebin or similar.

2016-02-01 00:28:07 +0200 commented answer Cygnus to CKAN data store

I've edited the link, I think it was not working. Anyway, when working in column mode, all the CKAN data structure (organization, dataset, resource, datastore and viewer) must be provisioned in advance. Only when working in row mode is everything automatically created by Cygnus.

2016-02-01 00:26:08 +0200 received badge  Editor (source)
2016-01-28 09:02:13 +0200 answered a question Cygnus to CKAN data store

I've seen you are using attr_persistence=column

Did you provision in advance the organization, package, resource and datastore structure? Please, have a look at this piece of documentation.
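
As an illustrative sketch of how such provisioning can be done through the CKAN action API (the host, API key and names below are placeholders; resource and datastore are created analogously with resource_create and datastore_create):

$ curl -X POST "http://<ckan_host>/api/3/action/organization_create" -H "Authorization: <CKAN_API_KEY>" -H "Content-Type: application/json" -d '{"name": "myservice"}'
$ curl -X POST "http://<ckan_host>/api/3/action/package_create" -H "Authorization: <CKAN_API_KEY>" -H "Content-Type: application/json" -d '{"name": "myservice_myservicepath", "owner_org": "myservice"}'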

2016-01-28 02:49:37 +0200 commented question Cygnus to CKAN data store

I will need to know your Cygnus configuration. Thanks!

2015-12-17 09:46:39 +0200 answered a question Persistence error in Cygnus

After confirming the issue with a Cygnus deployment of our own, we have concluded this is an issue in version 0.11.0. It has been described at https://github.com/telefonicaid/fiwar....

My recommendation is to roll back to Cygnus 0.10.0 while we go deeper into the details. In order to do that, start by effectively removing all Cygnus 0.11.0 stuff; the following command will do that:

$ sudo rpm -e -vv --allmatches --nodeps --noscripts --notriggers cygnus

Then, simply install the 0.10.0 version:

$ sudo yum install cygnus-0.10.0
2015-12-17 09:45:43 +0200 commented question Persistence error in Cygnus

Start by effectively removing all Cygnus 0.11.0 stuff; the following command will do that: $ sudo rpm -e -vv --allmatches --nodeps --noscripts --notriggers cygnus Then, simply install the 0.10.0 version: $ sudo yum install cygnus-0.10.0

2015-12-17 05:16:41 +0200 commented question Persistence error in Cygnus

After confirming the issue with a Cygnus deployment of our own, we have concluded this is an issue in version 0.11.0. It has been described at https://github.com/telefonicaid/fiware-cygnus/issues/680. My recommendation is to roll back to Cygnus 0.10.0 while we go deeper into the details.

2015-12-16 02:02:51 +0200 commented question Persistence error in Cygnus

Which Cygnus version are you running? What is the OS of the machine running Cygnus? By the way, this kind of technical question is better asked at stackoverflow.com (fiware-cygnus tag), just for next time ;)

2015-12-10 08:48:01 +0200 answered a question ClassNotFoundException in shared hadoop, when trying to create a directory

This issue has been fixed. The reason was that the shared cluster had automatically entered safe mode.

2015-10-21 05:33:19 +0200 commented answer cosmos node is in safe mode, cannot execute query on hive

Yes, this kind of service availability problem is better emailed rather than posted as a community question. Or, if posted, an additional email is always useful for us in order to react ASAP.

2015-10-21 04:21:35 +0200 answered a question cosmos node is in safe mode, cannot execute query on hive

As the user has probably noticed, this was solved some time ago. It is not usual, but from time to time the cluster automatically enters safe mode in certain scenarios in order to preserve the integrity of the data.
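
For reference, on a Hadoop cluster you administer yourself, the safe mode status can be checked (and left, if appropriate) with the standard HDFS admin commands; on the shared FIWARE Lab cluster this is handled by the administrators:

$ hdfs dfsadmin -safemode get
Safe mode is ON
$ hdfs dfsadmin -safemode leave
Safe mode is OFF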

2015-10-01 02:00:00 +0200 answered a question cosmos: password reset

This is a well-known issue and it should be fixed in the next release of the portal. The reason is that the first attempt creates the user but does not assign a password to it (because the password contains symbols and fails). Then, subsequent attempts check whether the user exists or not (it exists, but no password is assigned, as said before). This can be fixed by emailing me (francisco.romerobueno@telefonica.com) your IdM-registered email / cosmos user; I'll set the password manually.

2015-09-30 02:58:19 +0200 received badge  Supporter (source)
2015-09-14 04:55:33 +0200 answered a question Cosmos : Error accessing Hive
2015-08-25 13:38:58 +0200 received badge  Teacher (source)
2015-08-20 12:39:21 +0200 answered a question Cosmos database privacy

This was a temporary malfunction. It should be fixed now.