Actually, it seems that when I have two requests in flight, both jobs crash, throwing the following errors:

phoenixdb.errors.InternalError: ("RuntimeException: org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:\nThu Nov 23 12:02:31 UTC 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60108: row '2017-07-01-BM8955-1498884512.0-386-1509295361.5' on table 'BGALLSALES' at region=BGALLSALES,2017-06-24-BM8912-1498306109.0-58-1509294220.81,1510055260213.c2fbd3a07c560958f5944cda3c29c04f., hostname=bigdata-datanode,16020,1511427352525, seqNum=330973\n -> PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:\nThu Nov 23 12:02:31 UTC 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60108: row '2017-07-01-BM8955-1498884512.0-386-1509295361.5' on table 'BGALLSALES' at region=BGALLSALES,2017-06-24-BM8912-1498306109.0-58-1509294220.81,1510055260213.c2fbd3a07c560958f5944cda3c29c04f., hostname=bigdata-datanode,16020,1511427352525, seqNum=330973\n -> ExecutionException: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:\nThu Nov 23 12:02:31 UTC 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60108: row '2017-07-01-BM8955-1498884512.0-386-1509295361.5' on table 'BGALLSALES' at region=BGALLSALES,2017-06-24-BM8912-1498306109.0-58-1509294220.81,1510055260213.c2fbd3a07c560958f5944cda3c29c04f., hostname=bigdata-datanode,16020,1511427352525, seqNum=330973\n -> PhoenixIOException: Failed after attempts=36, exceptions:\nThu Nov 23 12:02:31 UTC 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60108: row '2017-07-01-BM8955-1498884512.0-386-1509295361.5' on table 'BGALLSALES' at region=BGALLSALES,2017-06-24-BM8912-1498306109.0-58-1509294220.81,1510055260213.c2fbd3a07c560958f5944cda3c29c04f., hostname=bigdata-datanode,16020,1511427352525, seqNum=330973\n -> PhoenixIOException: Failed after attempts=36, exceptions:\nThu Nov 23 12:02:31 UTC 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60108: row '2017-07-01-BM8955-1498884512.0-386-1509295361.5' on table 'BGALLSALES' at region=BGALLSALES,2017-06-24-BM8912-1498306109.0-58-1509294220.81,1510055260213.c2fbd3a07c560958f5944cda3c29c04f., hostname=bigdata-datanode,16020,1511427352525, seqNum=330973\n -> RetriesExhaustedException: Failed after attempts=36, exceptions:\nThu Nov 23 12:02:31 UTC 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60108: row '2017-07-01-BM8955-1498884512.0-386-1509295361.5' on table 'BGALLSALES' at region=BGALLSALES,2017-06-24-BM8912-1498306109.0-58-1509294220.81,1510055260213.c2fbd3a07c560958f5944cda3c29c04f., hostname=bigdata-datanode,16020,1511427352525, seqNum=330973\n -> SocketTimeoutException: callTimeout=60000, callDuration=60108: row '2017-07-01-BM8955-1498884512.0-386-1509295361.5' on table 'BGALLSALES' at region=BGALLSALES,2017-06-24-BM8912-1498306109.0-58-1509294220.81,1510055260213.c2fbd3a07c560958f5944cda3c29c04f., hostname=bigdata-datanode,16020,1511427352525, seqNum=330973 -> IOException: Call to bigdata-datanode/10.10.10.166:16020 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=48582, waitTime=60004, operationTimeout=60000 expired. -> CallTimeoutException: Call id=48582, waitTime=60004, operationTimeout=60000 expired.", None, None, None)
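
In the Python client this surfaces as phoenixdb.errors.InternalError; the callTimeout=60000 looks like the default 60 s HBase RPC timeout. For now I just catch the error so the job fails cleanly instead of crashing, roughly like this (a minimal sketch, assuming the phoenixdb package and a placeholder PQS URL):

    import logging

    import phoenixdb
    import phoenixdb.errors

    PQS_URL = 'http://localhost:8765/'  # placeholder URL

    def run_query(sql):
        # One connection per request; autocommit as in the real jobs.
        conn = phoenixdb.connect(PQS_URL, autocommit=True)
        try:
            cursor = conn.cursor()
            cursor.execute(sql)
            return cursor.fetchall()
        except phoenixdb.errors.InternalError as e:
            # Log the Phoenix error instead of letting the job die.
            logging.error("Phoenix query failed: %s", e)
            return None
        finally:
            conn.close()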


I've changed the HBase configuration like this:

      <property>
        <name>phoenix.query.timeoutMs</name>
        <value>1800000</value>
      </property>
      <property>
        <name>hbase.regionserver.lease.period</name>
        <value>1200000</value>
      </property>
      <property>
        <name>hbase.rpc.timeout</name>
        <value>1200000</value>
      </property>
      <property>
        <name>hbase.client.scanner.caching</name>
        <value>1000</value>
      </property>
      <property>
        <name>hbase.client.scanner.timeout.period</name>
        <value>1200000</value>
      </property>

but it didn't help. Any suggestion would be great. Mostly I use the Python client, but this happens when I use sqlline as well.
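
One way to check whether the new values are actually in effect: time how long the big query survives before the exception. The original error shows callDuration=60108 against callTimeout=60000, so if it still dies right around 60 s, the 60000 ms default is evidently still being used somewhere. A quick sketch with the Python client (same placeholder URL as above):

    import time

    import phoenixdb
    import phoenixdb.errors

    conn = phoenixdb.connect('http://localhost:8765/', autocommit=True)
    cursor = conn.cursor()

    start = time.time()
    try:
        cursor.execute("SELECT COL1, COL2, COL3, COL4, COL5 FROM SALESTABLE")
        rows = cursor.fetchall()
        print("ok: %d rows in %.1f s" % (len(rows), time.time() - start))
    except phoenixdb.errors.InternalError:
        # ~60 s here means the 60000 ms default is still winning.
        print("failed after %.1f s" % (time.time() - start))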

Thanks


On Thu, Nov 23, 2017 at 10:30 AM, Vaghawan Ojha <vaghawan781@gmail.com> wrote:
Hi, 

Thank you very much for reaching out.

- Phoenix version: 4.12.0-HBase-1.2
- HBase: 1.2.6
- Hadoop: 2.8.1


DEBUG from PQS: 

2017-11-23 04:27:46,157 WARN org.apache.hadoop.hbase.client.ScannerCallable: Ignore, probably already closed
org.apache.hadoop.hbase.UnknownScannerException: org.apache.hadoop.hbase.UnknownScannerException: Unknown scanner '14111'. This can happen due…
If the issue is due to reason (b), a possible fix would be increasing the value of 'hbase.client.scanner.timeout.period' configuration.
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2394)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
        at java.lang.Thread.run(Thread.java:748)
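
Reason (b) would mean the scanner lease expires because the client waits too long between check-ins. One pattern I can try from the Python client is streaming the results in small batches instead of one big fetchall(), along these lines (a sketch: placeholder URL and batch size, and whether it actually keeps the scanner alive depends on how PQS batches frames):

    import phoenixdb

    conn = phoenixdb.connect('http://localhost:8765/', autocommit=True)  # placeholder URL
    cursor = conn.cursor()
    cursor.execute("SELECT COL1, COL2, COL3, COL4, COL5 FROM SALESTABLE")

    BATCH = 1000  # placeholder batch size
    while True:
        rows = cursor.fetchmany(BATCH)
        if not rows:
            break
        for row in rows:
            pass  # process each row here instead of buffering everything

    conn.close()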

The exception thrown in the Python client is attached.

The two queries sent were something like this:

(SELECT COL1, COL2, COL3, COL4, COL5 FROM SALESTABLE WHERE PURCHSHED_DATE>=TO_TIMESTAMP(d) AND PURCHSHED_DATE<=TO_TIMESTAMP(d)), run against real data of about 1 million records.
(SELECT * FROM USERS), run against dummy data of 10 records.
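
The two requests come from two jobs hitting PQS at the same time, roughly like this (a minimal sketch: placeholder URL, and d stands in for the real timestamp values):

    import threading

    import phoenixdb

    PQS_URL = 'http://localhost:8765/'  # placeholder

    QUERIES = [
        # d stands in for the real timestamp literals.
        "SELECT COL1, COL2, COL3, COL4, COL5 FROM SALESTABLE "
        "WHERE PURCHSHED_DATE >= TO_TIMESTAMP(d) AND PURCHSHED_DATE <= TO_TIMESTAMP(d)",
        "SELECT * FROM USERS",
    ]

    def run(sql):
        # Each job uses its own connection.
        conn = phoenixdb.connect(PQS_URL, autocommit=True)
        cursor = conn.cursor()
        cursor.execute(sql)
        print(len(cursor.fetchall()), "rows")
        conn.close()

    threads = [threading.Thread(target=run, args=(q,)) for q in QUERIES]
    for t in threads:
        t.start()
    for t in threads:
        t.join()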

I have one NameNode and one DataNode.

Thank you for the help.

On Thu, Nov 23, 2017 at 4:42 AM, Josh Elser <elserj@apache.org> wrote:
There is no such configuration which would preclude your ability to issue two queries concurrently.

Some relevant information you should share if you'd like more help:

* Versions of HDFS, HBase, and Phoenix
* What your "request" is
* Thread-dump (e.g. jstack) of your client and the PQS
* DEBUG logs from PQS


On 11/21/17 12:30 AM, Vaghawan Ojha wrote:
Hi,

I have one NameNode server and one DataNode server; the Phoenix Query Server is running on both.

For a single request, everything is fine, but when I do multiple requests simultaneously, the query server times out. The table contains only 2-3 GB of data.

I think I'm missing some configuration to manage this; if somebody could help me out, it would be great.

Thank you
Vaghawan