Watch out for the MKMapView

Found a location services problem in one of the apps I’ve built for my client.

The problem: even after calling [locationManager stopUpdatingLocation], the location services indicator still shows the app using location services!

Debugging around, I found out it might be caused by the MKMapView in the details view. That details view had already been popped off the navigation stack, but somehow the MKMapView was still sitting in memory, sucking up the location service!

Debugging this with Instruments proves that even after the details view is dismissed, the process list still shows MapKit and the geo services.

Great! Now to try the cleanup.

Tried self.mapView.delegate = nil; and self.mapView = nil; in viewWillDisappear; that didn’t help.

Finally, the trick is to call [self.mapView removeFromSuperview];

Some other posts pointed out that the good practice is to create the MKMapView on demand in code instead of dragging it into the view in Interface Builder. Same solution.

Here is the proper code to clean up the map view:

[self.mapView removeFromSuperview]; // very important: this is what completely cleans up the memory used by the location service
self.mapView.delegate = nil;
self.mapView = nil;
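
For context, here is a minimal sketch of where this cleanup could live, assuming the view controller owns the map view as a mapView property:

- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    // Removing the map view from the view hierarchy is what actually
    // releases its hold on the location service.
    [self.mapView removeFromSuperview];
    self.mapView.delegate = nil;
    self.mapView = nil;
}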

Why do we need $scope when ‘this’ just works?

Update: It turns out that the ‘as’ syntax is the new style since v1.1.5, and it is supposed to take over from the old $scope syntax. The nice outcome is that, unlike with $scope, you can never leave a field or method name in the view without a scope prefix. Way to go, ‘as’ as ‘this’! It also shows that AngularJS changes so fast that many training materials quickly become obsolete, even when they are only half a year old.

Learning AngularJS, I still can’t feel the need for $scope.

The training video on Code School doesn’t cover the $scope concept; it simply uses this.

When I came across the code example in the controllers section of the guide, I rewrote the $scope-style example in the this style, and it still works:


(function(angular) {
  'use strict';

  var myApp = angular.module('spicyApp1', []);

  myApp.controller('SpicyController', ['$scope', function($scope) {
    $scope.spice = 'very';

    $scope.chiliSpicy = function() {
      $scope.spice = 'chili';
    };

    $scope.jalapenoSpicy = function() {
      $scope.spice = 'jalapeño';
    };
  }]);

  myApp.controller('Spicy2Controller', function() {
    this.spice = 'very';

    this.chiliSpicy = function() {
      this.spice = 'chili';
    };

    this.jalapenoSpicy = function() {
      this.spice = 'jalapeño';
    };
  });

})(window.angular);

<div ng-controller="SpicyController">
 <button ng-click="chiliSpicy()">Chili</button>
 <button ng-click="jalapenoSpicy()">Jalapeño</button>
 <p>The food is {{spice}} spicy!</p>
</div>

<div ng-controller="Spicy2Controller as spicy2">
 <button ng-click="spicy2.chiliSpicy()">Chili</button>
 <button ng-click="spicy2.jalapenoSpicy()">Jalapeño</button>
 <p>The food is {{spicy2.spice}} spicy!</p>
</div>
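
For intuition, the ‘as’ syntax essentially publishes the controller instance on the scope under the given name. A rough sketch of the equivalence (not Angular’s actual internals):

// With plain ng-controller="Spicy3Controller", this manual assignment gives
// the view the same spicy3.spice binding that
// ng-controller="Spicy3Controller as spicy3" would create automatically.
myApp.controller('Spicy3Controller', ['$scope', function($scope) {
  $scope.spicy3 = this;
  this.spice = 'very';
}]);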

I remember a post on Stack Overflow talking about the difference between $scope and this, something about page loading: basically that a function defined on $scope won’t trigger on page load. So far I haven’t found anything to support this.

Will revisit this post after I understand more.

Upload files to S3 directly (from web forms)


Note:

  • According to the doc, the policy field is optional in POST forms if the bucket is publicly writable, which is not recommended. I couldn’t get that working anyway; I tried setting the ACL, the policy, and CORS, and none of them worked. It’s not good practice anyhow, and moving on to make the policy work in the POST form is actually not that hard.
  • The policy needs to be base64-encoded; the policy itself is a JSON string. Note: this is the policy that lives in the POST form, on the web client only, not anywhere in the bucket settings. Don’t be confused by the same term used in bucket permission settings.
  • The signature needs to be base64-encoded too, after being HMAC-SHA1-signed with your account secret, which should be paired with the AWSAccessKeyId field in the POST form (see the form sketch after this list).
  • The doc lists a few scripts in different languages showing how to sign and encode the policy/signature. You can actually do this manually using some online tools (a scripted version is sketched after the steps below):
    http://www.base64encode.org/ for base64 encoding
    http://www.freeformatter.com/hmac-generator.html#ad-output for HMAC-SHA1 signing
    https://conv.darkbyte.ru/ or http://home.paulschou.net/tools/xlate/ for base64-encoding the hex-string signature
    Note: the base64encode.org page only does regular ASCII string base64 encoding.
  • When pasting a base64 string into the POST form, watch out for an extra newline after the trailing = or ==; remove it, otherwise S3 will complain about an invalid signature.
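
Putting the pieces together, a minimal sketch of the static upload form (bucket name, key prefix, and credential values are placeholders; note that the file field must come last):

<form action="https://your-bucket-name.s3.amazonaws.com/" method="post" enctype="multipart/form-data">
  <input type="hidden" name="key" value="uploads/${filename}">
  <input type="hidden" name="AWSAccessKeyId" value="YOUR_ACCESS_KEY_ID">
  <input type="hidden" name="policy" value="BASE64_ENCODED_POLICY">
  <input type="hidden" name="signature" value="BASE64_ENCODED_SIGNATURE">
  <input type="hidden" name="success_action_redirect" value="http://urlecho.appspot.com/echo?debugMode=1">
  <input type="file" name="file">
  <input type="submit" value="Upload to S3">
</form>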


To manually get the policy and signature:

  1. Copy and paste the policy string into http://www.base64encode.org/ to get the base64-encoded policy string.
  2. Copy and paste the base64-encoded policy string into http://www.freeformatter.com/hmac-generator.html#ad-output, with your account secret as the secret key, to get the signature as a hex string.
  3. Copy and paste the signature hex string into https://conv.darkbyte.ru/ or http://home.paulschou.net/tools/xlate/ (into the HEX textarea) and click decode; the string shown in the Base64 textarea is the final base64-encoded signature string your web form needs.
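
If you’d rather script it, here is a minimal sketch of the same three steps in Python (the policy contents and secret are placeholders):

import base64
import hashlib
import hmac
import json

# Placeholder policy and secret; substitute your own values.
policy = {
    "expiration": "2015-12-31T12:00:00.000Z",
    "conditions": [
        {"bucket": "your-bucket-name"},
        ["starts-with", "$key", "uploads/"],
    ],
}
secret = "YOUR_AWS_SECRET_KEY"

# Step 1: base64-encode the JSON policy (the form's "policy" field).
policy_b64 = base64.b64encode(json.dumps(policy).encode("utf-8"))

# Steps 2 and 3: HMAC-SHA1 the encoded policy with your secret, then
# base64-encode the raw digest (the form's "signature" field).
signature_b64 = base64.b64encode(
    hmac.new(secret.encode("utf-8"), policy_b64, hashlib.sha1).digest()
)

print(policy_b64.decode())
print(signature_b64.decode())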

One little thing: how do we get the response?

By default, S3 doesn’t check for duplicate file names, so if you upload to an existing file name, the old object on S3 gets overwritten. (There is an option in the SDK to prevent this.)

Good practice is to generate a GUID file name before uploading.

To get some response back from the S3 upload, you can use the success_action_redirect field.

According to the doc:

success_action_redirect The URL address to which the user’s web browser will be redirected after the file is uploaded. This URL should point to a “Successful Upload” page on your web site, so you can inform your users that their files have been accepted. S3 will add bucket, key and etag parameters to this URL value to inform your web application of the location and hash value of the uploaded file.

How do we display/grab those parameters without creating a dynamic page (PHP) on a host?

I found a very useful service: http://ivanzuzak.info/urlecho/

Set the success_action_redirect value to http://urlecho.appspot.com/echo?debugMode=1

Note that debugMode is on, so you can see the returned values on the page.

A typical output I got looks like this:

Request received:
http://urlecho.appspot.com/echo?debugMode=1&bucket=your-bucket-name&key=uploads%2Fuploaded-file.pdf&etag=%2248b918c648abfa0267ffdc9975ea8bbf%22

Status code:
200

Headers:
Content-Length: 0
etag: "48b918c648abfa0267ffdc9975ea8bbf"
key: uploads/uploaded-file.pdf
Cache-Control: max-age=3600
bucket: your-bucket-name
Access-Control-Allow-Origin: *
Content-Type: text/html; charset=utf-8

Body:
None

Last thing: did I mention that this POST form page is a pure static page? You can simply upload it to S3 too, and it will just work.

Set up SSH-over-HTTPS access to a git server

If you are unlucky enough to be working behind a firewall while trying to connect to GitHub/Bitbucket over SSH, here are the SSH-via-HTTPS hosts you should redirect to:

GitHub:

ssh.github.com on port 443 instead of github.com on port 22

https://help.github.com/articles/using-ssh-over-the-https-port

BitBucket:

altssh.bitbucket.org on port 443 instead of bitbucket.org on port 22

https://confluence.atlassian.com/display/BITBUCKET/Use+the+SSH+protocol+with+Bitbucket

These entries are added to ~/.ssh/config. On a Windows machine that can be either under %HOMEDRIVE%%HOMEPATH% or C:\Users\your.name\.ssh; run ssh -vT git@github.com or ssh -vT git@bitbucket.org to find out.
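
The entries themselves look like this (matching the two help pages above):

Host github.com
  Hostname ssh.github.com
  Port 443

Host bitbucket.org
  Hostname altssh.bitbucket.org
  Port 443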

Before the redirect:

λ ssh -vT git@bitbucket.org
OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007
debug1: Reading configuration data /c/Users/fmao.AGLC/.ssh/config
debug1: Applying options for bitbucket.org
debug1: Connecting to bitbucket.org [131.103.20.168] port 22.
debug1: Connection established.
debug1: identity file /c/Users/fmao.AGLC/.ssh/identity type -1
debug1: identity file /c/Users/fmao.AGLC/.ssh/id_rsa type 1
debug1: identity file /c/Users/fmao.AGLC/.ssh/id_dsa type -1
(hangs here if port 22 is blocked)

After the redirect:

λ ssh -vT git@bitbucket.org
OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007
debug1: Reading configuration data /c/Users/myusername/.ssh/config
debug1: Applying options for bitbucket.org
debug1: Connecting to altssh.bitbucket.org [131.103.20.174] port 443.
debug1: Connection established.
debug1: identity file /c/Users/myusername/.ssh/identity type -1
debug1: identity file /c/Users/myusername/.ssh/id_rsa type 1
debug1: identity file /c/Users/myusername/.ssh/id_dsa type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
debug1: match: OpenSSH_5.3 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
...
debug1: Authentication succeeded (publickey).
debug1: channel 0: new [client-session]
debug1: Entering interactive session.
debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
logged in as maodd.

Custom Git Server

In case you want to connect to your own git server, where you have root access, edit /etc/ssh/sshd_config to add an additional listening port besides 22.

Add a new line containing ‘Port 443’ right under the ‘Port 22’ line in /etc/ssh/sshd_config.

Restart the SSH server: sudo /etc/init.d/ssh restart

Then try the same thing from your client machine.

Run ssh -p 443 yourgitserver.com to ensure 443 can get through your firewall.

Follow the same hostname setup in the .ssh/config file mentioned above.
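
The client-side entry is analogous (the server name is a placeholder):

Host yourgitserver.com
  Port 443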

ref: https://help.ubuntu.com/12.04/serverguide/openssh-server.html

Byebye dreamhost!

7 years ago, I set up a podcast system on DreamHost, based on their unlimited storage policy. That was a huge mistake!

Last week, I got an email from DreamHost saying my podcast episode files are considered ‘personal backup’, so they’re not allowed.

My deadline to move away from DreamHost is the end of February.

Here I come, AWS S3+Glacier! I could have picked you 7 years ago.

Byebye dreamhost!

Running a cron job with a custom gem lib on DreamHost

Problem:

Installed the parse-ruby-client gem on a DreamHost VPS.

The Ruby script runs OK in console mode (user logged in).

Crontab jobs don’t work; they complain that ‘parse-ruby-client’ is not found.

Reason:

The gem is installed in the user’s ~/.gems folder, which is not searchable by system processes like crontab.

Solution:

Add these lines to ~/.bash_profile:
export GEM_HOME="$HOME/.gems"
export GEM_PATH="/usr/ruby/ruby/gems/1.8:$GEM_HOME"

Add this to ~/.bashrc:
if [ -f ~/.bash_profile ]; then
  . ~/.bash_profile
fi

Those two should be good enough, but just in case, source it before your script:
. ~/.bashrc
/usr/bin/ruby $HOME/my_ruby_script.rb
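
So the crontab entry might end up looking like this (the schedule and script path are just examples):

# m h dom mon dow  command
0 * * * *  . $HOME/.bashrc; /usr/bin/ruby $HOME/my_ruby_script.rb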

Parse Push vs. Xtify Push

Both are free for 100K pushes per month.

Xtify provides analytics; Parse needs an upgrade to get this.

The Parse free tier only supports one push certificate, either dev or prod, so pick one only. It’s kind of a pain in the ass; I had to delete the dev cert to test ad hoc push.

On the other hand, you can never mix up dev and prod push messages.

Xtify doesn’t have this limit.

One weird thing on Parse is that the push certificate must have no password. Easier to manage, though.

One unique feature of Parse is client-side push, coded in Objective-C.

Both support RESTful service calls to send pushes.
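
For example, a push through Parse’s REST API is roughly one HTTP call (app ID, key, and message are placeholders):

curl -X POST \
  -H "X-Parse-Application-Id: YOUR_APP_ID" \
  -H "X-Parse-REST-API-Key: YOUR_REST_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "channels": ["global"], "data": { "alert": "Hello from REST!" } }' \
  https://api.parse.com/1/push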

Parse has a very nice push log, in table style.